The Full Wiki
Quantum: Quiz
Did you know ...
More interesting facts on Quantum
Question 1: An example of an entity that is quantized is the energy transfer of ________ of matter (called fermions) and of photons and other bosons.
Elementary particle, Standard Model, Particle physics, Quark
Question 2: This means that the magnitude can take on only certain discrete ________ values, rather than any value, at least within a range.
Number, Real number, Irrational number, Complex number
Question 3: As incorporated into the theory of ________, this is regarded by physicists as part of the fundamental framework for understanding and describing nature at the infinitesimal level, for the very practical reason that it works.
Schrödinger equation, Introduction to quantum mechanics, Quantum mechanics, Wave–particle duality
Question 4: In ________, a quantum (plural: quanta) is the minimum unit of any physical entity involved in an interaction.
Quantum mechanics, Physics, Particle physics, Universe
Question 5: A photon, for example, is a single quantum of light, and may thus be referred to as a "________".
Electron, Photon, Standard Model, Atom
Question 6: The energy of an electron bound to an ________ (at rest) is said to be quantized, which results in the stability of atoms, and of matter in general.
Question 7: The word comes from the ________ "quantus", for "how much." Behind this, one finds the fundamental notion that a physical property may be "quantized", referred to as "quantization".
Latin, Vulgar Latin, Old Latin, Roman Empire
Question 8: There is a related term of ________.
Azimuthal quantum number, Atom, Quantum number, Parity (physics)
The minimal realist interpretation
of quantum theory
Minimizing quantum strangeness
Do we really have to give up most of our common sense ideas about how the universe works because of quantum theory? The minimal realist interpretation aims to answer this question by trying to preserve as much of common sense as possible. It goes beyond the minimal interpretation of quantum theory, but only as far as this is necessary to preserve classical realism and common sense. To understand the reasoning which leads to the minimal realist interpretation, it seems useful to imagine a 19th century scientist appearing in our world, who learns about the mathematical apparatus of quantum theory (as well as that of relativity), learns the minimal interpretation, and tries to make sense of all this, but completely ignores all the other interpretations of quantum theory. So, he has everything he needs to do the physics. But he is not ready to give up, without serious evidence, any part of classical common sense, in particular classical ideas about realism and causality.
The interpretation assumes that there exists a configuration space \(q=(q^1,\ldots,q^N)\in Q\) and presupposes that the Schrödinger equation has the usual form: \[ i \hbar \frac{\partial}{\partial t} \psi(q,t) = \left(-\frac{\hbar^2}{2} \delta^{ij}\frac{\partial}{\partial q^i}\frac{\partial}{\partial q^j} + V(q)\right) \psi(q,t). \]
This is sufficient to handle relativistic field theories, in particular the standard model of particle physics.
The result is a realist interpretation where we have, as in classical Lagrange formalism, a continuous trajectory in the configuration space \(\mathbf{q(t)\in Q}\).
But we have only incomplete knowledge of this trajectory, and the wave function \(\mathbf{\psi(q,t)\in \mathcal{L}^2(Q,\mathbb{C})}\) defines our incomplete knowledge of the trajectory.
The knowledge encoded in the wave function is given by the Madelung variables, namely by the probability density \(\rho(q,t) = |\psi|^2\) and, where it is non-zero, also by the phase \(S(q,t) = \hbar \Im \ln \psi(q,t)\), which defines the potential for the average velocity of the probability flow: \[ v^i(q,t) = \delta^{ij} \frac{\partial}{\partial q^j} S(q,t).\]
For the probability density together with this velocity, a continuity equation follows from the Schrödinger equation: \[\partial_t \rho(q,t) + \frac{\partial}{\partial q^i}\left(\rho(q,t) v^i(q,t)\right) = 0\]
This continuity equation justifies the thesis that there exists a continuous trajectory in the configuration space.
The Schrödinger equation also gives a second equation for the phase function \(S(q)\), named the quantum Hamilton-Jacobi equation: \[-\partial_t S(q) = \frac{1}{2} \delta^{ij}\frac{\partial S(q)}{\partial q^i}\frac{\partial S(q)}{\partial q^j} + V(q) + Q(q).\]
It differs from the classical Hamilton-Jacobi equation by a quantum potential \[Q(q) = -\frac{\hbar^2}{2} \frac{\Delta \sqrt{\rho(q)}}{\sqrt{\rho(q)}}. \]
This potential automatically disappears in the classical limit \(\hbar\to 0\). This makes the classical limit completely unproblematic.
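To make the Madelung decomposition concrete, here is a minimal numerical sketch of my own (not part of the original text, assuming NumPy): it extracts \(\rho\), \(S\), the velocity field, and the quantum potential from a given one-dimensional wave function. The Gaussian packet and the units \(\hbar = 1\) are illustrative choices.

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1

# A sample 1D wave function: Gaussian packet with momentum k0 (illustrative choice)
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
k0, sigma = 1.5, 1.0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)

# Madelung variables: probability density and phase
rho = np.abs(psi) ** 2
S = hbar * np.unwrap(np.angle(psi))        # S = hbar * Im ln(psi), unwrapped

# Velocity field v = dS/dq and quantum potential Q = -(hbar^2/2) (sqrt(rho))'' / sqrt(rho)
v = np.gradient(S, dx)
sqrt_rho = np.sqrt(rho)
Q = -(hbar**2 / 2) * np.gradient(np.gradient(sqrt_rho, dx), dx) / sqrt_rho

print("mean velocity:", np.sum(rho * v) * dx)   # approximately hbar * k0 for this packet
```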
Relation to other interpretations
The major objections against realist interpretations
Given that the minimal realist interpretation reuses some well-known formulas from other realist interpretations, it is quite obvious what could be objected against it: the objections raised against those other realist interpretations may be tried here too.
Some of them (namely incompatibility with relativistic symmetry and the Pauli objection that it introduces an unnatural asymmetry between configuration and momentum variables) have to be answered, but this is, fortunately, not a big problem.
Other arguments (like the Wallstrom objection, the surrealistic trajectories argument, and objections against a wave function of the universe) are not valid objections against the minimal realist interpretation, because we have avoided making any claims, reasonable or not, which point toward a more fundamental, subquantum theory.
Incompatibility with relativistic physics
Once we give a physical interpretation to the "Bohmian velocity" \(\vec{v}(q)\), we have to face the objection that this violates fundamental Lorentz invariance. While the velocity itself is not observable, it appears in the Schrödinger equation, and is interpreted as a physical velocity - the average velocity of the configuration. It depends on the complete configuration, thus, on the complete state of the universe, without taking into account restrictions created by Einstein causality.
This objection is unavoidable for any realist interpretation, given the violation of Bell's inequality by quantum theory. This was the very point for Bell to prove his theorem - to show that the problem is unavoidable for every realist interpretation.
But is this a serious problem? Here it is important to distinguish two problems – compatibility with relativistic physics, and compatibility with relativistic metaphysics. Of course, relativists like to think that they don't accept any metaphysics, but this is only a nonsensical remnant of positivism - in reality, they like to reject theories for purely metaphysical reasons.
With Lorentz symmetry for observables there is no problem at all. The minimal realist interpretation is compatible with relativistic QFT, and for all the observables Lorentz invariance holds. But Lorentz invariance is not fundamental - the interpretation has a hidden preferred frame. The time coordinate \(t\) of the Schrödinger equation is the preferred absolute time, and it is a hidden variable. (Note that even in non-relativistic quantum theory \(t\) is a hidden variable, there exists no operator for time measurement which could measure \(t\), and every physical clock goes, with some nonzero probability, sometimes even backward in time.)
If we restrict ourselves to physics, to observable effects, then there is not much basis for arguments against interpretations which use a preferred frame.
Let's note here that it is not at all a serious objection against a hidden variable theory that among its hidden variables there is also a hidden preferred frame.
The Pauli objection
This objection is about the asymmetry between configuration and momentum variables in the interpretation. It was part of Pauli's rejection of Bohm's causal interpretation. In Pauli's words, "the artificial asymmetry introduced in the treatment of the two variables of a canonically conjugated pair characterizes this form of theory as artificial metaphysics" (Pauli 1953). The minimal realist interpretation also prefers the configuration space, thus, the objection could be applied here too.
But, first of all, there are important parts of physics which do not show this symmetry at all. So, in particular, the Hamilton operator has the form \(\hat{H} = \hat{p}^2 + V(\hat{q})\), thus, a quite different dependence on both variables.
Then, as I have shown in (Schmelzer 2009), even if we restrict ourselves to Hamilton operators which have the form \(\hat{H} = \hat{p}^2 + V(\hat{q})\), one can, for a given \(\hat{H}\), find different pairs of conjugate operators \(\hat{p},\hat{q}\) so that the same \(\hat{H}\) has the same form \(\hat{H} = \hat{p}^2 + V(\hat{q})\) but with different potentials \(V(\hat{q})\), and, as a consequence, with different physical predictions. So, the physical predictions of the theory depend on the choice of the operators \(\hat{p},\hat{q}\).
This is already sufficient to reject the Pauli objection.
The surrealistic trajectories argument against dBB theory
There have been objections against de Broglie-Bohm theory that the trajectories it predicts are quite surrealistic.
Given that we use the same dBB formula for the velocity, one could think that this argument may be applicable here too. But there is a difference, namely the dBB velocity is deterministic, while we interpret it only as an average velocity.
This makes a difference, as can be seen in the simplest case where the objection has been made: stable states, namely the discrete eigenstates of a one-dimensional theory. Since these are stable states, the probability distribution does not change in time. So, whatever the real velocities, if the flow is irrotational, the average velocity has to be zero. So, to object that these trajectories would be surrealistic makes no sense. The situation is different in dBB theory, where the velocity is deterministic. This deterministic velocity has to be zero too. That means the electron in an atom does not move at all, even though it is attracted by the nucleus. This is already much more surrealistic.
So, avoiding to postulate that the velocity is deterministic also avoids the surrealistic trajectories argument.
An earlier and already outdated version of the interpretation named "paleoclassical" is presented in the article Schmelzer, I. (2015) The paleoclassical interpretation of quantum theory, arXiv:1103.3506, published in Reimer, A. (ed.), Horizons in World Physics, Volume 284, Nova Science Publishers.
I am currently in the process of writing a paper about the minimal realist interpretation.
Seeing Past Darwin V: Life and Emergence
Natural genetic engineering in bacteria.
Bipedal goats and dogs.
Maze-solving slime mold, ferrets that see with their auditory cortex, fruit flies with inverted visual fields, and humans who "see" with their tongues.
These are some of the phenomena I've looked at in previous weeks in order to make the case that living beings possess a general ability to respond to challenges by means of appropriate compensation or adaptation.
I've been arguing that the existence of such a general power of "adaptivity" or "intelligent agency" cannot be explained by the theory of natural selection, but rather is the tacit presupposition that gives that theory its superficial plausibility.
But if natural selection cannot explain this power or capacity, what can? How is it possible for a physical system---the cell---to possess such a remarkable property? How can we best try to understand it scientifically?
Once we finally succeed in freeing our minds from the Darwinian style of thinking, our real work has only just begun.
In the weeks to come, I will be looking at some specific proposals for future research on the fundamental nature of life. Today, though, by way of preparation, I will sketch in some background---and make some important distinctions---that will clarify what those proposals can and cannot hope to accomplish.
* * *
There are so many things wrong with the Darwin-determined way most of us think about life, it is hard to know where to start.
Here is a short list of widely accepted ideas that dominate most discussions of life and evolution, either explicitly or implicitly:
The Darwinian View of Life
• There is no deep difference between living and nonliving matter; therefore, it is idle to seek "essential" properties or a "definition" of life.
• In any case, the most fundamental fact about a living thing is its ability to undergo natural selection.
• Therefore, evolution---and hence replication---are conceptually more fundamental than physiology (or "metabolism").
• Therefore, DNA is more important than the other main chemical components of the cell: proteins bound to water and/or lipid membranes.
• Therefore, genes are fundamental and the most important question to be asked about any functional trait is its evolutionary origin; everything else is just biochemical detail.
• In this way, the seemingly teleological and normative features of living things can be "reduced" to the effects of the genes, and so satisfactorily explained by the theory of random genetic mutation and natural selection.
I submit that every one of these ideas is fundamentally mistaken, and that progress towards understanding life depends upon affirming its contrary:
The Bioessentialist View of Life
• There is a fundamental difference in kind between living and nonliving systems; the main task of biology is to understand the distinctive nature of living matter.
• The most fundamental fact about a living thing is its ability, by doing work selectively, to maintain itself in existence as the kind of physical system that it is.
• Therefore, metabolism is conceptually more fundamental than evolution or replication; in fact, replication---and perhaps to some extent even evolution itself---are under metabolic control.
• Therefore, the active agents of the cell---proteins bound to water and/or lipid membranes---are more important than genes, which are just passive templates that the cell makes use of as needed to maintain itself in existence.
• Therefore, metabolism is fundamental and the most important thing to ask about any functional trait is not its evolutionary origin, but rather what contribution it makes to metabolism---that is, to maintaining the system of which it is a part in existence.
• Since the teleological and normative features of life cannot be reduced to the genes or adequately explained by the theory of natural selection, we must seek to explain them directly---as an irreducible (or "emergent") property of the living state of matter itself.
There is no space here to discuss all of these claims separately in the detail they deserve. But I hope it is clear how each of these two diametrically opposed ways of looking at life stands or falls as a package deal.
What I wish to do in the rest of this column is discuss two further points, which are liable to be either misunderstood or else overlooked entirely:
1. Adopting the bioessentialist view of life means rejecting the reductionist and mechanistic view of nature as fully reducible to its smallest constituent parts, in favor of an "emergentist" view of nature as a whole.
2. On the bioessentialist view, the main thing we must seek to explain is not the origin of some trait or even of life itself, but rather how life really works---and this means understanding the form of stability that is distinctive of living systems.
Let's look at these two points more closely.
What Is Emergence?
The idea of emergence is, roughly, the claim that at least some wholes are more than the sum of their parts---that there is something added when certain wholes come into being that was not already there, in some latent form, in the parts.
If true, this would mean that we cannot learn everything there is to know about such wholes by examining the parts alone---there is new knowledge to be gained by studying the whole as a whole.
The denial of emergence is called "reductionism"---the idea that wholes are "nothing but" the sum of their parts. If the world is as the reductionist says it is, then once we've learned everything there is to know about the pieces a whole is made up of, there is nothing more to know.
There is undoubtedly something satisfying about the reductive method. Revealing the "mechanisms" underlying a phenomenon intuitively feels like the best sort of explanation. Any explanation that falls short of that ideal seems second-class.
On the other hand, according to the standard big bang model in cosmology, the universe once contained nothing but "quark soup." Now, it contains stars and planets and bacteria and ferns and dogs and cats---and us.
It's a bit rich to say that somehow my dog Marty, barking at a passing car at this moment, was already there in the first femtosecond after the big bang, hidden somewhere deep inside the Schrödinger equation, needing only 13-odd billion years to become manifest.
It's even richer to say that I myself am an "epiphenomenon" of the quark-level of reality---that only the quarks (or whatever you take the bottom level of the cosmic onion to be) are "really" real, while I and everything I see around me are nothing but a "mental projection" or some sort of "illusion."
Whenever reductionists bring up illusions, I always want to ask: "Whose illusion?" How can I be subject to illusions if I don't exist? And doesn't the very concept of an illusion imply that some factual claims are right and others wrong? Where do these claims come from---where do right and wrong come from, where does science itself come from---if only the quarks "really" exist?
In short, it is self-refuting and incoherent to deny that the reality we see around us is really there. But if the macroscopic world is real, then a creative process must be at work in the world that brings new entities with new properties into existence as time goes by.
The real question is not whether emergence is real, it is how best to understand it. The most sensible suggestion ties the idea of emergence to our understanding of fundamental physical principles.
The basic idea was explained 40 years ago by the highly distinguished, Nobel Prize--winning, condensed-matter physicist, Philip W. Anderson, who wrote that:
Anderson then goes on to give an example of how the modern theory of condensed matter (liquids and solids) enables us to understand a relatively simple system that is macroscopic in relation to elementary particles---an ammonia molecule. He explains in simple terms how such concepts as symmetry breaking and phase transitions allow us to understand the system's macroscopic properties, which are not otherwise deducible from the properties of its constituent particles.
Anderson concludes with these words:
In this case, we can see how the whole becomes not only more than but very different from the sum of its parts.(2)
He ends his brief paper by observing:
Surely there are more levels of organization between human ethology and DNA than there are between DNA and quantum electrodynamics, and each level can require a whole new conceptual structure.(3)
Though Anderson's suggestion has been recently updated by another Nobel Prize-winning physicist, Robert B. Laughlin, and others, it is still not as widely known as it ought to be.(4) But it is slowly beginning to gain traction among both scientists and philosophers.
Philosopher of science Margaret Morrison, in particular, has stressed the fact that this physics-based approach provides at least a partial rebuttal to the familiar charge that emergence is little more than magic---"pixie dust," as one critic has called it. Here is how she puts it:
Not only does [emergent physics] call into question the very idea that an understanding of the fundamental laws that govern the microphysical world can explain macro level phenomena, it also casts doubt on the claim that when the former strategy fails our understanding of physical behavior must be restricted to local models. The relation to higher level theoretical principles like symmetry breaking and localization shows that certain kinds of stable behavior, though not derivable from fundamental theory, can nevertheless be explained in a systematic way, one that doesn't rely on the contingencies of particular situations.(5)
But, of course, even if emergent physics makes sense, as a general proposition, we are still a long way from understanding how adaptivity or intelligent agency can emerge as a property of cells, in particular.
Next, let's look at a few of the difficulties involved in making sense of life within a general framework of emergence.
Life as Functional Stability
For everything that exists, we can ask the question: How does it manage to go on existing as the kind of system that it is? In physics parlance, to ask this question is to raise the question of the system's "stability."
For example, it is in some respects still an open question why there is solid matter at all. But the basic answer seems to be---according to modern quantum field theory (QFT)---that in a piece of crystalline matter of a given kind there is an "effective field" (i.e., emergent field) that binds the lattice into a coherent whole through the exchange of force-carrier particles known as "phonons" (collective vibrational modes of the lattice).(6)
Similar effective fields are present in all the various forms of matter, though the particulars of the exchange particles will be different in each case. Nevertheless, all such instances will have a number of factors in common, as well. For example, they are all explained at a more fundamental level by the Pauli exclusion principle.
Note that physicists do not say: "The pieces of matter that just happen to have stuck together and survived in the past are the ones we see today."
Now, it is a striking fact that cells and other forms of living matter are very "stable" in the sense that they continue to persist as the kind of system that they are for a length of time that is very long in comparison with the thermodynamic relaxation time of their constituent parts.
We know from simple observation that this "stability" of living systems is due to two factors: the intricate coordination of thousands of chemical reactions in space and time; and the ability of cells to find new regimes of successful functioning in response to perturbation, whether from within or without.
We might speak of this sort of stability as "dynamical stability," and some authors do so. Certainly, this terminology captures an important aspect of the difference between cells and crystals.
But "dynamical stability" still does not go to the heart of the matter. That is because a number of nonliving systems are also in dynamical equilibrium---and so in that sense are "stable"---even though they are away from thermodynamic equilibrium and are in constant flux internally.
Among the best known of these cases are such natural phenomena as candle flames, hurricanes, and the Great Spot of Jupiter. Scientists have also invented a number of artificial systems that illustrate the same principles, such as Bénard cells and the Belousov-Zhabotinsky reaction.(7)
In all of these cases, the stability of the system is "dynamic" in the sense that its constituents are in constant motion, even while the overall system persists as the kind of system that it is. And yet, in each of these cases, we can explain the stability by reference to free-energy minimization under the constraint of a particular combination of energy flows and boundary conditions.
That is by no means the case when it comes to living systems.(8) Therefore, the concept of "dynamical stability" is too general to help us understand what distinguishes living from nonliving systems. We need a more precise notion.
I have suggested that we call the kind of stability that is typical of living systems, functional stability.(9) This terminology makes clear the most distinctive aspect of living beings: What allows them to go on existing as the kind of system that they are is the functional, or teleological, coordination of all the chemical reactions occurring inside them.
Note the difference between this concept of functional stability and the functional organization of a manmade machine. In the former case, the stability arises from within, presumably under some sort of global constraint arising from living matter itself. The stability in question is robust, flexible, adaptive, and---in a word---intelligent. The stability and the inherent intelligence---the teleology and the agency---are two sides of the same coin, and both must emerge somehow from the physical properties of the living state of matter.
In the case of a true machine, the functional order has nothing whatever to do with the matter out of which the machine is composed. It is imposed upon the matter entirely from without---by us. The material parts out of which a machine is made are supremely indifferent to the purpose the whole is designed by us to serve. Moreover, the stability of a machine resides in the rigidity---not the flexibility, much less the inherent intelligence---of its parts.
In contrast to what happens inside a machine, everything that goes on within a living being possesses an inherent purpose---namely, maintaining the organism in existence. That is the essential difference between living and nonliving things.
And that, above all, is what requires scientific explanation.
* * *
The pseudo-explanation of natural selection has blinded us to the importance of functional stability for too long. But the phenomenon is there, right in front of our eyes---both in the massive coherence and coordination of the biochemistry of life, and in the amazing adaptivity of living things to perturbation.
All the empirical evidence points to the existence of a fundamental power of intelligent agency underlying life. All we have to do is throw aside our mental blinders and look.
This does not mean that we currently possess the conceptual resources to explain intelligent agency as an emergent property of the living state of matter. It does mean that we need to start trying to develop such resources, if we ever wish to understand life and evolution in a fundamental way.
In future columns in this series, we'll be looking at a number of scientists who are attempting to do just that.
(1) Philip W. Anderson, "More Is Different," Science, 1972, 177: 393--396; p. 393.
(2) Ibid.; p. 395.
(3) Ibid.; p. 396.
(4) Robert B. Laughlin, et al., "The Middle Way," Proceedings of the National Academy of Sciences, USA, 2000, 97: 32--37.
(5) Margaret Morrison, "Emergence, Reduction, and Theoretical Principles: Rethinking Fundamentality," Philosophy of Science, 2006, 73: 876--887; p. 882.
(6) Howard M. Georgi, "Effective Quantum Field Theories," in Paul Davies, ed., The New Physics (Cambridge UP, 1989), pp. 446--457.
(7) For a readable introduction to nonequilibrium thermodynamics, see Eric D. Schneider and Dorion Sagan, Into the Cool: Energy Flow, Thermodynamics, and Life (University of Chicago Press, 2005). For a more rigorous treatment, see Dilip Kondepudi and Ilya Prigogine, Modern Thermodynamics: From Heat Engines to Dissipative Structures (John Wiley & Sons, 1998).
(8) I merely assert the point here as more or less self-evident, but for a closely reasoned argument, see Ernest Nagel, "Teleology Revisited," Philosophy of Science, 1977, 74: 261--301 (reprinted in Ernest Nagel, Teleology Revisited and Other Essays in the Philosophy and History of Science [Columbia UP, 1979], pp. 275--316). For two other incisive discussions importantly related to this point, see Howard H. Pattee, "The Physics of Symbols: Bridging the Epistemic Cut," BioSystems, 2001, 60: 5--21; and David L. Abel, The First Gene (LongView Press, 2011).
(9) See my Ph.D. dissertation, Teleological Realism in Biology (University of Notre Dame, 2011); pp. 181--188.
Etymology and discovery
The word quantum comes from the Latin quantus, meaning "how great". "Quanta", short for "quanta of electricity" (electrons), was used in a 1902 article on the photoelectric effect by Philipp Lenard, who credited Hermann von Helmholtz for using the word in the area of electricity. However, the word quantum in general was well known before 1900.[2] It was often used by physicians, such as in the term quantum satis. Both Helmholtz and Julius von Mayer were physicians as well as physicists. Helmholtz used quantum with reference to heat in his article[3] on Mayer's work, and the word quantum can be found in the formulation of the first law of thermodynamics by Mayer in his letter[4] dated July 24, 1841.
Beyond electromagnetic radiation
See also
3. ^ E. Helmholtz, Robert Mayer's Priorität (in German)
4. ^ Herrmann, Armin (1991). "Heimatseite von Robert J. Mayer" (in German). Weltreich der Physik, GNT-Verlag. Archived from the original on 1998-02-09.CS1 maint: BOT: original-url status unknown (link)
5. ^ Planck, M. (1901). "Ueber die Elementarquanta der Materie und der Elektricität". Annalen der Physik (in German). 309 (3): 564–566. Bibcode:1901AnP...309..564P. doi:10.1002/andp.19013090311.
6. ^ Planck, Max (1883). "Ueber das thermodynamische Gleichgewicht von Gasgemengen". Annalen der Physik (in German). 255 (6): 358. Bibcode:1883AnP...255..358P. doi:10.1002/andp.18832550612.
10. ^ Klein, Martin J. (1961). "Max Planck and the beginnings of the quantum theory". Archive for History of Exact Sciences. 1 (5): 459. doi:10.1007/BF00327765.
Further reading
• J. Mehra and H. Rechenberg, The Historical Development of Quantum Theory, Vol.1, Part 1, Springer-Verlag New York Inc., New York 1982.
Deepak Chopra
Deepak Chopra (Hindi: [d̪iːpək tʃoːpraː]; born October 22, 1946) is an Indian-born American author, public speaker, alternative medicine advocate, and a prominent figure in the New Age movement. Through his books and videos, he has become one of the best-known and wealthiest figures in alternative medicine.

Chopra studied medicine in India before emigrating to the United States in 1970, where he completed residencies in internal medicine and endocrinology. As a licensed physician, he became chief of staff at the New England Memorial Hospital (NEMH) in 1980. He met Maharishi Mahesh Yogi in 1985 and became involved with the Transcendental Meditation movement (TM). He resigned his position at NEMH shortly thereafter to establish the Maharishi Ayurveda Health Center. Chopra gained a following in 1993 after he was interviewed on The Oprah Winfrey Show about his books. He then left the TM movement to become the executive director of Sharp HealthCare's Center for Mind-Body Medicine, and in 1996 he co-founded the Chopra Center for Wellbeing.

Chopra believes that a person may attain "perfect health", a condition "that is free from disease, that never feels pain", and "that cannot age or die". Seeing the human body as being undergirded by a "quantum mechanical body" composed not of matter but of energy and information, he believes that "human aging is fluid and changeable; it can speed up, slow down, stop for a time, and even reverse itself," as determined by one's state of mind. He claims that his practices can also treat chronic disease.

The ideas Chopra promotes have been regularly criticized by medical and scientific professionals as pseudoscience. This criticism has been described as ranging "from dismissive [to] damning". Philosopher Robert Carroll states Chopra attempts to integrate Ayurveda with quantum mechanics to justify his teachings. Chopra argues that what he calls "quantum healing" cures any manner of ailments, including cancer, through effects that he claims are literally based on the same principles as quantum mechanics. This has led physicists to object to his use of the term quantum in reference to medical conditions and the human body. Evolutionary biologist Richard Dawkins has said that Chopra uses "quantum jargon as plausible-sounding hocus pocus". Chopra's treatments generally elicit nothing but a placebo response, and have drawn criticism that the unwarranted claims made for them may raise "false hope" and lure sick people away from legitimate medical treatments.
Many-worlds interpretation
The original relative state formulation is due to Hugh Everett in 1957. Later, this formulation was popularized and renamed many-worlds by Bryce Seligman DeWitt in the 1960s and 1970s. The decoherence approaches to interpreting quantum theory have been further explored and developed, becoming quite popular. MWI is one of many multiverse hypotheses in physics and philosophy. It is currently considered a mainstream interpretation along with the other decoherence interpretations, collapse theories (including the historical Copenhagen interpretation), and hidden variable theories such as Bohmian mechanics.
In many-worlds, the subjective appearance of wavefunction collapse is explained by the mechanism of quantum decoherence, and this is supposed to resolve all of the correlation paradoxes of quantum theory, such as the EPR paradox and Schrödinger's cat, since every possible outcome of every event defines or exists in its own "history" or "world".
Like all elementary particles, photons are currently best explained by quantum mechanics and exhibit wave–particle duality, exhibiting properties of both waves and particles. For example, a single photon may be refracted by a lens and exhibit wave interference with itself, and it can behave as a particle with definite and finite measurable position or momentum, though not both at the same time as per Heisenberg's uncertainty principle. The photon's wave and quantum qualities are two observable aspects of a single phenomenon—they cannot be described by any mechanical model; a representation of this dual property of light that assumes certain points on the wavefront to be the seat of the energy is not possible. The quanta in a light wave are not spatially localized.
The modern concept of the photon was developed gradually by Albert Einstein in the early 20th century to explain experimental observations that did not fit the classical wave model of light. The benefit of the photon model is that it accounts for the frequency dependence of light's energy, and explains the ability of matter and electromagnetic radiation to be in thermal equilibrium. The photon model accounts for anomalous observations, including the properties of black-body radiation, that others (notably Max Planck) had tried to explain using semiclassical models. In that model, light is described by Maxwell's equations, but material objects emit and absorb light in quantized amounts (i.e., they change energy only by certain particular discrete amounts). Although these semiclassical models contributed to the development of quantum mechanics, many further experiments beginning with the phenomenon of Compton scattering of single photons by electrons, validated Einstein's hypothesis that light itself is quantized. In December 1926, American physical chemist Gilbert N. Lewis coined the widely-adopted name "photon" for these particles in a letter to Nature. After Arthur H. Compton won the Nobel Prize in 1927 for his scattering studies, most scientists accepted that light quanta have an independent existence, and the term "photon" was accepted.
Planck constant
The Planck constant (denoted h, also called Planck's constant) is a physical constant that is the quantum of electromagnetic action, which relates the energy carried by a photon to its frequency. A photon's energy is equal to its frequency multiplied by the Planck constant. The Planck constant is of fundamental importance in quantum mechanics, and in metrology it is the basis for the definition of the kilogram.
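As an arithmetic illustration of the relation E = hf (a sketch of my own, not part of the encyclopedia text; the 532 nm wavelength is an arbitrary example), the energy of a single photon of green light can be computed directly:

```python
h = 6.62607015e-34      # Planck constant, J*s (exact by definition since 2019)
c = 299792458.0         # speed of light, m/s (exact)

wavelength = 532e-9     # green laser light, m (illustrative value)
frequency = c / wavelength
energy_joules = h * frequency
energy_ev = energy_joules / 1.602176634e-19  # elementary charge, C (exact)

print(f"f = {frequency:.3e} Hz, E = {energy_joules:.3e} J = {energy_ev:.2f} eV")
# roughly 5.6e14 Hz and about 2.3 eV
```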
At the end of the 19th century, physicists were unable to explain why the observed spectrum of black body radiation, which by then had been accurately measured, diverged significantly at higher frequencies from that predicted by existing theories. In 1900, Max Planck empirically derived a formula for the observed spectrum. He assumed that a hypothetical electrically charged oscillator in a cavity that contained black body radiation could only change its energy in a minimal increment, E, that was proportional to the frequency of its associated electromagnetic wave. He was able to calculate the proportionality constant, h, from the experimental measurements, and that constant is named in his honor. In 1905, the value E was associated by Albert Einstein with a "quantum" or minimal element of the energy of the electromagnetic wave itself. The light quantum behaved in some respects as an electrically neutral particle, as opposed to an electromagnetic wave. It was eventually called a photon. Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta".
Since energy and mass are equivalent, the Planck constant also relates mass to frequency. By 2017, the Planck constant had been measured with sufficient accuracy in terms of the SI base units, that it was central to replacing the metal cylinder, called the International Prototype of the Kilogram (IPK), that had defined the kilogram since 1889. The new definition was unanimously approved at the General Conference on Weights and Measures (CGPM) on 16 November 2018 as part of the 2019 redefinition of SI base units. For this new definition of the kilogram, the Planck constant, as defined by the ISO standard, was set to 6.62607015×10−34 J⋅s exactly. The kilogram was the last SI base unit to be re-defined by a fundamental physical property to replace a physical artefact.
Quantum Leap
Quantum Leap is an American science-fiction television series that originally aired on NBC for five seasons, from March 1989 through May 1993. Created by Donald P. Bellisario, it starred Scott Bakula as Dr. Sam Beckett, a physicist who leaps through spacetime during an experiment in time travel, by temporarily taking the place of other people to correct historical mistakes. Dean Stockwell co-stars as Admiral Al Calavicci, Sam's womanizing, cigar-smoking companion and best friend, who appears to him as a hologram.
The series features a mix of humor, drama, romance, social commentary, and science fiction. The show was ranked #19 on TV Guide's "Top Cult Shows Ever".
Quantum computing
Quantum computing is the use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. A quantum computer is used to perform such computation, which can be implemented theoretically or physically.

The field of quantum computing is actually a sub-field of quantum information science, which includes quantum cryptography and quantum communication. Quantum computing was started in the early 1980s when Richard Feynman and Yuri Manin expressed the idea that a quantum computer had the potential to simulate things that a classical computer could not. In 1994, Peter Shor shocked the world with an algorithm that had the potential to decrypt all secured communications.

There are two main approaches to physically implementing a quantum computer currently, analog and digital. Analog approaches are further divided into quantum simulation, quantum annealing, and adiabatic quantum computation. Digital quantum computers use quantum logic gates to do computation. Both approaches use quantum bits or qubits.

Qubits are fundamental to quantum computing and are somewhat analogous to bits in a classical computer. Qubits can be in a 1 or 0 quantum state. But they can also be in a superposition of the 1 and 0 states. However, when qubits are measured they always give a 0 or a 1 based on the quantum state they were in.
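A minimal sketch of my own (assuming NumPy; not from the article) of what "a superposition of the 1 and 0 states" means numerically: a single-qubit state vector whose squared amplitudes give the probabilities of the measurement outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Basis states |0> and |1>
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

# An equal superposition (the state produced by a Hadamard gate acting on |0>)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0                      # = (|0> + |1>)/sqrt(2)

# Measurement in the computational basis: outcomes 0/1 with probabilities |amplitude|^2
probs = np.abs(psi) ** 2
samples = rng.choice([0, 1], size=1000, p=probs)
print("P(0), P(1):", probs)                               # [0.5, 0.5]
print("empirical frequencies:", np.bincount(samples) / len(samples))
```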
Today's physical quantum computers are very noisy and quantum error correction is a burgeoning field of research. Quantum supremacy is hopefully the next milestone that quantum computing will achieve soon. While there is much hope, money, and research in the field of quantum computing, as of March 2019 there have been no commercially useful algorithms published for today's noisy quantum computers.
Quantum entanglement
Measurements of physical properties such as position, momentum, spin, and polarization, performed on entangled particles are found to be correlated. For example, if a pair of particles is generated in such a way that their total spin is known to be zero, and one particle is found to have clockwise spin on a certain axis, the spin of the other particle, measured on the same axis, will be found to be counterclockwise, as is to be expected due to their entanglement. However, this behavior gives rise to seemingly paradoxical effects: any measurement of a property of a particle performs an irreversible collapse on that particle and will change the original quantum state. In the case of entangled particles, such a measurement will be on the entangled system as a whole.
Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky, and Nathan Rosen, and several papers by Erwin Schrödinger shortly thereafter, describing what came to be known as the EPR paradox. Einstein and others considered such behavior to be impossible, as it violated the local realism view of causality (Einstein referring to it as "spooky action at a distance") and argued that the accepted formulation of quantum mechanics must therefore be incomplete.
Later, however, the counterintuitive predictions of quantum mechanics were verified experimentally in tests where the polarization or spin of entangled particles were measured at separate locations, statistically violating Bell's inequality. In earlier tests it couldn't be absolutely ruled out that the test result at one point could have been subtly transmitted to the remote point, affecting the outcome at the second location. However, so-called "loophole-free" Bell tests have been performed in which the locations were separated such that communications at the speed of light would have taken longer—in one case 10,000 times longer—than the interval between the measurements.

According to some interpretations of quantum mechanics, the effect of one measurement occurs instantly. Other interpretations which don't recognize wavefunction collapse dispute that there is any "effect" at all. However, all interpretations agree that entanglement produces correlation between the measurements and that the mutual information between the entangled particles can be exploited, but that any transmission of information at faster-than-light speeds is impossible.

Quantum entanglement has been demonstrated experimentally with photons, neutrinos, electrons, molecules as large as buckyballs, and even small diamonds. The utilization of entanglement in communication and computation is a very active area of research.
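To illustrate the kind of correlation described above, here is a small sketch of my own (assuming NumPy; not from the article) that computes the spin correlations of a two-qubit singlet state and evaluates the CHSH combination, which exceeds the classical bound of 2.

```python
import numpy as np

# Pauli matrices and the singlet state (|01> - |10>)/sqrt(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    """Spin measurement along an axis at angle theta in the x-z plane (eigenvalues +-1)."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def correlation(a, b):
    """E(a, b) = <singlet| A (x) B |singlet>; quantum mechanics predicts -cos(a - b)."""
    op = np.kron(spin(a), spin(b))
    return np.real(singlet.conj() @ op @ singlet)

# CHSH setting angles that maximize the violation
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = correlation(a, b) - correlation(a, b2) + correlation(a2, b) + correlation(a2, b2)
print("CHSH S =", abs(S), "(classical bound 2, quantum maximum 2*sqrt(2) ~ 2.83)")
```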
Quantum field theory
Quantum gravity
Quantum gravity (QG) is a field of theoretical physics that seeks to describe gravity according to the principles of quantum mechanics, and where quantum effects cannot be ignored, such as near compact astrophysical objects where the effects of gravity are strong.
The current understanding of gravity is based on Albert Einstein's general theory of relativity, which is formulated within the framework of classical physics. On the other hand, the other three fundamental forces of physics are described within the framework of quantum mechanics and quantum field theory, radically different formalisms for describing physical phenomena. It is sometimes argued that a quantum mechanical description of gravity is necessary on the grounds that one cannot consistently couple a classical system to a quantum one.

While a quantum theory of gravity may be needed to reconcile general relativity with the principles of quantum mechanics, difficulties arise when applying the usual prescriptions of quantum field theory to the force of gravity via graviton bosons. The problem is that the theory one gets in this way is not renormalizable (it predicts infinite values for some observable properties such as the mass of particles) and therefore cannot be used to make meaningful physical predictions. As a result, theorists have taken up more radical approaches to the problem of quantum gravity, the most popular approaches being string theory and loop quantum gravity. Although some quantum gravity theories, such as string theory, try to unify gravity with the other fundamental forces, others, such as loop quantum gravity, make no such attempt; instead, they make an effort to quantize the gravitational field while it is kept separate from the other forces.
Strictly speaking, the aim of quantum gravity is only to describe the quantum behavior of the gravitational field and should not be confused with the objective of unifying all fundamental interactions into a single mathematical framework. A quantum field theory of gravity that is unified with a grand unified theory is sometimes referred to as a theory of everything (TOE). While any substantial improvement into the present understanding of gravity would aid further work towards unification, the study of quantum gravity is a field in its own right with various branches having different approaches to unification.
One of the difficulties of formulating a quantum gravity theory is that quantum gravitational effects only appear at length scales near the Planck scale, around 10−35 meter, a scale far smaller, and equivalently far larger in energy, than those currently accessible by high-energy particle accelerators. Therefore, physicists lack experimental data which could distinguish between the competing theories which have been proposed, and thus thought (gedanken) experiments are suggested as a testing tool for these theories.
Quantum mechanics
Quantum of Solace
Quantum of Solace is a 2008 spy film, the twenty-second in the James Bond series produced by Eon Productions, directed by Marc Forster and written by Paul Haggis, Neal Purvis and Robert Wade. It is the second film to star Daniel Craig as the fictional MI6 agent James Bond. The film also stars Olga Kurylenko, Mathieu Amalric, Gemma Arterton, Jeffrey Wright, and Judi Dench. In the film, Bond seeks revenge for the death of his lover, Vesper Lynd, and is assisted by Camille Montes, who is plotting revenge for the murder of her own family. The trail eventually leads them to wealthy businessman Dominic Greene, a member of the Quantum organisation, who intends to stage a coup d'état in Bolivia to seize control of their water supply.
Producer Michael G. Wilson developed the film's plot while the previous film in the series, Casino Royale, was being shot. Purvis, Wade, and Haggis contributed to the script. Craig and Forster had to write some sections themselves due to the Writers' Strike, though they were not given the screenwriter credit in the final cut. The title was chosen from a 1959 short story in Ian Fleming's For Your Eyes Only, though the film does not contain any elements of that story. Location filming took place in Mexico, Panama, Chile, Italy, Austria and Wales, while interior sets were built and filmed at Pinewood Studios. Forster aimed to make a modern film that also featured classic cinema motifs: a vintage Douglas DC-3 was used for a flight sequence, and Dennis Gassner's set designs are reminiscent of Ken Adam's work on several early Bond films. Taking a course away from the usual Bond villains, Forster rejected any grotesque appearance for the character Dominic Greene to emphasise the hidden and secret nature of the film's contemporary villains.
The film was also marked by its frequent depictions of violence, with a 2012 study by the University of Otago in New Zealand finding it to be the most violent film in the franchise. Whereas Dr. No featured 109 "trivial or severely violent" acts, Quantum of Solace had a count of 250—the most depictions of violence in any Bond film—even more prominent since it was also the shortest film in the franchise. Quantum of Solace premiered at the Odeon Leicester Square on 29 October 2008, gathering mixed reviews, which mainly praised Craig's gritty performance and the film's action sequences, but felt that the film was less impressive than its predecessor Casino Royale. As of September 2016, it is the fourth-highest-grossing James Bond film, without adjusting for inflation, earning $586 million worldwide, and becoming the seventh highest-grossing film of 2008.
Richard Feynman
Schrödinger's cat
Schrödinger equation
The Schrödinger equation is a linear partial differential equation that describes the wave function or state function of a quantum-mechanical system. It is a key result in quantum mechanics, and its discovery was a significant landmark in the development of the subject. The equation is named after Erwin Schrödinger, who derived the equation in 1925, and published it in 1926, forming the basis for the work that resulted in his Nobel Prize in Physics in 1933.
In classical mechanics, Newton's second law (F = ma) is used to make a mathematical prediction as to what path a given physical system will take over time following a set of known initial conditions. Solving this equation gives the position and the momentum of the physical system as a function of the external force on the system. Those two parameters are sufficient to describe its state at each time instant. In quantum mechanics, the analogue of Newton's law is Schrödinger's equation.
The concept of a wave function is a fundamental postulate of quantum mechanics; the wave function defines the state of the system at each spatial position, and time. Using these postulates, Schrödinger's equation can be derived from the fact that the time-evolution operator must be unitary, and must therefore be generated by the exponential of a self-adjoint operator, which is the quantum Hamiltonian. This derivation is explained below.
In the Copenhagen interpretation of quantum mechanics, the wave function is the most complete description that can be given of a physical system. Solutions to Schrödinger's equation describe not only molecular, atomic, and subatomic systems, but also macroscopic systems, possibly even the whole universe. Schrödinger's equation is central to all applications of quantum mechanics including quantum field theory which combines special relativity with quantum mechanics. Theories of quantum gravity, such as string theory, also do not modify Schrödinger's equation.[citation needed]
The Schrödinger equation is not the only way to study quantum mechanical systems and make predictions. The other formulations of quantum mechanics include matrix mechanics, introduced by Werner Heisenberg, and the path integral formulation, developed chiefly by Richard Feynman. Paul Dirac incorporated matrix mechanics and the Schrödinger equation into a single formulation.
String theory
Uncertainty principle
Introduced first in 1927 by the German physicist Werner Heisenberg, it states that the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa. The formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928: σx σp ≥ ħ/2.
Historically, the uncertainty principle has been confused with a related effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the systems, that is, without changing something in a system. Heisenberg utilized such an observer effect at the quantum level as a physical "explanation" of quantum uncertainty. It has since become clearer, however, that the uncertainty principle is inherent in the properties of all wave-like systems, and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology. It must be emphasized that measurement does not mean only a process in which a physicist-observer takes part, but rather any interaction between classical and quantum objects regardless of any observer.

Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number–phase uncertainty relations in superconducting or quantum optics systems. Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational wave interferometers.
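As a numerical check of the Kennard bound (a sketch of my own, assuming NumPy and units with ħ = 1; the Gaussian packet is an illustrative choice), a minimum-uncertainty Gaussian wave packet gives σx·σp ≈ ħ/2:

```python
import numpy as np

hbar = 1.0
N, L = 4096, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 1.3                                   # width of the packet (illustrative)
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# Position uncertainty
rho_x = np.abs(psi) ** 2
mean_x = np.sum(x * rho_x) * dx
sigma_x = np.sqrt(np.sum((x - mean_x) ** 2 * rho_x) * dx)

# Momentum-space wave function via FFT; p = hbar * k
phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi)
p = hbar * 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
dp = p[1] - p[0]
rho_p = np.abs(phi) ** 2
mean_p = np.sum(p * rho_p) * dp
sigma_p = np.sqrt(np.sum((p - mean_p) ** 2 * rho_p) * dp)

print("sigma_x * sigma_p =", sigma_x * sigma_p, " (bound hbar/2 =", hbar / 2, ")")
```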
Wave interference
In physics, interference is a phenomenon in which two waves superpose to form a resultant wave of greater, lower, or the same amplitude. Constructive and destructive interference result from the interaction of waves that are correlated or coherent with each other, either because they come from the same source or because they have the same or nearly the same frequency. Interference effects can be observed with all types of waves, for example, light, radio, acoustic, surface water waves, gravity waves, or matter waves. The resulting images or graphs are called interferograms.
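A minimal sketch of my own (assuming NumPy; frequencies and phases are arbitrary illustrative values) showing constructive and destructive interference of two equal-amplitude sinusoids:

```python
import numpy as np

t = np.linspace(0, 1, 1000)
f = 5.0                                 # common frequency, Hz (illustrative)
wave1 = np.sin(2 * np.pi * f * t)

for phase, label in [(0.0, "constructive"), (np.pi, "destructive"), (np.pi / 2, "intermediate")]:
    wave2 = np.sin(2 * np.pi * f * t + phase)
    total = wave1 + wave2
    print(f"{label:13s} phase difference: peak amplitude = {np.max(np.abs(total)):.2f}")
# constructive -> ~2.0, destructive -> ~0.0, intermediate -> ~1.41
```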
College Math Teaching
August 2, 2012
MAA Mathfest Madison Day 1, 2 August 2012
I am sitting in the main ballroom waiting for the large public talks to start. I should be busy most of the day; it looks as if there will be something interesting going on all day long.
I like this conference not only for the variety but also for the timing; it gives me some momentum going into the academic year.
I regret not taking my camera; downtown Madison is scenic and we are close to the water. The conference venue is just a short walk away from the hotel; I see some possibilities for tomorrow’s run. Today: just weights and maybe a bit of treadmill in the afternoon.
The Talks
The opening lecture was the MAA-AMS joint talk by David Mumford of Brown University. This guy’s credentials are beyond stellar: Fields Medal, member of the National Academy of Science, etc.
His talk was about applied and pure mathematics and how there really shouldn’t be that much of a separation between the two, though there is. For one thing: pure mathematics prestige is measured by the depth of the result; applied mathematical prestige is mostly measured by the utility of the produced model. Pure mathematicians tend to see applied mathematics as shallow and simple and they resent the fact that applied math…gets a lot more funding.
He talked a bit about education and how the educational establishment ought to solicit input from pure areas; he also talked about computer science education (in secondary schools) and mentioned that there should be more emphasis on coding (I agree).
He mentioned that he tended to learn better when he had a concrete example to start from (I am the same way).
What amused me: his FIRST example was a PDE (partial differential equations) model of neutron flux through nuclear reactors used for submarines; note that these reactors were light water, thermal reactors: the fission reaction became self-sustaining via the absorption of neutrons whose energy levels had been lowered by a moderator (the neutrons lose energy when they collide with atoms that aren't too much heavier).
Of course, in nuclear power school, we studied the PDEs of the situation after the design had been developed; these people had to come up with an optimal geometry to begin with.
Note that they didn’t have modern digital computers; they used analogue computers modeled after simple voltage drops across resistors!
About the PDE: you had two neutron populations: “fast” neutrons (ones at high energy levels) and “slow” neutrons (ones at lower energy levels). The fast neutrons are slowed down to become thermal neutrons. But thermal neutrons in turn cause more fissions thereby increasing the fast neutron flux; hence you have two linked PDEs. Of course there is leakage, absorption by control rods, etc., and the classical PDEs can’t be solved in closed form.
Another thing I didn’t know: Clairaut (from the “symmetry of mixed partial derivatives” fame) actually came up with the idea of the Fourier series before Fourier did; he did this in an applied setting.
Next talk: Amie Wilkinson of Northwestern (soon to be University of Chicago) gave a talk about dynamical systems. She is one of those who has publications in the finest journals that mathematics has to offer (stellar).
The whole talk was pretty good. Highlights: she mentioned Henri Poincaré and how he worked on the 3-body problem (one massive body, one medium body, and one tiny body that didn’t exert gravitational force on the other bodies). This creates a system whose dynamics live in 3-space (the state space of the system, of course, has much higher dimension). Now consider a closed 2-dimensional manifold in that space and a point on that manifold. Now study the orbit of that point under the dynamical system action. Eventually, that orbit intersects the 2-dimensional manifold again. The action of moving from the first point to the first intersection point actually describes a motion ON THE TWO MANIFOLD, and if we look at ALL intersections, we get the orbit of that point, considered as an action on the two-dimensional manifold.
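This "first return" (Poincaré) map is easy to play with numerically. Here is a minimal sketch (my own illustration, not from the talk, using the Lorenz system rather than the 3-body problem) that records where orbits cross the plane z = 27; scipy's event detection finds the crossings.

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Event: orbit crosses the plane z = 27 (upward crossings only).
def cross_plane(t, u):
    return u[2] - 27.0
cross_plane.direction = 1

sol = solve_ivp(lorenz, (0, 200), [1.0, 1.0, 1.0],
                events=cross_plane, dense_output=True, rtol=1e-8)

# Each recorded event is one application of the return map; keep (x, y) at the crossings.
section = sol.y_events[0][:, :2]
print(section[:5])  # first few points of the Poincaré section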
So, in some sense, this two manifold has an “inherited” action on it. Now if we look at, say, a square on that 2-dimensional manifold, it was proved that this square comes back in a “folded” fashion: this is the famed “Smale Horseshoe map”.
Other things: she mentioned that there are dynamical systems that are stable with respect to perturbations that have unstable orbits (with respect to initial conditions) and that these instabilities cannot be perturbed away; they are inherent to the system. There are other dynamical systems (with less stability) that have this property as well.
There is, of course, much more. I’ll link to the lecture materials when I find them.
Last morning Talk
Bernd Sturmfels on Tropical Mathematics
Ok, quickly, if you have a semi-ring (no additive inverses) with the following operations:
x \oplus y = min (x,y) and x \otimes y = x + y (check that the operations distribute), what good would it be? Why would you care about such a beast?
Answer: many reasons. This sort of object lends itself well to things like matrix operations and is used for things such as “least path” problems (dynamic programming) and “tree metrics” in biology.
Think of it this way: if one is considering, say, an “order n” technique in numerical analysis, then multiplying two error terms adds their orders, while adding two error terms leaves you with (essentially) the larger of the two; the tropical operations \otimes = + and \oplus = min/max mimic exactly this bookkeeping.
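Here is a small sketch of why the min-plus semiring shows up in “least path” problems: tropical matrix “multiplication” of a weighted adjacency matrix with itself computes shortest path lengths (this is just the classic dynamic-programming recurrence written in tropical notation; the graph below is made up for illustration).

import numpy as np

def tropical_matmul(A, B):
    """Min-plus product: (A (x) B)[i, j] = min over k of (A[i, k] + B[k, j])."""
    n, m = A.shape[0], B.shape[1]
    C = np.full((n, m), np.inf)
    for i in range(n):
        for j in range(m):
            C[i, j] = np.min(A[i, :] + B[:, j])
    return C

INF = np.inf
# Edge weights of a small directed graph; D[i, j] = length of edge i -> j.
D = np.array([[0.0, 2.0, INF, 7.0],
              [INF, 0.0, 3.0, INF],
              [INF, INF, 0.0, 1.0],
              [INF, INF, INF, 0.0]])

# Tropical powers: after k - 1 products, entry [i, j] is the shortest path using at most k edges.
P = D.copy()
for _ in range(len(D) - 2):
    P = tropical_matmul(P, D)
print(P)  # P[0, 3] should be 6.0 (path 0 -> 1 -> 2 -> 3)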
The PDF of the slides in today’s lecture can be found here.
May 26, 2012
Eigenvalues, Eigenvectors, Eigenfunctions and all that….
The purpose of this note is to give a bit of direction to the perplexed student.
I am not going to go into all the possible uses of eigenvalues, eigenvectors, eigenfunctions and the like; I will say that these are essential concepts in areas such as partial differential equations, advanced geometry and quantum mechanics:
Quantum mechanics, in particular, is a specific yet very versatile implementation of this scheme. (And quantum field theory is just a particular example of quantum mechanics, not an entirely new way of thinking.) The states are “wave functions,” and the collection of every possible wave function for some given system is “Hilbert space.” The nice thing about Hilbert space is that it’s a very restrictive set of possibilities (because it’s a vector space, for you experts); once you tell me how big it is (how many dimensions), you’ve specified your Hilbert space completely. This is in stark contrast with classical mechanics, where the space of states can get extraordinarily complicated. And then there is a little machine — “the Hamiltonian” — that tells you how to evolve from one state to another as time passes. Again, there aren’t really that many kinds of Hamiltonians you can have; once you write down a certain list of numbers (the energy eigenvalues, for you pesky experts) you are completely done.
(emphasis mine).
So it is worth understanding the eigenvector/eigenfunction and eigenvalue concept.
First note: “eigen” is German for “self”; one should keep that in mind. That is part of the concept as we will see.
The next note: “eigenfunctions” really are a type of “eigenvector” so if you understand the latter concept at an abstract level, you’ll understand the former one.
The third note: if you are reading this, you are probably already familiar with some famous eigenfunctions! We’ll talk about some examples prior to giving the formal definition. This remark might sound cryptic at first (but hang in there), but remember when you learned \frac{d}{dx} e^{ax} = ae^{ax} ? That is, you learned that the derivative of e^{ax} is a scalar multiple of itself? (emphasis on SELF). So you already know that the function e^{ax} is an eigenfunction of the “operator” \frac{d}{dx} with eigenvalue a because that is the scalar multiple.
The basic concept of eigenvectors (eigenfunctions) and eigenvalues is really no more complicated than that. Let’s do another one from calculus:
the function sin(wx) is an eigenfunction of the operator \frac{d^2}{dx^2} with eigenvalue -w^2 because \frac{d^2}{dx^2} sin(wx) = -w^2sin(wx). That is, the function sin(wx) is a scalar multiple of its second derivative. Can you think of more eigenfunctions for the operator \frac{d^2}{dx^2} ?
Answer: cos(wx) and e^{ax} are two others, if we only allow for non zero eigenvalues (scalar multiples).
So hopefully you are seeing the basic idea: we have a collection of objects called vectors (can be traditional vectors or abstract ones such as differentiable functions) and an operator (linear transformation) that acts on these objects to yield a new object. In our example, the vectors were differentiable functions, and the operators were the derivative operators (the thing that “takes the derivative of” the function). An eigenvector (eigenfunction)-eigenvalue pair for that operator is a vector (function) that is transformed to a scalar multiple of itself by the operator; e. g., the derivative operator takes e^{ax} to ae^{ax} which is a scalar multiple of the original function.
Formal Definition
We will give the abstract, formal definition. Then we will follow it with some examples and hints on how to calculate.
First we need the setting. We start with a set of objects called “vectors” and “scalars”; the usual rules of arithmetic (addition, multiplication, subtraction, division, distributive property) hold for the scalars and there is a type of addition for the vectors and scalars and the vectors “work together” in the intuitive way. Example: in the set of, say, differentiable functions, the scalars will be real numbers and we have rules such as a (f + g) =af + ag , etc. We could also use things like real numbers for scalars, and say, three dimensional vectors such as [a, b, c] More formally, we start with a vector space (sometimes called a linear space) which is defined as a set of vectors and scalars which obey the vector space axioms.
Now, we need a linear transformation, which is sometimes called a linear operator. A linear transformation (or operator) is a function L that obeys the following laws: L(\vec{v} + \vec{w}) = L(\vec{v}) + L(\vec{w} ) and L(a\vec{v}) = aL(\vec{v}) . Note that I am using \vec{v} to denote the vectors and the undecorated variable to denote the scalars. Also note that this linear transformation L might take one vector space to a different vector space.
Common linear transformations (and there are many others!) and their eigenvectors and eigenvalues.
Consider the vector space of two-dimensional vectors with real numbers as scalars. We can create a linear transformation by matrix multiplication:
L([x,y]^T) = \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right]=\left[ \begin{array}{c} ax+ by \\ cx+dy \end{array} \right] (note: [x,y]^T is the transpose of the row vector; we need to use a column vector for the usual rules of matrix multiplication to apply).
It is easy to check that the operation of matrix multiplying a vector on the left by an appropriate matrix yields a linear transformation.
Here is a concrete example: L([x,y]^T) = \left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right]=\left[ \begin{array}{c} x+ 2y \\ 3y \end{array} \right]
So, does this linear transformation HAVE non-zero eigenvectors and eigenvalues? (not every one does).
Let’s see if we can find the eigenvectors and eigenvalues, provided they exist at all.
For [x,y]^T to be an eigenvector for L , remember that L([x,y]^T) = \lambda [x,y]^T for some real number \lambda
So, using the matrix we get: L([x,y]^T) = \left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right]= \lambda \left[ \begin{array}{c} x \\ y \end{array} \right] . So doing some algebra (subtracting the vector on the right hand side from both sides) we obtain \left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right] - \lambda \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]
At this point it is tempting to try to use a distributive law to factor out \left[ \begin{array}{c} x \\ y \end{array} \right] from the left side. But, while the expression makes sense prior to factoring, it wouldn’t AFTER factoring as we’d be subtracting a scalar number from a 2 by 2 matrix! But there is a way out of this: one can then insert the 2 x 2 identity matrix to the left of the second term of the left hand side:
\left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right] - \lambda \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]
Notice that by doing this, we haven’t changed anything except now we can factor out that vector; this would leave:
(\left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right] - \lambda\left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] )\left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]
Which leads to:
(\left[ \begin{array}{cc} 1-\lambda & 2 \\ 0 & 3-\lambda \end{array} \right] ) \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]
Now we use a fact from linear algebra: if [x,y]^T is not the zero vector, we have a non-zero matrix times a non-zero vector yielding the zero vector. This means that the matrix is singular. In linear algebra class, you learn that singular matrices have determinant equal to zero. This means that (1-\lambda)(3-\lambda) = 0 which means that \lambda = 1, \lambda = 3 are the respective eigenvalues. Note: when we do this procedure with any 2 by 2 matrix, we always end up with a quadratic with \lambda as the variable; if this quadratic has real roots then the linear transformation (or matrix) has real eigenvalues. If it doesn’t have real roots, the linear transformation (or matrix) doesn’t have non-zero real eigenvalues.
Now to find the associated eigenvectors: if we start with \lambda = 1 we get
\left[ \begin{array}{cc} 0 & 2 \\ 0 & 2 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right] which has solution \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] . So that is the eigenvector associated with eigenvalue 1.
If we next try \lambda = 3 we get \left[ \begin{array}{cc} -2 & 2 \\ 0 & 0 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right] which has solution \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 1 \\ 1 \end{array} \right] . So that is the eigenvector associated with eigenvalue 3.
In the general “k-dimensional vector space” case, the recipe for finding the eigenvectors and eigenvalues is the same.
1. Find the matrix A for the linear transformation.
2. Form the matrix A - \lambda I which is the same as matrix A except that you have subtracted \lambda from each diagonal entry.
3. Note that det(A - \lambda I) is a polynomial in the variable \lambda ; find its roots \lambda_1, \lambda_2, ..., \lambda_k . These will be the eigenvalues.
4. Start with \lambda = \lambda_1 . Substitute this into the matrix-vector equation (A - \lambda_1 I) \vec{v_1} = \vec{0} and solve for \vec{v_1} . That will be the eigenvector associated with the first eigenvalue. Do this for each eigenvalue, one at a time. Note: you can get up to k “linearly independent” eigenvectors in this manner; that will be all of them.
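In practice one rarely carries the recipe out by hand; a quick sketch with numpy, using the 2 x 2 example from above, looks like this:

import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# numpy returns the eigenvalues and a matrix whose columns are the eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)        # eigenvalues 1 and 3 (order may vary)
print(eigenvectors[:, 0]) # a multiple of [1, 0]
print(eigenvectors[:, 1]) # a multiple of [1, 1] (numpy normalizes to unit length)

# sanity check: A v = lambda v for each eigenvalue/eigenvector pair
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)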
Practical note
Yes, this should work “in theory” but practically speaking, there are many challenges. For one: for equations of degree 5 or higher, it is known that there is no formula that will find the roots for every equation of that degree (Galois theory explains this; a good reason to take an abstract algebra course!). Hence one must use a numerical method of some sort. Also, calculation of the determinant involves many round-off error-inducing calculations; hence sometimes one must use sophisticated numerical techniques to get the eigenvalues (a good reason to take a numerical analysis course!)
Consider a calculus/differential equation related case of eigenvectors (eigenfunctions) and eigenvalues.
Our vectors will be, say, infinitely differentiable functions and our scalars will be real numbers. We will define the operator (linear transformation) D^n = \frac{d^n}{dx^n} , that is, the process that takes the n’th derivative of a function. You learned that the sum of the derivatives is the derivative of the sums and that you can pull out a constant when you differentiate. Hence D^n is a linear operator (transformation); we use the term “operator” when we talk about the vector space of functions, but it is really just a type of linear transformation.
We can also use these operators to form new operators; that is (D^2 + 3D)(y) = D^2(y) + 3D(y) = \frac{d^2y}{dx^2} + 3\frac{dy}{dx} We see that such “linear combinations” of linear operators is a linear operator.
So, what does it mean to find eigenvectors and eigenvalues of such beasts?
Suppose we wish to find the eigenvectors and eigenvalues of (D^2 + 3D) . An eigenvector is a twice differentiable function y (ok, we said “infinitely differentiable”) such that (D^2 + 3D)y = \lambda y or \frac{d^2y}{dx^2} + 3\frac{dy}{dx} = \lambda y which means \frac{d^2y}{dx^2} + 3\frac{dy}{dx} - \lambda y = 0 . You might recognize this from your differential equations class; the only “tweak” is that we don’t know what \lambda is. But if you had a differential equations class, you’d recognize that the solution to this differential equation depends on the roots of the characteristic equation m^2 + 3m - \lambda = 0 which has solutions: m = -\frac{3}{2} \pm \frac{\sqrt{9+4\lambda}}{2} and the solution takes the form e^{m_1 x}, e^{m_2 x} if the roots are real and distinct, e^{ax}sin(bx), e^{ax}cos(bx) if the roots are complex conjugates a \pm bi and e^{mx}, xe^{mx} if there is a real, repeated root. In any event, those functions are the eigenfunctions, and these very much depend on the eigenvalues.
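A quick symbolic check (a sketch using sympy, not part of the original post) confirms that any exponential e^{mx} is an eigenfunction of D^2 + 3D, with eigenvalue m^2 + 3m:

import sympy as sp

x, m = sp.symbols('x m')
y = sp.exp(m * x)

# apply the operator D^2 + 3D to y and divide by y to read off the eigenvalue
eigenvalue = sp.simplify((sp.diff(y, x, 2) + 3 * sp.diff(y, x)) / y)
print(eigenvalue)  # m**2 + 3*m (possibly printed in factored form m*(m + 3))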
Of course, reading this little note won’t make you an expert, but it should get you started on studying.
I’ll close with a link on how these eigenfunctions and eigenvalues are calculated (in the context of solving a partial differential equation).
August 19, 2011
Partial Differential Equations, Differential Equations and the Eigenvalue/Eigenfunction problem
Suppose we are trying to solve the following partial differential equation:
\frac{\partial \psi}{\partial t} = 3 \frac{\partial ^2 \psi}{\partial x^2} subject to boundary conditions:
\psi(0,t) = \psi(\pi,t) = 0, \psi(x,0) = x(x-\pi)
It turns out that we will be using techniques from ordinary differential equations and concepts from linear algebra; these might be confusing at first.
The first thing to note is that this differential equation (the so-called heat equation) is known to satisfy a “uniqueness property” in that if one obtains a solution that meets the boundary criteria, the solution is unique. Hence we can attempt to find a solution in any way we choose; if we find it, we don’t have to wonder if there is another one lurking out there.
So one technique that is often useful is to try: let \psi = XT where X is a function of x alone and T is a function of t alone. Then when we substitute into the partial differential equation we obtain:
XT^{\prime} = 3X^{\prime\prime}T which leads to \frac{T^{\prime}}{T} = 3\frac{X^{\prime\prime}}{X}
The next step is to note that the left hand side does NOT depend on x ; it is a function of t alone. The right hand side does not depend on t as it is a function of x alone. But the two sides are equal; hence neither side can depend on x or t ; they must be constant.
Hence we have \frac{T^{\prime}}{T} = 3\frac{X^{\prime\prime}}{X} = \lambda
So far, so good. But then you are told that \lambda is an eigenvalue. What is that about?
The thing to notice is that T^{\prime} - \lambda T = 0 and X^{\prime\prime} - \frac{\lambda}{3}X = 0
First, the equation in T can be written as D(T) = \lambda T with the operator D denoting the first derivative. Then the second can be written as D^2(X) = \frac{\lambda}{3} X where D^2 denotes the second derivative operator. Recall from linear algebra that these operators meet the requirements for a linear transformation if the vector space is the set of all functions that are “differentiable enough”. So what we are doing, in effect, is trying to find eigenvectors for these operators.
So in this sense, solving a homogeneous differential equation is really solving an eigenvector problem; often this is termed the “eigenfunction” problem.
Note that the differential equations are not difficult to solve:
T = a exp(\lambda t) , X = b exp(\sqrt{\frac{\lambda}{3}} x) + c exp(-\sqrt{\frac{\lambda}{3}} x) ; the real valued form of the equation in x depends on whether \lambda is positive, zero or negative.
But the point is that we are merely solving a constant coefficient differential equation just as we did in our elementary differential equations course with one important difference: we don’t know what the constant (the eigenvalue) is.
Now if we turn to the boundary conditions on x we see that a solution of the form A e^{bx} + Be^{-bx} cannot meet the zero-at-the-boundaries conditions; we can rule out the \lambda = 0 case as well.
Hence we know that \lambda is negative and we get the X = a cos(\sqrt{\frac{-\lambda}{3}} x) + b sin(\sqrt{\frac{-\lambda}{3}} x) solution and then the T = d e^{\lambda t } solution.
But now we notice that these solutions have a \lambda in them; this is what makes these ordinary differential equations into an “eigenvalue/eigenfunction” problem.
So what values of \lambda will work? We know it is negative so we say \lambda = -w^2 . If we look at the end conditions and note that T is never zero, we see that the cosine term must vanish (a = 0 ) and we can ensure that \frac{w}{\sqrt{3}}\pi = k \pi which implies that w^2 = 3k^2 . So we get a whole host of functions: \psi_k = a_k e^{-3k^2 t}sin(kx) .
Now we still need to meet the last condition (set at t = 0 ) and that is where Fourier analysis comes in. Because the equation was linear, we can add the solutions and get another solution; hence the coefficients are obtained by expanding the initial condition x(x-\pi) in terms of sines (a Fourier sine series).
The coefficients are b_k = \frac{2}{\pi} \int^{\pi}_{0} x(x-\pi) sin(kx) dx and the solution is:
\psi(x,t) = \sum_{k=1}^{\infty} e^{-3k^2 t} b_k sin(kx)
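A short numerical sketch (my own check, not part of the original post) computes the sine-series coefficients and confirms that the series reproduces the initial condition x(x - \pi) at t = 0:

import numpy as np
from scipy.integrate import quad

def b(k):
    # Fourier sine coefficient of x(x - pi) on [0, pi]
    val, _ = quad(lambda x: x * (x - np.pi) * np.sin(k * x), 0, np.pi)
    return 2.0 / np.pi * val

def psi(x, t, terms=50):
    return sum(np.exp(-3 * k**2 * t) * b(k) * np.sin(k * x) for k in range(1, terms + 1))

x = np.linspace(0, np.pi, 5)
print(psi(x, 0.0))          # should be close to x*(x - pi)
print(x * (x - np.pi))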
Quantum Mechanics and Undergraduate Mathematics XV: sample problem for stationary states
I feel a bit guilty as I haven’t gone over an example of how one might work out a problem. So here goes:
Suppose our potential function is some sort of energy well: V(x) = 0 for 0 < x < 1 and V(x) = \infty elsewhere.
Note: I am too lazy to keep writing \hbar so I am going with h for now.
So, we have the two Schrödinger equations with \psi being the state vector and \eta_k being one of the stationary states:
I. -\frac{h^2}{2m} \frac{\partial^2}{\partial x^2}\eta_k + V(x) \eta_k = ih\frac{\partial}{\partial t} \eta_k
II. -\frac{h^2}{2m} \frac{\partial^2}{\partial x^2}\eta_k + V(x) \eta_k = e_k \eta_k
Where e_k are the eigenvalues for \eta_k
Now apply the potential for 0 < x < 1 and the equations become:
I. -\frac{h^2}{2m} \frac{\partial^2}{\partial x^2}\eta_k = ih\frac{\partial}{\partial t} \eta_k
II. -\frac{h^2}{2m} \frac{\partial^2}{\partial x^2}\eta_k = e_k \eta_k
Yes, I know that equation II is a consequence of equation I.
Now we use a fact from partial differential equations: the first equation is really a form of the “diffusion” or “heat” equation; it has been shown that once one takes boundary conditions into account, the equation possesses a unique solution. Hence if we find a solution by any means necessary, we don’t have to worry about other solutions being out there.
So attempt a solution of the form \eta_k = X_k T_k where the first factor is a function of x alone and the second is of t alone.
Now put into the second equation:
-\frac{h^2}{2m} X^{\prime\prime}_k T_k = e_k X_k T_k
Now assume T_k \ne 0 and divide both sides by T_k and do a little algebra to obtain:
X^{\prime\prime}_k +\frac{2m e_k}{h^2}X_k = 0
e_k are the eigenvalues for the stationary states; assume that these are positive and we obtain:
X = a_k cos(\frac{\sqrt{2m e_k}}{h} x) + b_k sin(\frac{\sqrt{2m e_k}}{h} x)
from our knowledge of elementary differential equations.
Now for x = 0 we have X_k(0) = a_k . The potential is infinite outside the well, so the wave function must vanish at x = 0 ; hence a_k = 0 . Now X_k(x) = b_k sin(\frac{\sqrt{2m e_k}}{h} x)
We want zero at x = 1 so \frac{\sqrt{2m e_k}}{h} = k\pi which means e_k = \frac{(k \pi h)^2}{2m} .
Now let’s look at the first Schrödinger equation:
-\frac{h^2}{2m}X_k^{\prime\prime} T_k = ihT_k^{\prime}X_k
This gives the equation: \frac{X_k^{\prime\prime}}{X_k} = -\frac{ 2m i}{h} \frac{T_k^{\prime}}{T_k}
Note: in partial differential equations, it is customary to note that the left side of the equation is a function of x alone and therefore independent of t and that the right hand side is a function of t alone and therefore independent of x ; since these sides are equal they must be independent of both t and x and therefore constant. But in our case, we already know that \frac{X_k^{\prime\prime}}{X_k} = -2m\frac{e_k}{h^2} . So our equation involving T becomes \frac{T_k^{\prime}}{T_k} = -\frac{h}{2mi}\left(-2m\frac{e_k}{h^2}\right) = \frac{e_k}{ih} = -i\frac{e_k}{h} so our differential equation becomes
T_k^{\prime} = -i \frac{e_k}{h} T_k which has the solution T_k = c_k exp(-i \frac{e_k}{h} t)
So our solution is \eta_k = d_k sin(\frac{\sqrt{2m e_k}}{h} x) exp(-i \frac{e_k}{h} t) where e_k = \frac{(k \pi h)^2}{2m} .
This becomes \eta_k = d_k sin(k\pi x) exp(-i (k \pi)^2 \frac{\hbar}{2m} t) which, written in rectangular complex coordinates, is d_k sin(k\pi x) (cos((k \pi)^2 \frac{\hbar}{2m} t) - i sin((k \pi)^2 \frac{\hbar}{2m} t))
Here are some graphs: we use m = \frac{\hbar}{2} and plot for k = 1, k = 3 and t \in {0, .1, .2, .5} . The plot is of the real part of the stationary state vector.
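A minimal matplotlib sketch of that kind of figure (my own reconstruction, assuming \hbar = 1 and m = \hbar/2 so the phase reduces to (k\pi)^2 t, and taking d_k = 1) is:

import numpy as np
import matplotlib.pyplot as plt

def eta_real(x, t, k, hbar=1.0, m=0.5):
    """Real part of d_k sin(k pi x) exp(-i (k pi)^2 hbar t / (2m)) with d_k = 1."""
    omega = (k * np.pi) ** 2 * hbar / (2 * m)
    return np.sin(k * np.pi * x) * np.cos(omega * t)

x = np.linspace(0, 1, 400)
for k in (1, 3):
    for t in (0, 0.1, 0.2, 0.5):
        plt.plot(x, eta_real(x, t, k), label=f"k={k}, t={t}")
plt.legend()
plt.xlabel("x")
plt.show()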
August 6, 2011
MathFest Day 2 (2011: Lexington, KY)
Note: this is one procedure that was being modeled:
0 or finite if the genus is zero
Note: the set of rational points has a group structure.
Hence this is the form that is studied.
note: this curve is symmetric about the x axis.
Smaller talks
I enjoyed many of the short talks. Of note:
March 6, 2010
Why We Shouldn’t Take Uniqueness Theorems for Granted (Differential Equations)
Filed under: differential equations, partial differential equations, uniqueness of solution — collegemathteaching @ 11:07 pm
I made up this sheet for my students who are studying partial differential equations for the first time:
Remember all of those ”existence and uniqueness theorems” from ordinary differential equations; that is theorems like: “Given
y^{\prime }=f(t,y) where f is continuous on some rectangle
R=\{a<t<b,c<y<d\} and (t_{0},y_{0})\in R, then we are guaranteed at least one solution where y(t_{0})=y_{0}. Furthermore, if \frac{\partial f}{\partial y} is continuous in R then the solution is unique”.
Or, you learned that solutions to
y^{\prime \prime }+p(t)y^{\prime}+q(t)y=f(t), y(t_{0})=y_{0}, \ y^{\prime}(t_{0})=y_{1} existed and were unique so long as p,q, and f were continuous at t_{0}.
Well, things are very different in the world of partial differential equations.
We learned that u(x,y)=x^{2}+xy+y^{2} is a solution to xu_{x}+yu_{y}=2u
(this is an easy exercise)
But, one can attempt a solution of the form u(x,y)=f(x)g(y).
This separation of variables technique actually works; it is an exercise to see that u(x,y)=x^{r}y^{2-r} is also a solution for all real r!!!
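A two-line sympy check (a sketch, not part of the original handout) confirms the whole family of solutions:

import sympy as sp

x, y, r = sp.symbols('x y r', positive=True)
u = x**r * y**(2 - r)

# verify that x*u_x + y*u_y - 2*u vanishes identically, for every r
residual = sp.simplify(x * sp.diff(u, x) + y * sp.diff(u, y) - 2 * u)
print(residual)  # 0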
Note that if we wanted to meet some sort of initial condition, say, u(1,1)=3, then u(x,y)=x^{2}+xy+y^{2}, and u(x,y)=3x^{r}y^{2-r} provide an infinite number of solutions to this problem. Note that this is a simple, linear partial differential equation!
Hence, to make any headway at all, we need to restrict ourselves to studying very specific partial differential equations in situations for which we do have some uniqueness theorems.
|
3267429ffffeee03 | Borel functional calculus
In functional analysis, a branch of mathematics, the Borel functional calculus is a functional calculus (that is, an assignment of operators from commutative algebras to functions defined on their spectra), which has particularly broad scope.[1][2] Thus for instance if T is an operator, applying the squaring function s ↦ s^2 to T yields the operator T^2. Using the functional calculus for larger classes of functions, we can for example define rigorously the "square root" of the (negative) Laplacian operator −Δ or the exponential e^{it√(−Δ)}.
The 'scope' here means the kind of function of an operator which is allowed. The Borel functional calculus is more general than the continuous functional calculus, and has a different focus from the holomorphic functional calculus.
More precisely, the Borel functional calculus allows us to apply an arbitrary Borel function to a self-adjoint operator, in a way which generalizes applying a polynomial function.
If T is a self-adjoint operator on a finite-dimensional inner product space H, then H has an orthonormal basis {e_1, ..., e_ℓ} consisting of eigenvectors of T, that is
T e_k = λ_k e_k, for 1 ≤ k ≤ ℓ.
Thus, for any positive integer n,
T^n e_k = (λ_k)^n e_k.
If only polynomials in T are considered, then one arrives at the holomorphic functional calculus. Are more general functions of T possible? Yes. Given a Borel function h, one can define an operator h(T) by specifying its behavior on the basis:
h(T) e_k = h(λ_k) e_k.
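In the finite-dimensional case this recipe is easy to carry out numerically; the sketch below (an illustration, not from the article) applies a Borel function to a Hermitian matrix by acting on its eigenvalues.

import numpy as np

def borel_apply(h, T):
    """Apply a function h to a Hermitian matrix T via its spectral decomposition."""
    eigvals, U = np.linalg.eigh(T)           # T = U diag(eigvals) U*
    return U @ np.diag(h(eigvals)) @ U.conj().T

T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# the "square root" of T, and the indicator function of the set {x : x > 2}
sqrt_T = borel_apply(np.sqrt, T)
proj   = borel_apply(lambda x: (x > 2).astype(float), T)

print(np.allclose(sqrt_T @ sqrt_T, T))   # True
print(proj)                              # spectral projection onto the eigenvalue 3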
In general, any self-adjoint operator T is unitarily equivalent to a multiplication operator; this means that for many purposes, T can be considered as an operator
[T ψ](x) = f(x) ψ(x)
acting on L2 of some measure space. The domain of T consists of those functions ψ for which the above expression is in L2. In this case, one can define analogously
[h(T) ψ](x) = (h ∘ f)(x) ψ(x).
For many technical purposes, the preceding formulation is good enough. However, it is desirable to formulate the functional calculus in a way in which it is clear that it does not depend on the particular representation of T as a multiplication operator. This we do in the next section.
The bounded functional calculus
Formally, the bounded Borel functional calculus of a self-adjoint operator T on a Hilbert space H is a mapping
π_T : f ↦ f(T)
defined on the space of bounded complex-valued Borel functions f on the real line, such that the following conditions hold
• πT is an involution-preserving and unit-preserving homomorphism from the ring of complex-valued bounded measurable functions on R into the algebra of bounded operators on H.
• If ξ is an element of H, then
ν_ξ : E ↦ ⟨π_T(1_E) ξ, ξ⟩
is a countably additive measure on the Borel sets E of R. In the above formula 1_E denotes the indicator function of E. These measures ν_ξ are called the spectral measures of T.
• If η denotes the mapping z ↦ z on C, then:
π_T((η + i)^(−1)) = (T + i)^(−1).
Theorem. Any self-adjoint operator T has a unique Borel functional calculus.
This defines the functional calculus for bounded functions applied to possibly unbounded self-adjoint operators. Using the bounded functional calculus, one can prove part of the Stone's theorem on one-parameter unitary groups:
Theorem. If A is a self-adjoint operator, then
U_t = e^{itA}, t ∈ R,
is a 1-parameter strongly continuous unitary group whose infinitesimal generator is iA.
As an application, we consider the Schrödinger equation, or equivalently, the dynamics of a quantum mechanical system. In non-relativistic quantum mechanics, the Hamiltonian operator H models the total energy observable of a quantum mechanical system S. The unitary group generated by iH corresponds to the time evolution of S.
We can also use the Borel functional calculus to abstractly solve some linear initial value problems such as the heat equation, or Maxwell's equations.
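For a finite-dimensional Hermitian A, the unitary group U_t = e^{itA} of Stone's theorem can be computed directly with the matrix exponential; the sketch below (an illustration, not part of the article) checks unitarity, the group law, and the generator numerically.

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # a self-adjoint (Hermitian) matrix

def U(t):
    return expm(1j * t * A)     # U_t = exp(itA)

t = 0.7
Ut = U(t)
print(np.allclose(Ut @ Ut.conj().T, np.eye(2)))   # unitary: True
print(np.allclose(U(t) @ U(0.3), U(t + 0.3)))     # group law: True

# the infinitesimal generator is iA: (U(eps) - U(0)) / eps is close to iA for small eps
eps = 1e-6
print(np.allclose((U(eps) - U(0)) / eps, 1j * A, atol=1e-5))  # True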
Existence of a functional calculus
The existence of a mapping with the properties of a functional calculus requires proof. For the case of a bounded self-adjoint operator T, the existence of a Borel functional calculus can be shown in an elementary way as follows:
First pass from polynomial to continuous functional calculus by using the Stone-Weierstrass theorem. The crucial fact here is that, for a bounded self-adjoint operator T and a polynomial p,
‖p(T)‖ = sup over λ in σ(T) of |p(λ)|.
Consequently, the mapping
p ↦ p(T)
is an isometry and a densely defined homomorphism on the ring of polynomial functions. Extending by continuity defines f(T) for a continuous function f on the spectrum of T. The Riesz-Markov theorem then allows us to pass from integration on continuous functions to spectral measures, and this is the Borel functional calculus.
Alternatively, the continuous calculus can be obtained via the Gelfand transform, in the context of commutative Banach algebras. Extending to measurable functions is achieved by applying Riesz-Markov, as above. In this formulation, T can be a normal operator.
Given an operator T, the range of the continuous functional calculus h ↦ h(T) is the (abelian) C*-algebra C(T) generated by T. The Borel functional calculus has a larger range, that is the closure of C(T) in the weak operator topology, a (still abelian) von Neumann algebra.
The general functional calculus
We can also define the functional calculus for not necessarily bounded Borel functions h; the result is an operator which in general fails to be bounded. Using the multiplication by a function f model of a self-adjoint operator given by the spectral theorem, this is multiplication by the composition of h with f.
Theorem. Let T be a self-adjoint operator on H, h a real-valued Borel function on R. There is a unique operator S such that
The operator S of the previous theorem is denoted h(T).
More generally, a Borel functional calculus also exists for (bounded) normal operators.
Resolution of the identity
Let T be a self-adjoint operator. If E is a Borel subset of R, and 1_E is the indicator function of E, then 1_E(T) is a self-adjoint projection on H. Then the mapping
Ω : E ↦ 1_E(T)
is a projection-valued measure called the resolution of the identity for the self-adjoint operator T. The measure of R with respect to Ω is the identity operator on H. In other words, the identity operator can be expressed as the spectral integral I = Ω(R). Sometimes the term "resolution of the identity" is also used to describe this representation of the identity operator as a spectral integral.
In the case of a discrete measure (in particular, when H is finite-dimensional), the identity can be written as
I = Σ_i |i⟩⟨i|
in the Dirac notation, where each |i⟩ is a normalized eigenvector of T. The set {|i⟩} is an orthonormal basis of H.
In physics literature, using the above as a heuristic, one passes to the case when the spectral measure is no longer discrete and writes the resolution of the identity as
I = ∫ dλ |λ⟩⟨λ|
and speaks of a "continuous basis", or "continuum of basis states". Mathematically, unless rigorous justifications are given, this expression is purely formal.
1. Kadison, Richard V.; Ringrose, John R. (1997). Fundamentals of the Theory of Operator Algebras: Vol 1. Amer Mathematical Society. ISBN 0-8218-0819-2.
2. Reed, Michael; Simon, Barry (1981). Methods of Modern Mathematical Physics. Academic Press. ISBN 0-12-585050-6. |
a8d83c41cd011621 | Solving the Measurement Problem and then Steppin’ Out over the Line Riding the Rarest Italian: Crossing the Streams to Retrieve Stable Bioactivity in Majorana Bound States of Dialyzed Human Platelet Lysates
Mark Roedersheimer*
Research, GITE/Burn Surgery, UC Denver, USA
© Mark Roedersheimer; Licensee Bentham Open.
* Address correspondence to this author at the Research, GITE/Burn Surgery, UC Denver, USA; E-mail:
Exhaustive dialysis (ED) of lysed human platelets against dilute HCl yields stable angiogenic activity. Dialysis against a constrained external volume, with subsequent relaxation of the separation upon opening the dialysis bag, produces material able to maintain phenotypes and viability of human cells in culture better than ED material. Significant graded changes in MTT viability measurement tracked with external volume. The presence of elements smaller than the MW cutoff, capable of setting up cycling currents initiated by oriented flow of HCl across the membrane, suggests that maturation of bioactivity occurred through establishment of a novel type of geometric phase. These information-rich bound states fit recent descriptions of topological order and Majorana fermions, suggesting relevance in testing Penrose and Hameroff’s theory of Orchestrated Objective Reduction, under conditions more general, and on finer scales, than those dependent on tubulin protein. The Berry curvature appears to be a good tool for building a general field theory of physiologic stress dependent on the quantum Hall effect. A new form of geometric phase, and an associated “geometric” quantum Hall effect underlying memory retrieval, dependent on the rate of path traversal and reduction from more than two initial field influences is described.
Keywords: Acidity, Alzheimer’s disease, caveolin-1, cystatin c, ebola, geometric phase, hemodialysis, majorana fermion, metallic hydrogen, platelet lysate, quantum hall effect.
To every differentiable symmetry generated by local actions, there corresponds a conserved current. -concise statement of Noether’s Theorem, from the wiki page
Humans can receive information contained in light and sound waves, by pathways that do not require the eyes or ears. The tendency to simplify information received, exemplified in the evolution of eyes and ears, to support efficient working models, underlies basic cognitive functions allowing a focus of attention on the most critical stimuli. Assuming that these models must be the same across members of a species misses the larger point that survival requires caution in handling information. If a chess player has been using a certain set of moves advantageously, and an opponent learns to read and then counter them effectively, knowledge of the maneuvers prior to this point of transition in the game has little relevance to future dynamics. A clear example that makes this point is the emergence of antibiotic resistance. The question is, how does an organism determine the most critical stimuli to focus on, when the ecosystem possesses an effectively unbounded number of dynamic influences, and only a few must be grasped and acted on rapidly, such as the molecular signatures of a deadly pathogen about to establish an infection?
The thesis of this paper is that orientation of attention, and consequent retrieval of memory necessary to address critical field influences (effectively, stresses) responsible for the orientation, can be seen to occur through establishment of a purely geometric phase, dependent initially on more than two field inputs, and also on the rate of path traversal, distinguishing it from the Pancharatnam-Berry phase [1]. While individual macromolecules, such as DNA, or proteins, such as tubulin, or small factors like ATP or H+, clearly play a role in these processes, the theory views these as only discrete aspects of deeper, underlying wave patterns fitting the time-independent Schrödinger equation for bound states. A pure “wave” view allows resolution of problems associated with breakdown of “causality,” typical of particle representations, if the associated paradoxes are just seen to reflect inadequate access to information, a natural aspect of the filtering done by living systems because of limited resources. This is a way of asserting the validity of “hidden variable” views of quantum mechanics, in line with those of Einstein, Schrödinger, and de Broglie-Bohm. In this paper the wave function is interpreted as a description of real, physical (not probabilistic) processes that contain memory in living systems. The only “indeterminacy” that derives from this view is in development of perceptual prowess, through valid assays, to access desired information.
1.1. A Platelet Lysate Model of Stress-induced Bioactivity Control
Platelets are anucleate cells of the blood essential to wound healing that contain diverse molecules expressed in the course of stress responses [2]. Clinicians typically worry if counts get low, because of the risk of internal hemorrhage. Under conditions of adequate platelet count, coagulation at sites of tearing in the endothelial layer resulting from normal stresses, where platelets would localize in clot plugs, presumably provides sufficient activity to restore vessel wall integrity. Considered as a dynamically porous sieve, the clot can allow some elements released to remain close to the wound surface, and others to diffuse out into the blood stream. Thus, critical aspects of the net bioactivity created near the vascular wall may result from transport properties tuned by interactions with the clot matrix, defined, to a first approximation, by molecular weight or charge, in direct analogy with principles of gel chromatography. The dynamic aspect of these interactions suggests mechanisms for storage and retrieval of memory, present non-locally in the tissues of the organism, amounting to latent images of responses that exist to be retrieved in the presence of stressors.
If sufficient symmetry can be established using field interactions evolved against, and into the organism, that effect retrieval of memory needed to deal with stressors, such as angiogenic or antimicrobial activity from platelets, or collagen polymer formation, and relevant changes in these activities are measurable in endpoint properties of isolates made by application of the same field influences, then control and honing of the activity should be possible using Monte Carlo modeling techniques, directly analogous to methods for controlling neutron scattering interactions in nuclear reactors. Success requires a valid endpoint metric for the activity, and could be a measure of Ebola virulence suppression, or any pathogen, in culture. “Valid” means the measure tracks to higher-level responses, such as those observed in humans infected with the pathogen, AND the property sought actually exists in the mixture of factors that are being assayed.
Generally, systems of evolved biological complexity cannot be strictly understood simply in terms of variables such as energy, time, position, momentum, etc., because they yield little insight into the critical emergent properties that account for survival.
The state of an organism in the course of successfully defeating a viral pathogen could yield clues to field influences that are required to shape the necessary effect, such as a temperature profile. This would start as a retrieval of general “virus fighting” memory evolved into the organism, in the case of viruses one has never encountered. The final annealing of the factors into a state highly specific to the threat would occur in vivo when directly influenced by the potentially unlimited number of field effects resulting from the pathogen’s assault patterns. This methodology could also reveal properties of resistance to a virus after surviving exposure, leading to better methods of processing the platelet extracts of those unexposed into improved states for treating the virus. This can be seen as implying a kind of two-key, or “public-key,” encryption mechanism in responses to stress, since relevant influences clearly extend beyond those solely inside the organism, including gravity, light, gases, foods and liquids, or the opportunity to get good sleep covered by a warm blanket.
We have previously reported that platelet lysates dialyzed to exhaustion against 10 mM HCl maintain stability at 4°C and display an enhancement of dose-dependent angiogenic activity in vivo relative to undialyzed fractions [3, 4]. Removal of low MW elements, and diffusion of dilute acid into the space retaining higher molecular weight factors suggest the relevance of the model to context specific development of bioactivity at sites of wounding, or tumor stroma. Another dialysis study of ours documented effects on the final form of collagen gel assembly occurring under the conjoined, oriented influences of gravity and buffer ion diffusion into the acidified soluble phase across the membrane [5]. The finding of a distinct transition in morphology 3.4 mm away from the membrane in 1-g might be seen as evidence of a Pancharatnam-Berry phase, as it was lacking in gels assembled in microgravity. Because collagen clearly evolved as a response to gravity, this system suggests a way to build highly sensitive interferometers for gravity waves, as it can be expected that the point of transition will be sensitive to changes in collagen concentration, temperature, MW cutoff of the membrane, MW of buffer ions, and container geometry. Overall, these findings suggest that simple separations can regulate emergence of critical activities nascent in tissues such as platelets, or those rich in collagen, including, but certainly not limited to, bone, cartilage, and tendon, revealing the role of self-similarity in evolved stress responses.
Platelets contain bone morphogenetic proteins (BMPs) [6], members of the TGFbeta superfamily that track to the origins of multicellular life forms [7]. Variation in BMP levels between donors, as well as pH dependence of release, has been reported [8]. Activation in vivo by denaturing conditions including extremes of pH [9] suggests their relevance to stress response. The designation “body morphogenetic proteins” has been proposed because of their involvement with so many developmental processes [10]. A dramatic example of the conserved nature of these factors is the demonstration that decapentaplegic, a Drosophila homologue of mammalian BMP-2/-4, will induce endochondral bone formation in a mammal [11]. Considering that cells in vivo and in culture are also widely known to be highly sensitive to pH, the better focus for control of these well-documented activities should now be acidification, rather than the proteins themselves, as evolution will have recognized the expediency, and energy savings, of adjusting flows of protons, rather than the expression of large protein molecules, at times of serious stress.
1.2. The Case for Superconductivity, Metallic Hydrogen, and Majorana Fermions in Living Systems
It is noteworthy that transition temperatures of the copper oxides, the first “high temperature” superconductors discovered, and the later iron-based composites, straddle the boiling points of liquid oxygen and nitrogen, far below the sublimation point of carbon dioxide [12]. A shift in transport of total available electrons through iron, relative to copper, at lower temperatures, along with supportive phase transitions of O2 and N2 in a complex aqueous medium cyclically exposed to sunlight, suggests a means of thermoregulation by the environment of early life forms, through direct activation of iron-dependent energy production mechanisms. The fact that copper-dependent amine oxidases (CDAOs) eliminate diverse biogenic amines and are induced in states of neuronal injury [13] suggests the foundational role of copper-based electron transport in regulation of drive, needed at higher temperature, to address the imminent “eat, or be eaten” reality of each new dawn. The known hygroscopic character of copper oxides suggests the limited relevance of methods to measure the superconductivity of such materials, if they cannot do so in the presence of water [relevance noted on website, accessed online March 26, 2015].
Living tissues can easily relax restrictions on spin, charge, and geometry that hinder creation of theoretical superconducting states in solid metals [14], such as conjoined forms of two electrons (p-waves), since they possess freely mobile proton and divalent cation phases, most notably Ca++, that could stabilize these “exotic” charge −2, spin-1 electron states. Coalescence of protons and electrons into similarly “exotic” forms is also conceivable given their proximity in overlapping mobile phases, and presence of many negatively charged groups on macromolecules supportive of low velocity, and therefore long de Broglie wavelength proton states, in vivo. The value of these types of states could have driven evolution of controls in living systems for regulation of energy metabolism through diffuse ordering effects, manifesting as “superatoms,” in leaves responding to sunlight, through “optical molasses”-type influences, such as used to create Bose-Einstein condensates [15].
The initial product of water-splitting reactions from sunlight, with highest value to early life forms, in a setting of low oxygen, could have been metallic hydrogen, based on a BCS theory model of a stable phase at 95°C possessing properties of superconductivity [16]. Given an extraordinary energy density, it would be the ideal energy currency in hot springs near thermal vents, eventually driven out by the accumulation of oxygen. Beneficial effects of molecular hydrogen documented in a large number of diseases, with no clear dose-dependence [17], in line with a hormetic response, could suggest a secondary role in sensing, and clearance of the metallic form, if low oxygen states characteristic of metabolic dysfunction support accumulation of toxic levels in disease states.
Limitation of metallic phase formation by access to nucleons from water hydrogen atoms could underlie toxicity of heavy water in Eukaryotes [18]. Accordingly, increased toxicity in malignant cells [18] and near absence of toxicity in prokaryotes [18], may suggest the role of metallic hydrogen in diverse, chronic inflammatory states of oxygen dependent organisms susceptible to infections by prokaryotes. Removal of excess metallic hydrogen in a deep cleansing breath, reflected in levels of molecular hydrogen in the alveoli, would be difficult to confirm with current analytical methods. Generation of metallic hydrogen could control state shifts in plants, if diffusion through the outer membrane supports immediate conversion into the molecular form.
An obviating study documenting lack of significant physiological response, most prominently mtDNA/nDNA, in subjects aged 37-64 participating in a 6-week study of exercise and protein intake who consumed D2O for labeling purposes, showed a strikingly consistent VO2max bump of approximately 10% [19], suggesting a direct chemico-physical response in this setting. This can be explained if metallic phase formation is constrained by access to nucleons (protons and neutrons) from water that is relieved by consuming D2O, potentially obviating any need for a physiologic adaptation to the exercise regimen. It would be interesting to know if percent deuteration achieved in each subject, reported only as the group average, 1.5-2.5% [19, Fig. 3], tracked with VO2max, as a tight correlation would be expected if enhanced metallic phase formation underlies the effect. If phase formation involves beta decay, this may account for the natural role of aluminum, as it is a good absorber of beta particles. Accordingly, aluminum may regulate transition of nucleons into the metallic phase, explaining toxicity at high levels.
The first Quantum corrals were made of iron adatoms on a copper surface allowing the mirage effect to be generated by placing cobalt in the corral [20].
It is noteworthy that cytochrome c oxidase possesses two iron and two copper centers [21]. Amyloid precursor protein has been shown to possess amine oxidase activity inhibited by zinc [22]. Copper can support self-assembly of Myelin basic protein [23] and large conformational changes in Alpha-synuclein upon binding [24]. The gentle production of H2O2 from PrP binding copper in a high occupancy mode has been interpreted to suggest involvement in cellular signaling mechanisms [25]. A theory suggesting the mirage effect only requires “an arrangement of adatoms or other defects that lead to a buildup of surface state electron amplitude at two locations within the coherence length of the electron” [26, p. 14] gives a clue to the role of Fe and Cu binding in these metalloproteins that are capable of assuming variable conformations. Accordingly, [O2] oscillation can determine configurations supporting a mirage effect that controls gating of electron flow toward oxidation of potent amines, or iron-based energy metabolism, as needed outside neurons.
CoCl2 can turn on erythropoietin (EPO) production under normoxic conditions [27]. In cells depleted of mitochondrial DNA in which hypoxia will not generate ROS, or induce expression of EPO, application of CoCl2 will increase ROS generation and maintain viability to induce expression of EPO [28]. Methionine aminopeptidase 2, a target of anti-angiogenic therapies, is on a short list of known cobaltoproteins [29]. Based on these reports, and the ability of cobalt to disperse electron probability amplitude constrained in a quantum corral formed between copper and iron [20], it appears to be capable of controlling electron transport gating mechanisms established before oxygen was abundant on the planet, but still essential during states of low oxygen, typical of those driving angiogenic responses. The creation of a NaCoO2 compound with superconducting properties dependent on the presence of water [30] is consistent with a capacity of cobalt to regulate superconducting states under physiologic conditions.
An investigation of the utility of metal-organic frameworks (MOFs) for deuterium isotope separation established the superiority of a Zn/Cl MOF in terms of high selectivity, and support of quantum sieving up to 60 K, whereas a Co/Cl MOF was remarkable in supporting a small pore structure, low selectivity, but high binding capacity at 30 K [31]. The author suggests that the Co based material could be used advantageously in an initial step, followed by a second step with a material of higher selectivity [31, p. 134]. Maintenance of higher selectivity of the Zn/Cl MOF up to 50 K, compared to Co/Cl MOF at 30K [31, Fig (5). 14, p. 133], suggests a basis for a gating mechanism between metallic and molecular hydrogen phases dependent on differential deuterium concentrating effects of Co and Zn in a pre-oxygenic ecosystem. This is directly parallel to that proposed earlier for Fe and Cu in an oxygen-dependent ecosystem, through exploitation of temperature-dependent electrical superconductivity controlled by cyclical exposure to sunlight.
Living organisms should be seen as the ideal setting in which states of “non-trivial emergent excitation,” characteristic of a Majorana Fermion [14], may be found along with evidence of metallic hydrogen. This would amount to a complex waveform with the highest frequency components controlled by interconversion of water nucleons and electrons between states of molecular and metallic hydrogen, potentially accounting for enormous information, and energy storage capacity. The mode has been described as having the potential to support “fault-tolerant computation” [14]. This author notes that while deducing existence of a Majorana mode tells you nothing about the encoded qubits, retrieval of the information should occur if two of them are brought into contact [14, p. 26].
This report presents results of an experiment in which this appears to have occurred in the course of dialyzing platelet lysates against dilute acid, within a constrained volume, followed by fusion of the separated phases when one end of the bag was opened while still submerged in the larger volume. Analysis of these final states revealed a graded change in quantitative indices of regenerative activity in human cell culture suggesting they represent stable Majorana bound modes.
Three units of expired platelets (A, B, C) were obtained by courier mid-afternoon of the day following midnight expiration. These were aliquoted within a laminar flow hood into 50 ml conicals and centrifuged on a benchtop centrifuge at low speeds (2000, 2500, 2500 rpm) for 20, 25, and 20 minutes, respectively, to pellet the platelets. The serum was pipetted off each pellet with care to avoid disruption. A total of approx 90 mls of sterile water was distributed equally on top of the pellets, for each unit. The batches were placed in a benchtop bath sonicator for 60, 60, or 40 secs, respectively. Following a second centrifugation at 3000 rpm, for 25 minutes, the supernatants were collected and re-pooled. At this point material from each unit was divided and either exhaustively dialyzed (ED), by putting 40 ml of the isolate in a 6-8 kD dialysis bag and carrying through two 1 L exchanges of 10 mM HCl over a 2-day period, or taken through a “constrained” dialysis (CD) process. Each involved overnight refrigeration with gentle agitation or stirring. For CD the 40 ml isolate was placed into a 6-8 kD dialysis membrane against either 300, 350, or 400 ml of 10 mM HCl. The following day the material inside the bag, and in the larger dialysis solution, were allowed to gradually reintegrate by opening one of the clips and placing the bag back in the container, with removal of the bag an hour, to a few hours, later. Test strips were used to assess pH for each isolate. Materials were sterile filtered through standard low protein binding membranes, and maintained stability (i.e. showed no overt signs of precipitation) at 4°C during the test period. Base media was DMEM with 4.5 g/L glucose, L-glutamine, and 110 mg/L sodium pyruvate.
Normal, human dermal fibroblasts were plated at 60,000/well in a 96 well plate in a volume of 300 microliters. After one day, base media was changed to include the test media. Feeding regimen was Mon, Wed, and Fri. Standard mammalian cell culture incubation conditions were used. An MTT assay (Cayman Chemical Co, Ann Arbor, MI item 10009365) was run at 7 days. Visual assessment indicated approximately identical cell number as on the day of plating.
Ranked evaluation of the following was done from the images: 1) the relative number of “-cyte” to “-blast” forms on a total 4-cross scale (distributed across “f-cytes” or “f-blasts” columns); 2) the degree of differentiation of the blast forms overall, in terms of degrees of branching, distinct nuclei and prominent rough ER, on a 2-cross scale; 3) thickness of diffuse matrix, on a 3-cross scale; 4) amount of aligned fibrils, on a 3-cross scale.
Since CD involves dilution by solution outside the bag, ED material was diluted comparably (10x) using vehicle (10 mM HCl) prior to analysis. An accounting of mass loss was done: a report of 2.6 +/- 0.6 pg/platelet [32] and 3×10^11 platelets in a typical unit yields a total protein content of 780 mg/unit. A typical finding of 4-7 (ave. 5.5) mg/ml by Bradford assay from a single unit in 70-90 mls (ave. 80) yields (5.5 × 80 ml =) 440 mg/unit. Pellet size after the second, higher speed spin is typically more than half the original pellet prior to extraction, so a large fraction (on the order of > 85-90%) of the total soluble protein appears to be accounted for in the ED sample. Percent additions were reported, as protein mass has been a weak predictor of ED angiogenic activity in the past.
One-way ANOVA was done on ED samples to evaluate resulting inter-unit variation, and on CD samples to evaluate the effect of external volume changes. T-tests were done to evaluate differences between serum, ED and CD groups.
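The statistics described here (one-way ANOVA across units and two-tailed t-tests between groups) can be reproduced with standard tools; the sketch below is illustrative only, using made-up MTT readings in place of the real data, and shows how F and t statistics of the kind reported in the Results would be computed.

import numpy as np
from scipy import stats

# Hypothetical MTT absorbance triplicates for three units (placeholder values).
unit_A = np.array([0.81, 0.78, 0.84])
unit_B = np.array([0.79, 0.82, 0.80])
unit_C = np.array([0.77, 0.83, 0.81])

# One-way ANOVA across units: compare F to the critical value F(2, 6) = 5.143 at alpha = 0.05.
F, p = stats.f_oneway(unit_A, unit_B, unit_C)
print(F, p)

# Unpaired and paired two-tailed t-tests, as used for the serum vs ED vs CD comparisons.
ed = np.array([0.52, 0.48, 0.55])
cd = np.array([0.78, 0.74, 0.80])
print(stats.ttest_ind(ed, cd))   # unpaired
print(stats.ttest_rel(ed, cd))   # paired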
Materials resulting from either process had a pH in the range of 2-3, typical of prior results with ED. Media containing platelet lysates maintained color and clarity after addition of all extract variants, up to the point of media changes, indicating minimal effect on pH across all the processes used. Slight color changes were noted in the serum groups at feeding, suggesting some change in pH in these treatments. Since metabolic activity in fibroblasts may be directed to matrix synthesis, migration, or proliferation, to variable degrees, morphology was evaluated and correlated to MTT readings.
3.1. Exhaustive Dialysis (ED) Yields Little Inter-unit Variation
MTT values resulting from old process applied to three different units were evaluated using the Null hypothesis Ho = no effective difference in resulting bioactivity across units.
1, 2, 3: F(2, 6) = 0.1249, < 5.143, Ho accepted
4, 5, 6: F(2, 6) = 0.136, < 5.143, Ho accepted
7, 8, 9: F(2, 6) = 1.795, < 5.143, Ho accepted
Thus, ED creates consistent bioactivity across units, based on the MTT assay.
3.2. Constrained Dialysis (CD) Volume Controls Changes in Bioactivity
MTT values for the same three units, with graded application of the new process (dialysis against 300, 350, or 400 ml), pooling the 4% and 8% groups (and excluding the 0.00 value in group 13 as an outlier), and evaluating the null hypothesis Ho = graded application of the constrained process is not responsible for the observed variation in activity:
10+13, 11+14, 12+15: F(2,14) = 4.06, > 3.739, so Ho can be rejected.
Thus, 95% confidence can be asserted that CD creates a real effect across batches, assuming effects due to dilution are not controlling it. Upon reincorporation of the two portions, this amounts to a change in dilution across the groups of roughly 26% (40/340 = 0.118, 40/390 = 0.103, and 40/440 = 0.091, so (0.118 - 0.091)/(average dilution of 0.104) = 0.259). Separating the 4% and 8% results should reveal relevant changes if dilution has a strong effect. A two-tailed t-test was done, since no prior knowledge exists to suggest which group should be greater. Averages for the 4% and 8% groups of 0.78556 and 0.54, respectively, yield a p-value of 0.295. This effect is contrary to the drop seen with dilution across the CD groups, suggesting that CD volume exerts the major effect on the MTT reading.
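A minimal sketch (Python with SciPy) of the dilution estimate and the 4% vs 8% comparison just described; the values are taken from the text and Table 2, with the single 0.00 reading excluded as in the text.

```python
from scipy import stats

# Effective dilution of the 40 ml isolate into each CD exchange volume.
dilutions = [40 / (40 + v) for v in (300, 350, 400)]       # ~0.118, 0.103, 0.091
spread = (max(dilutions) - min(dilutions)) / (sum(dilutions) / len(dilutions))
print(f"Relative spread in dilution across CD groups: {spread:.1%}")   # ~26%

# Pooled 4% CD (rows 10-12) vs pooled 8% CD (rows 13-15, 0.00 excluded as outlier).
cd_4pct = [0.52, 1.35, 1.35, 0.16, 1.20, 1.30, 0.35, 0.24, 0.60]
cd_8pct = [0.89, 0.59, 0.29, 1.35, 0.56, 0.09, 0.31, 0.24]
t_stat, p_value = stats.ttest_ind(cd_4pct, cd_8pct)        # two-tailed, unpaired
print(f"4% mean {sum(cd_4pct)/len(cd_4pct):.3f}, 8% mean {sum(cd_8pct)/len(cd_8pct):.2f}, p = {p_value:.3f}")
```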
3.3. CD Gives Improved Bioactivity Compared to ED Treatment
Applying a 2-tailed, unpaired t-test on 20% serum vs the pooled 4% ED groups yields p = 0.0004. Thus, high confidence can be asserted that at 4%, ED material performs below standard 20% serum culture conditions. Applying a 2-tailed, paired t-test on the pooled 4% ED groups vs. the pooled 4% CD groups yields p = 0.008. Thus, high confidence can be asserted that they are different. Applying a 2-tailed, unpaired t-test on the serum group (SG) vs the pooled 4% CD group yields p = 0.5, giving no basis to claim a difference exists. These results are summarized in Fig. (1).
Fig. (1).
MTT response comparison of 4% exhaustive dialysis (ED) to constrained dialysis (CD) process groups normalized against 20% serum.
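For illustration, a minimal sketch (Python with SciPy) of the three comparisons summarized in Fig. (1), using the Table 2 triplicates; the pairing order assumed for the paired test (matched by unit and replicate position) is an assumption, so the exact p-value may differ slightly from the reported one.

```python
from scipy import stats

# MTT values from Table 2: serum control and the pooled 4% groups.
serum_20pct = [0.85, 1.24, 0.91]                                   # 20% serum control
ed_4pct = [0.06, 0.14, 0.52, 0.14, 0.10, 0.37, 0.11, 0.14, 0.67]   # groups 4-6
cd_4pct = [0.52, 1.35, 1.35, 0.16, 1.20, 1.30, 0.35, 0.24, 0.60]   # groups 10-12

# Unpaired: 20% serum vs pooled 4% ED (reported p = 0.0004).
print(stats.ttest_ind(serum_20pct, ed_4pct))
# Paired: pooled 4% ED vs pooled 4% CD, matched by unit and replicate (reported p = 0.008).
print(stats.ttest_rel(ed_4pct, cd_4pct))
# Unpaired: 20% serum vs pooled 4% CD (reported p = 0.5).
print(stats.ttest_ind(serum_20pct, cd_4pct))
```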
3.4. Morphology Evidence
One representative picture for each culture well was collected in a blinded fashion to document the morphology at the end of the test period. A notable finding was the appearance of a variable, fine precipitate, suggesting debris on the culture surface, in the SG and groups 1-5, with conspicuous absence of this feature in groups 6-15. The possibility that this was evidence of infectious contamination was ruled out by the absence of the change in pH that would have been expected and reflected in a change in media color and clarity.
Key features of these images included cells with either a fibrocyte (spindle cell) or a fibroblast character, the latter typified by 3 or more branched processes off a central mass with prominent nucleus and rough endoplasmic reticulum surrounding it. No marked change in cell number was noted by gross inspection during the culture period.
In some groups tracks of highly aligned bands of optically homogeneous matrix were observed, with a thickness approximately equal to a fibroblast migrating over the provisional matrix surface, indicative of differentiated fibroblast behavior, in line with normal tissue remodeling [33] and approximating dermal compartment healing at 5-6 days after injury [34].
In nearly all groups a diffuse provisional matrix was noted on the culture surface as rolling variations in optical density, from which more metabolically and phenotypically robust cells can develop and form organized collagen bands in line with a synergy between growth factor and matrix influences on differentiation [33]. Type I collagen expression is a recognized marker of fibroblast phenotype [35].
The results presented in Table 3 reveal increased proportions of “-blast” to “-cyte” forms, increased diffuse matrix synthesis, and increased amounts of aligned forms as the dose of ED (groups 1-9) and CD (groups 10-15) process materials increases. Groups 10, 11, and 12 (4% CD) were also superior by MTT assay (Fig. 1).
Table 1.
MTT Assay - Group designations of platelet extract supplemented media.
“Un-numbered” control group – 20% animal serum supplemented DMEM
1. Exhaustive process, 1x, donor unit A, 2% in DMEM
2. Exhaustive process, 1x, donor unit B, 2% in DMEM
3. Exhaustive process, 1x, donor unit C, 2% in DMEM
4. Exhaustive process, x/10, donor unit A, 4% in DMEM
5. Exhaustive process, x/10, donor unit B, 4% in DMEM
6. Exhaustive process, x/10, donor unit C, 4% in DMEM
7. Exhaustive process, x/10, donor unit A, 8% in DMEM
8. Exhaustive process, x/10, donor unit B, 8% in DMEM
9. Exhaustive process, x/10, donor unit C, 8% in DMEM
10. Constrained process, 1x, donor unit A, 4% in DMEM (ext. vol = 300 ml)
11. Constrained process, 1x, donor unit B, 4% in DMEM (ext. vol = 350 ml)
12. Constrained process, 1x, donor unit C, 4% in DMEM (ext. vol = 400 ml)
13. Constrained process, 1x, donor unit A, 8% in DMEM (ext. vol = 300 ml)
14. Constrained process, 1x, donor unit B, 8% in DMEM (ext. vol = 350 ml)
15. Constrained process, 1x, donor unit C, 8% in DMEM (ext. vol = 400 ml)
Table 2.
MTT assay values, normalized off blanks (three replicate wells per group).
Group    Well 1    Well 2    Well 3
20% DMEM 0.85 1.24 0.91
1 0.52 0.55 0.94
2 0.40 0.54 1.10
3 0.42 0.81 1.14
4 0.06 0.14 0.52
5 0.14 0.10 0.37
6 0.11 0.14 0.67
7 0.05 0.16 0.99
8 0.52 1.02 1.09
9 0.59 1.13 1.20
10 0.52 1.35 1.35
11 0.16 1.20 1.30
12 0.35 0.24 0.60
13 0.00 0.89 0.59
14 0.29 1.35 0.56
15 0.09 0.31 0.24
Table 3.
Ranked morphological findings:
image f-cytes f-blasts fb-diff diffuse matrix aligned fibrils
20% DMEM +++ + ++
1 +++ + + +++
2 +++ + + ++
3 ++ ++ ++ ++
4 +++ + + ++
5 +++ + + ++
6 ++ ++ ++ +++ +
7 ++ ++ ++ +++ ++
8 +++ + + +++
9 ++ ++ ++ ++ +++
10 ++ ++ + +++ +++
11 ++ ++ ++ ++ ++
12 ++ ++ ++ +++ ++
13 + +++ ++ ++ ++
14 ++ ++ ++ ++ +
15 ++ ++ + +++ +
The data in this report show how the diffusion of dilute acid across a dialysis membrane into a space containing a natural bioactive extract can change the bioactivity of the material in proportion to the external exchange volume, in a setting where major alterations in composition would not be expected. These changes were determined through visual evaluation of key phenotypic characteristics of the cells and a quantitative MTT assay that is an appropriate measure of viability in such settings [36, p. 141]. Wound healing requires growth and subsequent apoptosis of cells, suggesting the value of this type of effect in the regulation of cellular viability, to avoid tumor formation. The effect could conceivably occur naturally across any selectively permeable membrane, such as clot structures, in vivo.
Wounding can be seen generically as disruptive or denaturing action on a tissue, resulting in breakdown of the normal relationships between soluble and insoluble phases. Lysis of a tissue, such as platelets in water, is a controllable event modeling these processes. Because survival following wounding requires retrieval of the most critical functions of the organism under conditions that disrupt cellular membranes, receptor mediated signal transduction mechanisms leading to gene expression have little relevance until the emergent processes of dispersed biomolecules, such as clotting factors, can support reestablishment of tissue organization and stable cell membranes. This is reflected in the fact that yeast lysates can still ferment [37], and collagen polymerization is controllable in the absence of cells through pH, and gravitational influences alone [5].
Placing a lysate of platelets within a dialysis bag against a larger volume of dilute HCl models the initial release of factors within a clot at an ischemic wound site, setting up an asymmetric flow of charge based on the smaller weight of protons than Cl- ions. Release of ions from high MW proteins weakly denatured by the faster proton wave would lead to their redistribution across the membrane, amounting to a positive current inversion, since common protein bound ions, such as calcium, iron, zinc and copper, can assume +2 charge in vivo. Consequent disruption of CDAO activity would lead to elevated potent amine levels, that can account for initiation of responses to this well-known type of initial asymmetry, such as occurs during digestion in the stomach. Because amines typically only assume a +1 charge, and have higher MW than divalent cations, this asymmetry could be the basis for counterion effects that initiate looping currents able to support states of superconductivity in line with a “heavy-fermion” mode, or the FFLO state [38, 39]. These may fit descriptions of Fröhlich condensation, Orchestrated Objective Reduction, or other forms of “macroscopic quantum coherence” [40].
The initial state of charge asymmetry might also support creation of a unique variant of the quantum Hall effect (QHE), based on a derivation of the Berry curvature responding to the rate of change of an external parameter controlling physical observables that is general enough to be relevant to living systems [41]. As the Berry curvature is a form of susceptibility [41], a measure of the change in an extensive property (i.e., mass or volume) under variation of an intensive property, such as density (= M/V) [42], the concept translates naturally into a “field” based view of stress in living systems. The description of a “dynamical” QHE appears to model a continuous variation in the external (geometric) parameter, rather than the case of three discrete values studied here. This situation is clearly more typical of circumstances in vivo. Accordingly, acquisition of Pancharatnam-Berry phase between the discrete states would have been a continuous function of the volume change, or of container geometry at fixed volume, resulting from a “dynamical” QHE, directly analogous to stress (a curvature) applied to a living system, leading to stable states such as were defined at the discrete volumes. These effects should be observable with appropriate sampling and measurement, using an adaptation of the model that has been presented.
Our results suggest observation of a pure “geometric” QHE, as discrete, stable states resulted strictly from the geometric constraint alone. The dynamical QHE was likely involved, initiated as the dialysis bag containing the lysate entered the exchange volume, driving acquisition of a time-dependent Pancharatnam-Berry phase, effectively a geometric aspect of the “stress,” or Berry curvature, postulated to be relevant to the time-reversal symmetry breaking properties of such systems [41]. Models of oscillating chemical reactions, general enough to encompass biological networks, and the human nervous system, have been developed [43, 44]. These authors emphasize examples of organization resulting from interfacial effects, photo-redox cycling of iron, the role of light in driving processes far from equilibrium, and the importance of concentration, over catalysis [43].
The data presented can explain worsening of periodontal status with hemodialysis vintage [45], lack of change in levels of cystatin c between sessions [46] and superior predictive power of cystatin c for cerebral microbleeds [47], if cystatin c can act as a CDAO, or a chaperone of CDAOs. Support of amine oxidase function in the presence of copper would explain protection of neuronal cells against mutant cu/zn SOD toxicity [48] if this form of SOD also has a CDAO function that is disrupted by the mutation. Evidence for these functions would establish how local potent amine levels directly regulate cu/zn-SOD activity.
Cystatin c co-localizes in amyloid deposits of both non-demented aged people and those with Alzheimer’s disease, and has been hypothesized to serve a protective role in AD [49]. Plasma semicarbazide-sensitive amine oxidase (SSAO) activity has been shown to correlate with renal dysfunction and levels of Cystatin c [50]. SSAO has been tracked to vascular adhesion protein-1 (VAP-1), a copper dependent amine oxidase, and 180 kD glycoprotein [51]. A relationship between AD, vascular dementia risk and vessel wall SSAO activity has been suggested [52]. A MW of only 13 kD, and co-localization at sites of deposition around neurons, may suggest cystatin c is expressed to address dynamic stresses, to augment function of CDAOs around neurons, and other cells. This could have been missed if copper binding and oxidase activity are simultaneously dependent on conformation, solvent environment, and copper availability. It is worth noting that copper binding to Cystatin B leads to inhibition of amyloid fibril formation [53], and Cystatin c overexpression rescues a Cystatin B mutant phenotype that causes progressive myoclonic epilepsy [54]. Tracking levels of biogenic amines such as dopamine added to physiologic solutions of variable ionic and polar character containing cystatin c and varying amounts of copper, might reveal, or rule out CDAO activity.
Leakage of fibrinogen has been found to presage microglial motility, perivascular clustering, and onset of axonal damage in a rodent model of multiple sclerosis [55]. Based on mechanisms discussed, a fibrin membrane might be acting as an insulator in these settings creating disruption of conduction mechanisms necessary for stability of axonal sheaths targeted in multiple sclerosis. Fibrin gel networks have been shown to possess self-similar structure [56] and their function as Josephson junctions possessing higher-order geometric properties in vivo could be seen to support “readout” of information as bioactivity, in diverse settings. This type of “biological” JJ would acquire directional character, typical of standard solid-state versions, only as stresses in the wound environment exert field influences able to establish it, based on healing requirements.
It is noteworthy that clotting factors V and VIII are related to Ceruloplasmin, a multicopper oxidase of the blue type [57], factors V and VIII bind copper [58], and copper can potentiate association of factor VIII heavy and light chains [58]. Factors V and VIII are also blue copper oxidases [59]. A report on the primary structure of ascorbate oxidase concluded that the small blue copper proteins likely evolved from the same ancestral gene as the multicopper oxidases [60]. Moreover, cysteine has been described as “obligatory” in formation of a blue site [61], and this author presents a theory of “rack induced bonding,” to explain high reduction potentials of these proteins, necessitating cooperative interactions of multiple influences around the active site, including the ligands themselves [61]. Cysteine residues can be acted on by H202, a product of the amine oxidase reaction, to become “direct” sensors of redox status [62] with obvious advantages in terms of energy cost and speed, over redox sensing mechanisms requiring post-translational modifications. These principles could explain the necessity for proteins, such as Cystatin c, to be called into service in support of local CDAO function for clearance of potent amines in settings of dynamic stress, and consequent changes in the solvent environment.
Heme oxygenase (HO) is competitively inhibited by caveolin-1 (Cav-1) [63], a protein recently described for a role in hereditary pulmonary arterial hypertension (HPAH), possessing a conserved cysteine near a frameshift mutation associated with two cases of the disease [64]. The redox sensitivity of this residue (Cys-156) was demonstrated by S-nitrosation resulting from TNF or NO donor application that led to rapid degradation of the protein [65]. HO is known for a critical role in the turnover of red blood cells, and a capacity to utilize heme as a prosthetic group or a substrate [66]. As pulmonary hypertension (PH) is a condition widely known to be caused by abuse of amphetamines, the role of this redox sensitive Cys-156 becomes clear, if Cav-1 is a CDAO of the small blue copper type and the mutations led to elevations in potent amines. Moreover, specific binding of Cav-1 and HO [63] could amount to formation of a dual Cu/Fe dependent oxygen sensing apparatus at wound sites, explaining the competitive inhibition of HO by Cav-1, based on a finite local oxygen level. Accordingly, an evolved relationship between these two proteins could support their assembly in times of stress to facilitate a dynamic geometric relationship between iron and copper, necessary for gating and orientation of electron flow from oxygen, dependent on the mirage effect, levels of potent amines, redox status, [O2] and other local conditions.
ADAMTS13 is a metalloprotease that mediates cleavage of vwF, with a recognized role in syndromes of thrombotic microangiopathy resulting from aberrant polymerization of vwF and consequent clumping of platelets at sites of high shear stress in the vasculature [67]. This factor contains two “CUB” domains, a motif named for the presence of sequences sharing homology with complement elements C1r/C1s, a sea urchin protein, Uegf, and BMP-1 [67], a protease of the astacin family. Multiple cysteine residues within each of these two domains are required for stability and secretion of ADAMTS13 [68]. It is noteworthy that the astacin family was defined based on the 82% sequence identity in the 198 aa N-terminals of the human brush border acid hydrolase (PPH) and the mouse kidney brush border enzyme (meprin A), with close identity to BMP-1 and astacin, a crayfish digestive protease, with 3 cysteines among 37 strictly conserved residues in a frame aligned on the entire 200-aa sequence of astacin [69].
Direct injection of BMP1 and whole CUB domains can dorsalize the ventral half of Xenopus embryos [70], suggesting the importance of these domains in early neural cell fate, as well as orientation of cells in the organism. An astacin family protein from crayfish has shown enhanced activity with Co substitution, and diminished activity with Cu substitution relative to Zn [71]. Another astacin family member, Blastula protease 10 (BP10) possesses structural domains similar to BMP-1 and showed a 960% enhancement in hydrolysis rate of N-benzoyl-arginine-p-nitroanilide when derivatized to Cu, rather than Zn [72]. These authors note the similarity of the active site and substrate specificity of BP10 to that of Serralysin, a metalloprotease involved with virulence mechanisms of Pseudomonas aeruginosa and Serratia marcescens that have been recently studied [73]. This evidence suggests that regulatory mechanisms dependent on the availability of copper and zinc are a very serious matter for survival of multicellular organisms.
The role of CUB domains in orientation is further suggested by similarities in the interaction of Neuropilin-1 (NRP-1) with Sema3a and plexin that regulates axonal guidance through cytoskeletal influences [74], and the vwF interaction with FVIII critical to regulation of platelet aggregation [67]. The presence of two CUB domains and two FactorV/FactorVIII-like discoidin domains in NRP-1 suggests it may have a role comparable to ADAMTS13 in the vwF axis. A switch in response to Sema3a can be regulated by ADAM metalloproteases [75]. Similarity to the FVIII/vwF/ADAMTS13 axis suggests how formation of Sema3a/NRP-1/Plexin complexes around neurons represents acquisition of a dynamic sensory capacity to “read-in” (or “sense”) levels of potent amines, redox status, and other aspects of the local environment, amounting to multiple oriented field influences. A “read-out” of these signs of stress, through influences on the cytoskeleton, either to stabilize the neural network, or destabilize it, and initiate regulation of axon guidance, directly parallels orientation of a “biologic” JJ described earlier for fibrin polymers.
Correlation of elevated RANTES, but not absence of hemorrhage, with survival in pediatric Ebola virus disease [76] may be explained if flow through clot structures serves a vital function, dependent on binding of copper, amounting to transformation of the entire surface area of the vascular wall into a “quantum computing” device highly specific to addressing viremia. The unique capacity of copper, among other transition metals tested, including iron and zinc, to support higher-order oligomerization of RANTES, with maintenance of function in states of redox stress [77], suggests a role in recovery of released copper from clot sites.
Factors V and VIII have also been noted to have complex metal ion requirements for secretion and functionality [78, 79]. Homology of the binding sites for APC in the FVa and FVIII light chain A domains with regions in ceruloplasmin has been interpreted to suggest involvement with ion binding [80]. FV and FVIII have dual sorting signals [78], though these authors describe only a shift from high to low [Ca++] for relevance in binding to LMAN1 during transit from ER to ER-Golgi intermediate, leaving open the possibility of an unrecognized signal mediating interaction with MCFD2. Evidence that FV and FVIII are blue copper oxidases related to ceruloplasmin suggests that copper is the second signal. Reduction in the activity of APC in the presence of copper, and reversal of inhibition in the presence of human serum albumin (HSA), or a high-affinity copper-binding analogue of HSA [81] reveals a novel mode of FV and FVIII activity regulation, and the potential utility of APC in sepsis to sequester free copper.
The role of copper may also explain the superiority of heparin over protease inhibitors or chelating anticoagulants, in maintaining stability of FVIII procoagulant activity (VIII:C) [82]. Recovery of activity from CPD plasma with recalcification in the presence of heparin plasma suggested restoration was due to renaturation rather than enzymatic action. These authors conclude by emphasizing the value of maintaining physiologic calcium ion availability for preservation of VIII:C activity [82]. Such physiologic environments would likely also support natural copper ion bioavailability.
It has been suggested that statins [76], and specifically Atorvastatin [83], could have value in treating Ebola. Statin treatment for dyslipidemia has been shown to significantly reduce serum zinc, copper and Ceruloplasmin [84]. Atorvastatin treatment improved arterial stiffness in elderly patients in association with a 20% reduction in von Willebrand factor, a 26.4% increase in Cu/Zn SOD activity [85], and cognition and depression in patients with AD or MCI in association with reduced ceruloplasmin [86]. Chelating activity of atorvastatin metabolites has been suggested to account for concentration dependent reduction in LDL oxidation by copper [87]. Induction of tissue factor expression in human THP-1 monocytic cells by ceruloplasmin or copper further suggests a central regulatory function [88]. The requirement for 8-hydroxyquinoline, a lipophilic chelator, in revealing the effect [88] suggests the difficulty in dissecting the role of copper in ex vivo settings.
Enhancement of tissue factor expression in monocytes by CD40 ligand [89] reveals the role of copper in outcomes of severe bloodborne infection. The finding of elevated sCD40L in survivors, with fatal outcomes correlating with elevated thrombomodulin, ferritin and D-dimer in Ebola infection [90], and with vwF in Sudan virus infection [91], suggests that monocytes act to integrate evidence of platelet activation (sCD40L) with levels of free copper, controlling tissue factor expression and mobilization of factors that can recover copper for the host, such as RANTES and FV/VIII. Elevation of vwF, the binding partner of FV/FVIII, would suggest a state of depletion of FV/FVIII, with collapse of the ability to synthesize these factors as the virus has gained decisive access to the host’s copper, needed at clot sites. D-dimer elevation reflects clot degradation to gain this access. Elevated ferritin would reflect simultaneous release of free iron, the redox cycling partner of copper. Elevation of thrombomodulin may reflect depletion of APC, its binding partner.
The ability of zinc, copper and calcium to alter the structure and stability of SAA [92] reveals a role for SAA elevation in pediatric EHF cases [76] in support of copper transport. Accordingly, delivery of copper to monocytes explains induction of tissue factor [ref 36, in 76]. Reduction in endothelial NO synthase production and bioavailability of NO [ref 37, in 76], could occur by regulating copper-dependent complex formation between eNOS and caveolin-1 that can inhibit NO synthesis [93] in line with the interaction between HO-1 and caveolin-1, described previously. Caveolin-1 has recently been described as an “essential regulator of eNOS” with disruption underlying endothelial dysfunction [94]. This suggests a role for statins, as they can reduce vascular endothelial expression of caveolin-1, thereby increasing eNOS activity in the setting of cardiovascular disease [95]. The subtle chelating action of statins, and factors such as SAA and copper-dependent proteins, suggests that experimental work will be required to reveal optimal ways of achieving a desired effect on eNOS, or other critical activities beneficial to the host dependent on redistribution of copper ions.
Descriptions of Majorana bound modes [14] suggest they arose in the course of cyclic adiabatic processes applied to lysates of human platelets within a defined space. Two entangled initial states were developed based on the asymmetry created by the dialysis membrane and oriented flow of HCl. An initial “stress” created by placement in the larger dilute acid volume could support redistribution of copper ions across the membrane, leading to a transient drop in CDAO activity, and elevation of potent amine levels within the bag. Their redistribution to the larger external volume could account for aspects of the “entangled” states that would eventually stabilize, and support formation of looping currents of such low MW elements between the spaces. Under these conditions the separated states can acquire phase differences, a Pancharatnam-Berry phase, that can be undefined (singular) for some combination of parameters [1]. The Berry phase is described to be independent of the rate the path is traversed [96].
In the system examined, distinct states of matter were defined to exist through rigorous measurement, though their nature was likely initially dependent on more than 2 oriented field influences. These states can be seen as dependent not only on the path taken to create them, but also the rate at which the path was traversed, such as how long they were in contact across the membrane before their fusion, or the interval between lysis and placement in dialysis. Thus, the model provides a way of conceptualizing a rate-dependent phase resulting from an unlimited number of influences, typical of stresses on organisms. The name proposed for this is the Ramis-Ackroyd-Murray-Hudson phase, or Ramis phase for short. Mathematically, this obviates a dynamical phase factor, resulting in an equation of state defined by the Ramis phase in a single exponential term. In this study the phase shift was defined by external dialysis volume, and this parameter alone fully determined the final state, and separation between adjoining stationary states. The sense of being “in the moment,” often during intense concentration required for practice of some highly developed and cherished art, where a feeling of timelessness takes over, seamlessly integrating past, present and future in the artist, essential for the manifestation, could be seen to underlie creation and function of these states in humans.
The Ramis phase contained information that became stored in response to stress, retrievable upon fusion of the separated, bound modes, amounting to stable bioactivity, highly specific to countering the stress that created it. Other bioactivities of interest for regenerative or diagnostic purposes could result from this type of system applied to any lysed tissue possessing valuable biosynthetic properties. It should be informative to examine the effect of removing the bag fully, or to varying degrees, from the larger volume, transferring the material to a separate container, and then reintegrating at later times into the larger (low MW portion), as this would be predicted to destroy information nascent between the two, linked modes, maintained by cycling connections across the membrane.
The key to exploiting distinct stresses will be having a relevant assay to establish the appropriate field influences that force the necessary phase shift, and subsequent reduction into two final control parameters that alone can create the shift. In this example these parameters were volume (a geometric parameter) and diffusion (a time-dependent parameter), implying that the Pancharatnam-Berry phase drove evolution into the final purely geometric state. It seems intuitive that, generally, one parameter will be dynamic and the other geometric, a signature of the two original stresses in the ecosystem: cyclical sunlight exposure, and gravity. The process of reduction can be seen as finding a path of least action in a process space.
The separation and later integration of egg yolk and white portions in the creation of diverse foods, such as chocolate mousse, can be seen to depend similarly on a path of stresses, and time taken at each step, such as those involved with incorporation of ingredients like chocolate, butter, sugar, orange liqueur, and coffee, into the yolk phase, and subsequent integration of the white phase processed by application of a shearing field for the right amount of time to generate proper foam structure. Incorporation of excessively high percentages of cacao (>62%) has been observed to consistently disrupt creation of a desirable, smooth final state (author’s unpublished observation), suggesting a distinct symmetry breaking effect. This point of breaking may happen at higher percentages of cacao, with application of processes known to more experienced practitioners. The fact that interactions between these same materials in a geometrically confined space results in the birth of chicks under appropriate conditions, should not be missed for relevance to the ideas developed.
The “create-braid-measure” paradigm of Hasan and Kane [97] can be seen to have occurred in the system studied wherein creation of “entangled” separation supports braiding of vortices set up across the membrane with consequent “quantum computation”-like events. Measurement of the states occurred when brought back together into a stable superposition, locking the information into a new bound state, quantified in the assay system. This information determined viability and phenotypic properties of primary human cells relevant to regenerative responses. The physical and mathematical models presented and discussed in this paper suggest that topologically ordered states forming at non-zero temperature can underlie control of developmental responses in living systems. They may also provide the missing elements in the quest for exact solutions in General Relativity, if the concept of “energy conditions” [98] is broadened to “emergent property conditions” with information flow seen as the regulator of matter-energy transformation necessary for their fulfillment.
Because the states described have been quantified, and assuming they fit a form like the time-independent Schrödinger equation, solutions to the time-dependent form for additional states of the system should be straightforward, and easily generalized to other living systems. Since tensor representations of the Berry curvature can be developed [96], better weighting algorithms for MRI should be possible, if incident and readout magnetic fields are seen to support creation of Ramis phases that contain information retrievable as visual images upon their fusion. These ideas are also expected to be useful in development of methods for channeling sunlight and substrates into plant mashes, plant whole tissue mounts, or microbial culture systems requiring determination of eigenmodes of orientation with respect to gravity, sunlight, and substrate dosing able to lock these systems into states of desired biosynthetic behavior, such as may be necessary for confinement and isolation of metallic hydrogen phases.
The author acknowledges the kind assistance of James West, PhD for providing access to his lab at Vanderbilt University so the extracts could be made, putting together Fig. (1), and the material resources for these experiments, as well as the kind assistance of Tom Blackwell, for handling the cell cultures, the MTT assay and representative photo collection that was essential for proper blinding prior to analysis of the data. Dr. West and Steve Simske PhD are both acknowledged for assistance in obtaining and sharing access to referenced papers the author could not have otherwise obtained.
[1] http://en.wikipedia.org/wiki/Geometric_phase; accessed online March 26, 2015.
[2] Nurden AT. Platelets, Inflammation and tissue regeneration Thromb Haemost 2011; 105(Suppl 1 ): S13-33.
[3] Roedersheimer MT. inventor The Regents of the University of Colorado, a body corporate, assignee Methods for extracting platelets and compositions obtained therefrom United States patent US 20100222253 13 April
[4] Roedersheimer M, Nijmeh H, Burns N, Sidiakova AA, Stenmark KR, Gerasimovskaya EV. Complementary effects of extracellular nucleotides and platelet-derived extracts on angiogenesis of vasa vasorum endothelial cells in vitro and subcutaneous Matrigel plugs in vivo Vasc Cell 2011; 3(1): 4.
[5] Roedersheimer MT, Bateman TA, Simske SJ. Effect of gravity and diffusion interface proximity on the morphology of collagen gels J Biomed Mater Res 1997; 37: 276-81.
[6] Sipe JB, Zhang J, Waits C, Skikne B, Garimella R, Anderson HC. Localization of bone morphogenetic proteins (BMPs)-2, -4, and -6 within megakaryocytes and platelets Bone 2004; 35(6): 1316-22.
[7] Burt DW, Law AS. Evolution of the transforming growth factor-beta superfamily Prog Growth Factor Res 1994; 5(1): 99-118.
[8] Kalen A, Wahlstrom O, Linder CH, Magnusson P. The content of bone morphogenetic proteins in platelets varies greatly between different platelet donors Biochem Biophys Res Commun 2008; 375(2): 261-4.
[9] Robertson IB, Rifkin DB. Unchaining the beast; insights from structural and evolutionary studies on TGFbeta secretion, sequestration, and activation Cytokine Growth Factor Rev 2013; 24(4): 355-72.
[10] Bragdon B, Moseychuk O, Saldanha S, King D, Julian J, Nohe A. Bone Morphogenetic Proteins a critical review Cell Signal 2011; 23: 609-20.
[11] Sampath TK, Rashka KE, Doctor JS, Tucker RF, Hoffmann FM. Drosophila transforming growth factor beta superfamily proteins induce endochondral bone formation in mammals Proc Natl Acad Sci USA 1993; 90(13): 6004-8.
[12] http://en.wikipedia.org/wiki/High-temperature_superconductivity#Examples; accessed online March 28, 2015.
[13] Desiderio MA, Zini I, Davalli P , et al. Polyamines, ornithine decarboxylase, and diamine oxidase in the substantia nigra and striatum of the male rat after hemitransection J Neurochem 1988; 51(1): 25-31.
[14] Alicea J. New directions in the pursuit of Majorana fermions in solid state systems Rep Prog Phys 2012; (75): 076501.
[15] http://www.colorado.edu/physics/2000/bec/lascool4.html; accessed online March 26, 2015.
[16] Ashcroft NW. Metallic Hydrogen: A High-Temperature Superconductor? Phys Rev Lett 1968; 21: 1748.
[17] Ohno K, Ito M, Ichihara M, Ito M. Molecular hydrogen as an emerging therapeutic medical gas for neurodegenerative and other diseases Oxid Med Cell Longev 2012; 2012: 353152.
[18] http://en.wikipedia.org/wiki/Heavy_water#Effect_on_biological_systems; accessed online March 28, 2015.
[19] Robinson MM, Turner SM, Hellerstein MK, Hamilton KL, Miller BF. Long-term synthesis rates of skeletal muscle DNA and protein are higher during aerobic training in older humans than in sedentary young subjects but are not altered by protein Supplementation FASEB J 2011; 25(9): 3240-9.
[20] Crommie MF, Lutz CP, Eigler DM. Confinement of electrons to quantum corrals on a metal surface Science 1993; 262(5131): 218-20.
[21] Tsukihara T, Aoyama H, Yamashita E, et al. Structures of metal sites of oxidized bovine heart cytochrome c oxidase at 2.8 Å Science 1995; 269(5227): 1069-74.
[22] Duce JA, Ayton S, Miller AA, et al. Amine oxidase activity of β-amyloid precursor protein modulates systemic and local catecholamine levels Mol Psychiatry 2013; 18(2): 245-54.
[23] Bund T, Boggs JM, Harauz G, Hellmann N, Hinderberger D. Copper uptake induces self-assembly of 18.5 kDa myelin basic protein (MBP) Biophys J 2010; 99(9): 3020-8.
[24] Tavassoly O, Nokhrin S, Dmitriev OY, Lee JS. Cu(II) and dopamine bind to alpha-synuclein and cause large conformational changes FEBS J 2014; 281(12): 2738-53.
[25] Liu L, Jiang D, McDonald A, Hao Y, Millhauser GL, Zhou F. Copper redox cycling in the prion protein depends critically on binding mode J Am Chem Soc 2011; 133(31): 12229-37.
[26] Fiete GA, Heller EJ. Theory of quantum corrals and quantum mirages Rev Mod Phys 2003; 75: 933.
[27] Goldwasser E, Jacobson LO, Fried W, Plzak LF. Studies on erythropoiesis. V. The effect of cobalt on the production of erythropoietin Blood 1958; 13(1): 55-60.
[28] Chandel NS, Maltepe E, Goldwasser E, Mathieu CE, Simon MC, Schumacker PT. Mitochondrial reactive oxygen species trigger hypoxia-induced transcription Proc Natl Acad Sci USA 1998; 95(20): 11715-20.
[29] Kobayashi M, Shimizu S. Cobalt proteins Eur J Biochem 1999; 261(1): 1-9.
[30] Lynn JW, Huang Q, Brown CM, et al. Structure and Dynamics of Superconducting NaxCoO2 Hydrate and its unhydrated analog Phys Rev B 2003; 68: 214516.
[31] Teufel JS. Experimental investigation of H2/D2 isotope separation by cryo-absorption in metal-organic frameworks. PhD dissertation, Max-Planck-Institut für Intelligente Systeme; published April 10, 2013. http://elib.uni-stuttgart
[32] Kelton JG, Steeves K. The amount of platelet-bound albumin parallels the amount of IgG on washed platelets from patients with immune thrombocytopenia Blood 1983; 62(4): 924-7.
[33] Rhee S, Grinnell F. Fibroblast mechanics in 3D collagen matrices Adv Drug Deliv Rev 2007; 59(13): 1299-305.
[34] Betz P, Nerlich A, Wilske J, Tubel J, Penning R, Eisenmenger W. Immunohistochemical localization of collagen types I and VI in human skin wounds Int J Legal Med 1993; 106(1): 31-4.
[35] Rinn JL, Wang JK, Liu H, Montgomery K, van de Rijn M, Chang HY. A systems biology approach to anatomic diversity of skin J Invest Dermatol 2008; 128(4): 776-82.
[36] Berridge MV, Herst PM, Tan AS. Tetrazolium dyes as tools in cell biology new insights into their cellular reduction Biotechnol Annu Rev 2005; 11: 127-52.
[37] http://www.nobelprize.org/nobel_prizes/chemistry/laureates/1907/buchner-bio.html
[38] Geshkenbein BV, Larkin AI. Vortices with half magnetic flux quanta in “heavy-fermion” superconductors Phys Rev B 1987; 36(1): 235-8.
[39] Uji S, Terashima T, Nishimura M , et al. Vortex dynamics and the Fulde-Ferrell-Larkin-Ovchinnikov State in a Magnetic-Field-Induced Organic Superconductor Phys Rev Lett 2006; 97: 157001.
[40] Reimers JR, McKemmish LK, McKenzie RH, Mark AE, Hush NS. Weak, strong, and coherent regimes of Fröhlich condensation and their applications to terahertz medicine and quantum consciousness Proc Natl Acad Sci USA 2009; 106(11): 4219-24.
[41] Gritsev V, Polkovnikov A. Dynamical quantum Hall effect in the parameter space Proc Natl Acad Sci USA 2012; 109(17): 6457-62.
[42] http://en.wikipedia.org/wiki/Susceptibility; accessed March 28, 2015.
[43] Avnir D, Kagan ML. The evolution of chemical patterns in reactive liquids driven by hydrodynamic instabilities CHAOS 1995; 5(3): 589-601.
[44] Kagan ML, Kepler TB, Epstein IR. Geometric phase shifts in chemical oscillators Nature 1991; 349(6309): 506-8.
[45] Garneata L, Slusanschi O, Preoteasa E, Corbu-Stancu A, Mircescu G. Periodontal status, inflammation, and malnutrition in hemodialysis patients - is there a link? J Ren Nutr 2015; 25(1): 67-74.
[46] Marsenic O, Wierenga A, Wilson DR , et al. Cystatin C in children on chronic hemodialysis Pediatr Nephrol 2013; 28(4): 647-53.
[47] Oh MY, Lee H, Kim JS, et al. Cystatin C, a novel indicator of renal function, reflects severity of cerebral microbleeds BMC Neurol 2014; 14: 127. doi 10.1186/1471-.
[48] Watanabe S, Hayakawa T, Wakasugi K, Yamanaka K. Cystatin C protects neuronal cells against mutant copper-zinc superoxide dismutase-mediated toxicity Cell Death Dis 2014; 5: e1497.
[49] Kaur G, Levy E. Cystatin C in Alzheimer's Disease Front Mol Neurosci 2012; 5: 79.
[50] Januszewski AS, Mason N, Karschimkus CS , et al. Plasma semicarbazide-sensitive amine oxidase activity in type 1 diabetes is related to vascular and renal function but not to glycaemia Diab Vasc Dis Res 2014; 11(4): 262-9.
[51] Noonan T, Lukas S, Peet GW, et al. The oxidase activity of vascular adhesion protein-1 (VAP-1) is essential for function Am J Clin Exp Immunol 2013; 2(2): 172-85.
[52] Somfal GM, Knippel B, Ruzicska E, et al. Soluble semicarbazide-sensitive amine oxidase (SSAO) activity is related to oxidative stress and subchronic inflammation in streptozotocin-induced diabetic rats Neurochem Int 2006; 48(8): 746-52.
[53] Zerovnik E, Skerget K, Tusek-Znidaric M, Loeschner C, Brazier MW, Brown DR. High affinity copper binding by stefin B (cystatin B) and its role in the inhibition of amyloid fibrillation FEBS J 2006; 273(18): 4250-63.
[54] Kaur G, Mohan P, Pawlik M , et al. Cystatin C rescues degenerating neurons in a cystatin B-knockout mouse model of progressive myoclonus Am J Pathol 2010; 177(5): 2256-67.
[55] Davalos D, Ryu JK, Merlini M , et al. Fibrinogen-induced perivascular microglial clustering is required for the development of axonal damage in neuroinflammation Nat Commun 2012; 3: 1227.
[56] Kubota K, Kogure H, Masuda Y , et al. Gelation dynamics and gel structure of fibrinogen Colloids Surf B Biointerfaces 2004; 38(3-4): 103-9.
[57] Pisu P, Bellovino D, Gaetani S. Copper regulated synthesis, secretion and degradation of ceruloplasmin in a mouse immortalized hepatocytic cell line Cell Mol Biol (Noisy-le-grand) 2005; (Suppl 51 )OL859-67.
[58] Milne DB, Nielsen FH. Effects of a diet low in copper on copper-status indicators in postmenopausal women Am J Clin Nutr 1996; 63: 358-64.
[59] Ryden LG, Hunt LT. Evolution of protein complexity: the blue copper-containing oxidases and related proteins J Mol Evol 1993; 36(1): 41-66.
[60] Ohkawa J, Okada N, Shinmyo A, Takano M. Primary structure of cucumber (Cucumis sativus) ascorbate oxidase deduced from cDNA sequence Homology with blue copper proteins and tissue-specific expression Proc Natl Acad Sci USA 1989; 86(4): 1239-43.
[61] Malmstrom BG. Rack-induced bonding in blue-copper proteins Eur J Biochem 1994; 223(3): 711-8.
[62] Benoit R, Auer M. A direct way of redox sensing RNA Biol 2011; 8(1): 18-23.
[63] Taira J, Sugishima M, Kida Y, Oda E, Noguchi M, Higashimoto Y. Caveolin-1 is a competitive inhibitor of heme oxygenase-1 (HO-1) with heme: identification of a minimum sequence in caveolin-1 for binding to HO-1 Biochemistry 2011; 50(32): 6824-31.
[64] Austin ED, Ma L, LeDuc C, et al. Whole exome sequencing to identify a novel gene (caveolin-1) associated with human pulmonary arterial hypertension Circ Cardiovasc Genet 2012; 5(3): 336-43.
[65] Bakhshi FR, Mao M, Shajahan AN , et al. Nitrosation-dependent caveolin I phosphorylation, ubiquitination, and degradation and its association with idiopathic pulmonary arterial hypertension Pulm Circ 2013; 3(4): 816-30.
[66] Fraser ST, Midwinter RG, Berger BS, Stocker R. Heme Oxygenase-1: a critical link between iron metabolism, erythropoiesis, and development Adv Hematol 2011; 2011: 473709.
[67] Lancellotti S, Basso M, De Cristofaro R. Proteolytic processing of von Willebrand factor by adamts13 and leukocyte proteases Mediterr J Hematol Infect Dis 2013; 5(1): e2013058.
[68] Zhou Z, Yeh HC, Jing H , et al. Cysteine residues in CUB-1 domain are critical for ADAMTS13 secretion and stability Thromb Haemost 2011; 105(1): 21-30.
[69] Dumermuth E, Sterchi EE, Jiang WP , et al. The Astacin family of metalloendopeptidases J Biol Chem 1991; 266(32): 21381-5.
[70] Lee HX, Mendes FA, Plouhinec JL, De Robertis EM. Enzymatic regulation of pattern BMP4 binds CUB domains of Tolloid and inhibits proteinase activity Genes Dev 2009; 23(21): 2551-62.
[71] Gomis-Ruth FX, Grams F, Yiallouros I , et al. Crystal structure, spectroscopic features, and catalytic properties of Cobalt(II), Nickel(II), and Mercury(II) derivatives of the zinc endopeptidase Astacin J Biol Chem 1994; 269(25): 17111-7.
[72] da Silva GF, Reuille RL, Ming LJ, Livingston BT. Overexpression and mechanistic characterization of blastula protease 10, a metalloprotease involved in sea urchin embryogenesis and development J Biol Chem 2006; 281(16): 10737-44.
[73] Butterworth MB, Zhang L, Liu X, Shanks RM, Thibodeau PH. Modulation of the epithelial sodium channel (ENaC) by bacterial metalloproteases and protease inhibitors PLoS One 2014; 9(6): e100313.
[74] Nakamura F, Kalb RG, Strittmatter SM. Molecular basis of semaphorin-mediated axon guidance J Neurobiol 2000; 44(2): 219-29.
[75] Romi E, Gokhman I, Wong E, et al. ADAM metalloproteases promote a developmental switch in responsiveness to the axonal repellent Sema3a Nat Commun 2014; 5: 405.
[76] McElroy AK, Erickson BR, Flietstra TD , et al. Biomarker correlates of survival in pediatric patients with ebola virus disease Emerg Infect Dis 2014; 20(10): 1683-90.
[77] MacGregor HJ, Kato Y, Marshall LJ, Nevell TG, Shute JK. A copper-hydrogen peroxide redox system induces dityrosine cross-links and chemokine oligomerisation Cytokine 2011; 56(3): 669-75.
[78] Zheng C, Zhang B. Combined deficiency of coagulation factors V and VIII an update Semin Thromb Hemost 2013; 39(6): 613-20.
[79] Pitman DD, Tomkinson KN, Kaufman RJ. Post-translational requirements for functional factor V and factor VIII secretion in mammalian cells J Biol Chem 1994; 269(25): 17329-37.
[80] Walker FJ, Scandella D, Fay PJ. Identification of the binding site for activated protein C on the light chain of factors V and VIII J Biol Chem 1990; 265(3): 1484-9.
[81] Bar-Or D, Rael LT, Winkler JV, Yukl RL, Thomas GW, Shimonkevitz RP. Copper inhibits activated protein C protective effect of human albumin and an analogue of its high affinity copper-binding site, d-DAHK Biochem Biophys Res Commun 2002; 290(5): 1388-92.
[82] Rock GA, Cruickshank WH, Tackberry ES, Ganz PR, Palmer DS. Stability of VIII: C in plasma the dependence on protease activity and calcium Thromb Res 1983; 29(5): 521-35.
[83] Fedson DS. A practical treatment for patients with Ebola virus disease J Infect Dis 2015; 211(4): 661-2.
[84] Ghayour-Mobarhan M, Lamb DJ, Taylor A , et al. Effect of statin therapy on serum trace element status in dyslipidaemic subjects J Trace Elem Med Biol 2005; 19(1): 61-7.
[85] Wang J, Xu J, Zhou C , et al. Improvement of arterial stiffness by reducing oxidative stress damage in elderly hypertensive patients after 6 months of atorvastatin therapy J Clin Hypertens (Greenwich) 2012; 14(4): 245-9.
[86] Sparks DL, Petanceska S, Sabbagh M , et al. Cholesterol, copper and Abeta in controls, MCI, AD and the AD cholesterol lowering treatment trial (ADCLT) Curr Alzheimer Res 2005; 2(5): 527-39.
[87] Aviram M, Rosenblat M, Bisgaier CL, Newton RS. Atorvastatin and gemfibrozil metabolites, but not the parent drugs, are potent antioxidants against lipoprotein oxidation Atherosclerosis 1998; 138(2): 271-80.
[88] Crutchley DJ, Que BG. Copper-induced tissue factor expression in human monocytic THP-1 cells and its inhibition by antioxidants Circulation 1995; 92(2): 238-43.
[89] Sanguigni V, Ferro D, Pignatelli P , et al. CD40 ligand enhances monocyte tissue factor expression and thrombin generation via oxidative stress in patients with hypercholesterolemia J Am Coll Cardiol 2005; 45(1): 35-42.
[90] McElroy AK, Erickson BR, Flietstra TD , et al. Ebola hemorrhagic Fever novel biomarker correlates of clinical outcome J Infect Dis 2014; 210(4): 558-66.
[91] McElroy AK, Erickson BR, Flietstra TD , et al. Von Willebrand factor is elevated in individuals infected with Sudan virus and is associated with adverse clinical outcomes Viral Immunol 2015; 28(1): 71-3.
[92] Wang L, Colon W. Effect of zinc, copper, and calcium on the structure and stability of serum amyloid A Biochemistry 2007; 46(18): 5562-9.
[93] Ghosh S, Gachhui R, Crooks C, Wu C, Lisanti MP, Stuehr DJ. Interaction between caveolin-1 and the reductase domain of endothelial nitric oxide synthase: consequences for catalysis J Biol Chem 1998; 273(35): 22267-71.
[94] Williams JJ, Palmer TM. Cavin-1: caveolae-dependent signaling and cardiovascular disease Biochem Soc Trans 2014; 42(2): 284-8.
[95] Balakumar P, Kathuria S, Taneja G, Kalra S, Mahadevan N. Is targeting eNOS a key mechanistic insight of cardiovascular defensive potentials of statins? J Mol Cell Cardiol 2012; 52(1): 83-92.
[96] http://en.wikipedia.org/wiki/Berry_connection_and_curvature; accessed March 28, 2015.
[97] Hasan MZ, Kane CL. Topological insulators Rev Mod Phys 2010; 82: 3045.
[98] http://en.wikipedia.org/wiki/Exact_solutions_in_general_relativity#Difficulties_with_the_definition; http://www.superconductors.org/28c_rtsc.htm; accessed March 28, 2015.
I wrote about witnesses in my earlier post imaginatively called Witnesses. Deuteronomy restates many mitzvoth (commandments) introduced earlier, which is why it is also called Mishneh Torah – the repetition of the Torah. Accordingly, the portion Shoftim-Judges restates the law of witnesses first introduced in the book of Bamidbar-Numbers, portion Massei. So we shall revisit this fascinating subject.
The Judgment of the Sanhedrin: “He is Guilty!” (1892 painting by Nikolai Ge)
I. An accused criminal and a Schrödinger cat
Most criminals in the US are convicted based on circumstantial evidence. Rare is the case when a jury gets to hear an eyewitness. For example, most of the evidence against Timothy McVeigh was circumstantial. A law professor at the University of Michigan, Robert Precht, said, commenting on McVeigh’s trial, “Circumstantial evidence can be, and often is, much more powerful than direct evidence.” Logically, this makes a lot of sense. Indeed, people lie intentionally or make innocent mistakes. A “smoking gun,” fingerprints, and DNA evidence are examples of objective circumstantial evidence that often carry greater weight in the minds of jurors deliberating the guilt of an accused criminal than the testimony of an eyewitness whose credibility can be easily impeached. Consequently, many prosecutors prefer to rely on circumstantial evidence. Not so in Jewish law. As the Torah teaches us in this Torah portion, a Jewish court can convict an accused criminal only on the testimony of eyewitnesses. This is not easy to understand. Doesn’t this mean that most criminals go free? Perhaps so… but the logic of the Torah can be easily understood from the point of view of Quantum Mechanics (QM).
In QM, a particle or an ensemble of particles is described by a wavefunction obeying the Schrödinger equation. The wavefunction (or, more precisely, the squared amplitude of the wavefunction, \(P(x)=|\psi(x)|^2\)) describes the probability of finding a particle in a given region of phase space. In other words, the wavefunction does not predict exact values for the physical characteristics of the particle – only a probability distribution that makes some values more likely than others. However, when we measure a quantum-mechanical system, say the position of a subatomic particle, we always get a precise value for the characteristic we measure. It is as if the distribution of probabilities, the probability “cloud,” suddenly collapses into a single value. In QM, this is referred to as the collapse of the wavefunction. The trouble is, this collapse of the wavefunction does not follow from the Schrödinger equation or from any other principle of QM. It is not at all predicted by QM and is added ad hoc to explain experimental findings. When we study the equations, we see one picture. When we measure the system, we get another picture. This paradox is also known as the Measurement Problem.
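A toy numerical illustration may help here (a minimal sketch in Python; the Gaussian wave packet and the grid are arbitrary choices for illustration, not anything from the text): the wavefunction fixes only a probability distribution, yet every simulated “measurement” returns a single definite value.

```python
import numpy as np

# Discretized toy wavefunction: an (unnormalized) Gaussian wave packet on a 1-D grid.
x = np.linspace(-5, 5, 1001)
psi = np.exp(-x**2 / 2)

# Born rule: the probability of finding the particle near x goes as |psi(x)|^2.
prob = np.abs(psi)**2
prob /= prob.sum()

# Each "measurement" yields one definite position, sampled from that distribution.
measurements = np.random.choice(x, size=5, p=prob)
print(measurements)   # five definite outcomes, scattered according to P(x)
```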
The Measurement Problem leads to such strange phenomena as the Schrödinger cat. The cat is described by a wavefunction which is a linear superposition of two states: the cat is alive, and the cat is dead. So the poor feline is in a surreal state of suspended animation, stuck somewhere between being dead and alive. It takes an observer to collapse the wavefunction of the cat and bring it to a certainty: either dead or alive.
Some of the greatest minds of the 20th century, including the Jewish-Hungarian genius mathematician John von Neumann, Nobel laureate in physics Eugene Wigner, and Princeton physics professor, the late John Archibald Wheeler (Richard Feynman was his student), all thought that only human consciousness can collapse the wavefunction. This led John Wheeler to coin the term “participatory observer,” which means that an act of observation is not passive – the observer participates in creating reality by collapsing the wavefunction.
From the point of view of structural analysis, the Torah approach is parallel to the quantum-mechanical approach: a suspect, who may be innocent or guilty, is, like the Schrödinger cat, in a state of linear superposition of two states: guilty and innocent. Only an eyewitness, i.e., a conscious observer, can collapse his wavefunction and bring certainty in place of uncertainty. This is why, in my humble opinion, the Torah requires an eyewitness’s testimony to convict a suspect — we need an eyewitness, a participating observer, to collapse the “wavefunction” of the accused and resolve the uncertainty.
II. Why One Eyewitness Is Not Enough?
You may have seen movies with the Perry Mason moment, when the witness on the witness stand points to the suspect sitting in the courtroom exclaiming, “He did it!” This makes it easy for the jury to reach a verdict beyond a reasonable doubt.
In QM, it takes only one observer to collapse the wavefunction. If so, following our logic, one eyewitness should be enough to convict a criminal. Why, then, does the Torah require more than one witness? Why is one eyewitness, no matter how respected (in the words of the Talmud, even if this witness is Moses himself), not enough?
The obvious explanation is that one witness cannot be properly interrogated; his testimony cannot be compared with the testimony of another eyewitness. The Torah takes the principle of “innocent until found guilty” to a whole new level. The Torah purposely makes convicting the accused very difficult, almost impossible. All of the horrible punishments enumerated by the Torah, such as stoning, decapitation, and strangulation, are meant as a deterrent – “And people will hear and they will be afraid and thus you will eradicate this evil from Israel” is a common refrain in the Torah. Therefore, the Torah requires at least two witnesses so that judges can interrogate them, pitting the testimony of one against the other, so that inconsistencies can be found and the testimony thrown out of court. We need not worry that many a criminal would go free. Ultimately, those who committed heinous crimes will not go unpunished, but the punishment will come from the “hands of Heaven.” This teaches us not to be arrogant and not to play God, but rather to rely on Him to carry out justice in the world.
Philo praised the requirement that a court should not rely on the testimony of a single witness as an “excellent commandment.” Philo was concerned that an individual may be deceived by a false first impression. Moreover, why should a court take the word of one witness against the word of the accused? Where there is no preponderance of the evidence, the judgment should not be made, argued Philo.
Perhaps another reason can be a profound realization that human knowledge is imperfect and is subject to uncertainty. Only God, who is omniscient, has perfect knowledge because He knows all by knowing Himself. It is a statement of humility, to acknowledge that one man can never have the perfect knowledge and certainty, which are domains of God. This may be another reason, perhaps, that Torah requires more than one witness to convict a suspect of a crime. Even then, we realize that testimony of two or more witnesses, at best, is an approximation of the perfect knowledge of God. This is our way of dealing with the inherent uncertainty of human knowledge. In the words of Karl Popper, a prominent 20th-century philosopher of science, we can only be certain that a theory is wrong; we can never be sure that it is right. Torah taught us this long before Popper.
Uncertainty, interrogating two witnesses… These concepts evoke an eerie similarity with the Heisenberg Uncertainty Principle. According to the Uncertainty Principle, which lies at the foundation of quantum physics, we can never be certain of the values of two complementary properties; we can never measure such properties with an unlimited degree of precision. Take, for example, such pairs of complementary properties as position and momentum, or time and energy. According to the Uncertainty Principle, the product of the uncertainties in measurements of these properties can never be smaller than a quantity of the order of the Planck constant. If, for example, Δx is the uncertainty in the measurement of the position of a particle and Δp is the uncertainty in the measurement of the momentum of this particle, then
\[ \Delta x \, \Delta p \ge \frac{\hbar}{2}. \]
Similarly, if Δt is the uncertainty in the measurement of time and ΔE is the uncertainty in the measurement of energy, then
\[ \Delta E \, \Delta t \ge \frac{\hbar}{2}. \]
Let us now consider the testimony of two witnesses. It is easy to understand that they are complementary in a quantum-mechanical sense: the more precise one testimony is, the more imprecise the other must be in order not to contradict the first. Say, for example, the first witness testifies that the crime took place in the morning. Although the second witness is not aware of the testimony of the first witness (they are interrogated separately), it is easy for him to also say that the crime was committed in the morning. However, if the first witness says the crime was committed at 10:01 AM and the second witness says it was committed at 10:02, the two testimonies are inconsistent and are thrown out of court. So, in order not to contradict the first witness, the second witness should say that the crime was committed sometime in the morning but he does not remember the exact hour. The more precise the testimony of the first witness, the vaguer the testimony of the second witness must be so as not to contradict the first. Just as in quantum mechanics, where the product of uncertainties in measurements of complementary properties cannot be zero, here too the product of uncertainties in the testimonies of two witnesses cannot be zero. This is used by judges to find inconsistencies in the testimonies of the witnesses, to disqualify them and to acquit the accused. Since this is the goal, the Torah prescribes an easy mechanism to accomplish it, meanwhile teaching us something about quantum physics.
III Two or Three Witnesses
According to the Torah, a suspect cannot be convicted by the testimony of one witness. “By the mouth of two witnesses, or three witnesses…” Why does the Torah say, “two witnesses, or three witnesses”? If two witnesses are enough, surely three would also be enough!
The Talmud (Makos, 5b) explains that if the third witness contradicts two other witnesses, the testimony is inadmissible. Although, in deciding halachic matters (questions of religious observance) the court goes according to the majority, in criminal matters, surprisingly, this is not the case. Even if in a group of 100 witnesses, 99 of them corroborate each other’s testimony, but one witness contradicts them, all 100 witnesses are disqualified and their testimony is thrown out of court. All witnesses are treated as a single group. This is hard to understand as it appears counter-intuitive.
The Talmud further explains that all witnesses offering their testimony about a crime must not only have observed the crime but must also testify that they saw each other during that crime. At first blush, this makes little sense: what difference does it make whether or not the witnesses saw each other during the crime? So long as they all saw the crime and give identical testimony, that should be enough!
It turns out that this detail about the law of witnesses is the key to understanding why all witnesses are treated as a single group. By seeing each other, witnesses become entangled with each other. In the language of quantum mechanics, they share the same state, i.e., they are described by the same wavefunction. When mountain climbers scale a summit tied to each other by a single rope, if one falls, he or she can pull down the whole group. Here too, because all the witnesses are entangled, one witness can invalidate the entire group, no matter how large. Witnesses in Jewish law must not only see the crime but see and be seen by each other. This gives a new meaning to the old expression: see and be seen.
The Sanhedrin (illustration from the 1883 People’s Cyclopedia of Universal Knowledge)
|
192016297c2d79ca | Theoretical estimates of the electron density for the first few hydrogen atom electron orbitals shown as cross-sections with color-coded probability density
An atom is a physical system made up of positively and negatively charged particles. Classical electrostatics can help with understanding it, but always bear in mind that an atom is a quantum electrodynamic system, and many atomic phenomena can only be described with modern physics.
Atomic Theory is the branch of chemistry concerned with the smallest form of an element that can exist chemically, the atom. Classical physics is helpful for understanding some properties of atoms. However, the range of behaviors of atoms exceeds the descriptive powers of classical physics. To explain the line spectrum of hydrogen, for example, Niels Bohr developed his early form of atomic theory. A more complete picture of the electronic structure of the atom is provided by modern quantum electrodynamics.
Questions directly concerned with Atomic Theory, or more generally, basic quantum mechanics, do appear with fair regularity on the MCAT, although they tend to be easier questions than they may seem at first glance. More important than the direct appearance of these concepts on the exam is that these initial chapters of Chemistry, dealing with the intrinsic structure of matter, i.e. Atomic Theory, Periodic Properties, and Chemical Bonding, are absolutely crucial for the scientific understanding of the physical and natural world. The rest of General Chemistry, Organic Chemistry, and Biology will make profoundly better sense, and be much more interesting besides, if you take special care to understand the structure of matter.
WikiPremed Resources
Atomic Theory Concepts
Concept chapter for Atomic Theory in PDF format
Atomic Theory Practice Items
Problem set for Atomic Theory in PDF format
Answer Key
Answers and explanations
Atomic Theory Images
Question Drill for Atomic Theory
Conceptual Vocabulary Self-Test
Basic Terms Crossword Puzzle
Basic Puzzle Solution
Learning Goals
Learn the basic apparatus, mechanisms, and conclusions of the most significant experiments of early modern Atomic Theory, including J.J. Thomson's cathode ray experiment, Millikan's oil drop experiment, and Rutherford's experiment with gold foil and alpha rays.
Understand the consequences of Planck's analysis of black body radiation.
Be able to verbally reproduce the reasoning from evidence that led to Bohr's model of the hydrogen atom.
Master the basic description of the electronic structure of the atom in modern quantum theory. Picture the orbitals of an atom. Understand how to use quantum numbers to describe them, along with the Pauli exclusion principle, the aufbau principle, and Hund's rule.
Suggested Assignments
Get oriented in atomic theory using the question server. Complete the fundamental terms crossword puzzle. Here is the solution to the puzzle.
Study the atomic theory chapter. Perform the practice items. Here is the answer key for the problem set.
In ExamKrackers Chemistry, read pp. 1-20. This chapter covers concepts from both atomic theory and periodic properties. Perform practice items 1-8 on pg. 21.
Take a review tour of atomic theory web resources.
Conceptual Vocabulary for Atomic Theory
An atom is the smallest particle still characterizing a chemical element
The electron is a fundamental subatomic particle that carries a negative electric charge.
The proton is a subatomic particle with an electric charge of one positive fundamental unit, a diameter of about 1.5 fm (femtometers), and a mass that is about 1836 times the mass of an electron.
The neutron is a subatomic particle with no net electric charge and a mass that is slightly more than a proton
Atomic orbital
An atomic orbital is a mathematical description of the region in which an electron may be found around a single atom.
An ion is an atom or molecule which has lost or gained one or more electrons, making it negatively or positively charged.
Isotopes are any of the several different forms of an element with nuclei having the same number of protons but different numbers of neutrons.
Bohr model
The Bohr model depicts the atom as a small, positively charged nucleus surrounded by electrons that travel in circular orbits around the nucleus.
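For reference, the energy levels that follow from the Bohr model (a standard result, not part of the original glossary entry) are
\[ E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \ldots, \]
so the n = 2 level lies at about −3.4 eV and ionizing hydrogen from its ground state costs 13.6 eV.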
Hydrogen is a chemical element represented by the symbol H and an atomic number of 1.
Valence electron
Valence electrons are the electrons contained in the outermost electron shell of an atom.
Electron configuration
The electron configuration is the arrangement of electrons in an atom, molecule, or other physical structure such as a crystal.
Electron shell
An electron shell, also known as a main energy level, is a group of atomic orbitals with the same value of the principal quantum number.
Ernest Rutherford
Ernest Rutherford was a nuclear physicist who pioneered the orbital theory of the atom through his discovery of scattering off the nucleus with his gold foil experiment.
Alpha particle
An alpha particle is a particle consisting of two protons and two neutrons bound together, identical to a helium-4 nucleus; alpha particles are emitted in alpha decay and were used as projectiles in Rutherford's gold foil experiment.
Quantum leap
A quantum leap is a change of an electron from one energy state to another within an atom.
Emission spectrum
An element's emission spectrum is the relative intensity of electromagnetic radiation of each frequency it emits when it is excited.
Spin is the angular momentum intrinsic to a body, as opposed to orbital angular momentum, which is the motion of its center of mass about an external point.
Hund's rules
Hund's rules are a simple set of rules used to determine the term symbol that corresponds to the ground state of a multi-electron atom.
Aufbau principle
The Aufbau principle is used to determine the electron configuration of an atom, molecule, or ion, postulating a hypothetical process in which an atom is built up by progressively adding electrons.
Law of definite proportions
The law of definite proportions states that a chemical compound always contains exactly the same proportion of elements by mass.
Rutherford model
The Rutherford model showed that the plum pudding model of the atom of J. J. Thomson was incorrect, presenting the atom as containing a central charge concentrated into a very small volume in comparison to the rest of the atom.
Quantum mechanics
Quantum mechanics is the branch of physics that describes the behavior of matter and energy at atomic and subatomic scales, where quantities such as energy take on discrete, quantized values.
Uncertainty principle
The Heisenberg uncertainty principle gives a lower bound on the product of the standard deviations of position and momentum for a system, implying that it is impossible to have a particle that has an arbitrarily well-defined position and momentum simultaneously.
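In symbols (the standard statement, added here for reference), with σx and σp the standard deviations of position and momentum,
\[ \sigma_x \, \sigma_p \ge \frac{\hbar}{2}, \]
where ħ is the reduced Planck constant.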
Quantum state
The quantum state of a system corresponds to a set of numbers that fully describe a quantum system.
Pauli exclusion principle
The Pauli exclusion principle explains why matter occupies space exclusively for itself and does not allow other material objects to pass through it, while at the same time allowing light and radiation to pass.
Excited state
An excited state of a system is any quantum state of the system that has a higher energy than the ground state.
Rutherford scattering
Observation of the phenomenon of Rutherford scattering of alpha particles incident on gold foil led to the development of the orbital theory of the atom.
Principal quantum number
The principal quantum number has the greatest correlation to energy of the quantum numbers describing the unique quantum state of an electron in an atom.
Black body
A black body is an object that absorbs all electromagnetic radiation that falls onto it. No radiation passes through it and none is reflected.
Cathode ray
Cathode rays are streams of electrons observed in vacuum tubes.
Oil-drop experiment
The purpose of Robert Millikan and Harvey Fletcher's oil-drop experiment (1909) was to measure the electric charge of the electron.
J. J. Thomson
Sir Joseph John Thomson (1856 - 1940) was a British scientist credited for the discovery of the electron, of isotopes, and the invention of the mass spectrometer.
Spin quantum number
The spin quantum number is a quantum number that parametrizes the intrinsic angular momentum of a given particle.
Magnetic quantum number
The magnetic quantum number, along with the principal quantum number, the azimuthal quantum number, and the spin quantum number, describes the unique quantum state of an electron.
Schrödinger equation
The Schrödinger equation describes the space- and time- dependence of quantum mechanical systems.
Spectral line
A spectral line is a dark or bright line in an otherwise uniform and continuous spectrum, resulting from an excess or deficiency of photons in a narrow frequency range.
Niels Bohr
Niels Bohr (1885 - 1962) was a Danish physicist who made fundamental contributions to understanding atomic structure and quantum mechanics, for which he received the Nobel Prize in 1922.
Plum pudding model
The plum pudding model of the atom was proposed by J. J. Thomson, the discoverer of the electron in 1897 before the discovery of the atomic nucleus.
Azimuthal quantum number
The Azimuthal quantum number (or orbital angular momentum quantum number) is the quantum number for an atomic orbital which determines its orbital angular momentum.
Balmer series
The Balmer series describes a series of spectral line emissions of the hydrogen atom that reflect emissions of photons by electrons in excited states transitioning to the quantum level described by the principal quantum number n equals 2.
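As a worked illustration (not part of the original entry): the longest-wavelength Balmer line, Hα, corresponds to the n = 3 to n = 2 transition, and with the Rydberg constant R ≈ 1.097 × 10⁷ m⁻¹,
\[ \frac{1}{\lambda} = R\left(\frac{1}{2^2} - \frac{1}{3^2}\right) \approx 1.52 \times 10^{6}\ \text{m}^{-1} \quad\Rightarrow\quad \lambda \approx 656\ \text{nm}, \]
the familiar red hydrogen line.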
John Dalton
John Dalton (1766 - 1844) was an English chemist, meteorologist and physicist, best known for his pioneering work in the development of modern atomic theory.
Electron cloud
In the electron cloud analogy, the probability density of an electron, or wavefunction, is described as a region of space around the atomic or molecular nucleus representing the electron's likely location.
Stationary state
A stationary state is an eigenstate of a Hamiltonian, or in other words, a state of definite energy. The corresponding probability density has no time dependence.
Lyman series
The Lyman series is the series of transitions and resulting emission lines of the hydrogen atom as an electron goes from an electron shell of principal quantum number greater than or equal to 2 to the ground state.
Advanced terms that may appear in context in MCAT passages
Rydberg formula
The Rydberg formula is used in atomic physics for describing the wavelengths of spectral lines of many chemical elements.
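Its usual form (standard notation, added here for reference) is
\[ \frac{1}{\lambda} = R\left(\frac{1}{n_1^{2}} - \frac{1}{n_2^{2}}\right), \qquad n_2 > n_1, \]
where R is the Rydberg constant (about 1.097 × 10⁷ m⁻¹ for hydrogen); n₁ = 1 gives the Lyman series, n₁ = 2 the Balmer series, and n₁ = 3 the Paschen series.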
Stark effect
The Stark effect is the shifting and splitting of spectral lines of atoms and molecules due to the presence of an external static electric field.
Zeeman effect
The Zeeman effect is the splitting of a spectral line into several components in the presence of a static magnetic field.
Paschen series
The Paschen series is the series of transitions and resulting emission lines of the hydrogen atom as an electron goes from an electron shell greater than or equal to 4 to n = 3.
Hyperfine structure
Hyperfine structure is a small perturbation in the energy levels, or spectra, of atoms or molecules due to the magnetic dipole-dipole interaction, arising from the interaction of the nuclear magnetic moment with the magnetic field of the electron.
Brackett series
In atomic physics, the Brackett series describes a series of spectral line emissions of the hydrogen atom that appear in emission when hydrogen atoms' electrons descend to the fourth energy level from a higher level.
Franck-Hertz experiment
In 1914, the Franck-Hertz experiment elegantly supported Niels Bohr's model of the atom by demonstrating that atoms could indeed only absorb specific amounts of energy.
Moseley's law
Moseley's law is an empirical law concerning the characteristic x-rays that are emitted by atoms which justified the conception of the nuclear model of the atom.
In X-ray spectroscopy, K-alpha emission lines result when an electron transitions to the innermost K shell from a 2p orbital of the second or L shell.
Spin-orbit interaction
The spin-orbit interaction is any interaction of a particle's spin with its motion.
Siegbahn notation
The Siegbahn notation is used in x-ray spectroscopy to name the spectral lines that are characteristic to elements. It was created by Manne Siegbahn.
Creative Commons License |
2bc585e1e1de478f | guest post by Prof Sir John Meurig Thomas (Materials Science, Cambridge)
CDBU welcomes a diversity of views. Please contact us if you would like to be part of the conversation.
This article is a slightly modified form of one that appears in Angewandte Chemie 2013, the original version of which can be downloaded here.
“Research at the institute is primarily curiosity driven, which is reflected in the five sections comprising this Review”(on the oxidation of carbon monoxide).
So wrote H.-J. Freund, G. Meijer, M. Scheffler, R. Schlögl and M. Wolf in the special issue of Angewandte Chemie (50, 10064 (2012)) to mark the centenary of the Fritz Haber Institute of the Max Planck Society and the 75th birthday of Gerhard Ertl.
These words were music to my ears. The philosophy that animates research at the Fritz Haber Institute (FHI) was one that motivated almost all scientific developments in the universities of the United Kingdom in former times. But this is no longer so; indeed, such has been the transformation in attitudes of policy makers and funding bodies that it has prompted many leading academics in this country to establish a Council for the Defence of British Universities (CDBU) so as to re-instate the kind of ethos that is still pervasive at the FHI and doubtless at many other Max Planck Institutes. Four former Presidents of both the Royal Society and the British Academy, along with the present holders of those prestigious posts, two former UK Government Cabinet Ministers, numerous academics representing the sciences and the humanities, and notable celebrities like David Attenborough and Michael Frayn, are among the founders of the CDBU.
The following passages constitute a modified version of the remarks I was invited to make at the British Academy in the inaugural meeting of the CDBU in late 2012.
Ever since the days of Isaac Newton, university teachers have cherished the freedom to investigate any aspect of the natural world irrespective of the need to justify the possible practical importance of their discoveries. In the early 1850s, for example, the young James Clerk Maxwell became fascinated by the experimental discoveries of Michael Faraday, especially the observation that light could be “manipulated” by a magnetic field. So intrigued was Maxwell by Faraday’s work that he decided to write a treatise on “Faraday’s lines of force” as his Research Fellowship submission to Trinity College, Cambridge. The outcome of Maxwell’s work led to the mathematical foundation of the phenomenon of electromagnetism. One of the consequences of the Maxwell-Faraday work is the realisation that every ray of light has a magnetic and electrical component. If this were not so, it would be impossible to explain the transmission and reception of radiowaves or to account for the mode of action of television, the telephone, DVDs, iPhones and iPads. Newton’s Laws do not help us one iota in understanding the mechanisms of these and the other electronic gadgets now in popular use. It was Faraday’s insatiable curiosity concerning the possible relation between magnetism and electricity that led him to discover electro-magnetic induction, which gave us the dynamo, the transformer and the means of generating continuous electricity now used worldwide in power stations.
In the 1920s, young Paul Dirac, stimulated by the work of Heisenberg, Born and Jordan in Germany, undertook his quantum mechanical studies, which were motivated by sheer intellectual curiosity and the desire to incorporate relativistic features into the Schrödinger equation. Dirac's mathematical formulations, published in 1928, led him to propose the existence of the positron, the first-ever suggestion that anti-matter was a reality. It took another four years before the experimental proof of the positron's existence was established by Carl Anderson at the California Institute of Technology. For many decades thereafter the positron was regarded as a novelty with little prospect of it ever being harnessed for practical purposes. Now, however, almost every major hospital in the developed world uses positrons in the non-invasive medical technique of positron-emission tomography. Its many uses include charting cerebral activity and identifying stages in the growth of tumours.
Many other examples exist where university teachers, through inquisitive, intellectual adventures, have uncovered techniques of enormous and pervasive practical importance. It was pure curiosity that led scientists in the late 1940s to discover magnetic resonance spectroscopy and, a few decades later to another powerful, non-invasive medical technique, namely MRI, magnetic resonance imaging, now quite indispensable in most major hospitals.
In the 1950s, at Columbia University, Charles Townes became intrigued by the possibility that the populations of energy levels in simple molecules could be inverted, and also by the optical consequences of such inversion. When he proposed this experiment, Isidor Rabi, a Nobel prize-winning colleague, told him he was wasting his time. Several other notable physicists doubted whether such an experiment would ever work. But Townes stubbornly persevered and so discovered the maser (the forerunner of the laser). This has changed our world comprehensively. In addition, it duly led to the discovery that nearby galaxies shine maser light upon us.
The history of academic scientific endeavour is replete with important, transformational discoveries, the practical importance of which could not have been readily foreseen. Prominent examples are the discovery of X-rays, of nuclear fission, of antibiotics, antibodies, immuno-suppressive drugs (that make spare-part surgery feasible), and the structure of DNA, to name but a few. Scientific researchers know that discoveries cannot be planned: they pop-up, like Puck, in unexpected corners.
But why is it so relevant now to recall these facts? It is because scientific research in our universities is under threat: the freedom to pursue in untrammelled fashion research prompted by individual intellectual curiosity is being increasingly restricted by the paladins of the research councils. Public bodies that fund academic research in the UK now tend to emphasise the perceived practical importance of the scientific research which they decide to support financially. The Chief Executive of the UK’s Engineering and Physical Sciences Research Council (EPSRC), a body that spends some £900 million per annum on research grants, informed all applicants that from 15 November 2011, they should identify clearly the national importance of their proposed research project over a 10 to 50 year time frame.
This edict prompted outrage among academic researchers in the UK because they felt that it violated a cardinal principle of their proven prior attitudes. Delegations of academic scientists lobbied MPs and the British Prime Minister. It is gratifying to learn that, in response to these protests, the newly appointed chairman of EPSRC recently announced that the need for applicants to identify the national importance of their proposals over a 10 to 50 year span be rescinded. The CDBU welcomed this change of heart because one of its aims is to emphasise that scientific research, as well as being subject to accountability and having economic applications, should be animated by the desire to enhance our knowledge and understanding of the physical world, of human nature and of all forms of human activity.
No one disputes that there are several urgent scientific and technological quests that merit study in the national interest by academically-oriented researchers: the need for new means of converting replenishable feedstocks into useful energy and materials; the quest for better photo-voltaic systems and better industrially-applicable catalysts; improvements to existing light-emitting diodes and biotechnological converters are among the viable targets. But the best approach is to concentrate on identifying the talented individuals capable of proposing new ways of addressing these tasks, and to ensure that the requisite scientific training is provided in our higher educational institutes. This raises the question of how best to secure openings for talented young teacher-researchers. As the eminent U.S. chemist, Allen Bard, said a decade ago, the culture of academic research has shifted from evaluation based on excellence in teaching, creativity and productivity to one based on the amount of money raised. This is a consequence of implementing a “business model” for universities. In 2003 the UK Government explicitly encouraged universities to think of themselves as a business the primary function of which was to serve the world of commerce and an economy that demands instant return for financial investment. It is no accident that in the UK at present the Cabinet Minister for Universities and Science is in the Department of Business, Innovation and Skills. Moreover, UK universities are increasingly expected to generate their own funds (from patents and spin-off companies). If we think the quality of academic science suffers because of this approach then what, one wonders, will happen to the humanities.
It is undoubtedly mutually beneficial for academic scientists to interact with personnel in various manufacturing companies, and thereby help to foster work of national importance. But this must not be the only way forward. A very successful, but short-lived scheme in the UK that gave academics opportunities to indulge in “blue skies” research and to investigate natural phenomena out of curiosity (not financial profit), was the so-called ROPA initiative, introduced by the then Director General of the (UK) Research Councils, Sir John Cadogan. This gave academics the money and the freedom to explore whatever topic took their fancy, provided they had previously gained joint grants with private industry to pursue a mission-oriented project. Nearly half of the 1000 or so ROPA grants were so potentially interesting that industry was prompted to follow up the “blue skies” investigations of the academics.
The feeling amongst academics in the UK these days, and I imagine it prevails elsewhere, is that university personnel require a restoration of the proven qualities of intellectual freedom, which has contributed so much to the culture, and facilitated the economic growth and the communal well-being, of the nation.
In this regard, returning to the ethos of the FHI, it is prudent to recall the principles advocated by the late Max Perutz, founder and former Director of the Laboratory of Molecular Biology, LMB (of the UK Medical Research Council) in Cambridge; “Choose outstanding people and give them intellectual freedom; show genuine interest in everyone’s work and give younger colleagues public credit; enlist skilled support staff who design and build sophisticated and advanced new apparatus and instruments; facilitate the interchange of ideas, in the canteen as much as in seminars.”
Not only have Perutz’s principles led to fifteen Nobel Prize winners for scientists working at the LMB, there have also been numerous commercial successes that have flowed from the discoveries made and techniques developed there.
Unless the continual erosion of the intellectual freedom of scholarly academics is arrested and reversed the consequences for both the sciences and the humanities could prove catastrophic. |
de7ba7cb05b12f98 | paul le
Sequences and Series in Physics
One thing I wish my mathematics professors emphasized more (or at least mentioned in passing) was how useful sequences and series are in physics, especially Taylor Series.
The main idea with Taylor Series is that you can represent any analytic function as an infinite sum of terms calculated from the values of the function’s derivatives at a single point.
Any function that can be represented as a Taylor Series can be approximated using a finite number of terms of its Taylor Series, typically by only looking at the first few terms in the series.
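For reference, the standard form of the expansion about a point a (not spelled out in the original post) is
\[ f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^{n} = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^{2} + \cdots \]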
What this means is, given a complicated function, you can approximate what that function will look like near a given point, typically by using a line or even a parabola:
[Figure: Hooke's Law]
Notice how the dotted line fits pretty nicely with the red line near the origin. Think of the dotted line as a complicated, nasty function, and the red line as an approximation of that complicated, nasty function, that works pretty well if you stay close to the origin. Where do you get the red line from?
It’s the first two terms of the Taylor Series, which is essentially a line:
\[ f(x) \approx f(a) + f'(a)\,(x - a) \]
This is a very powerful idea, because often times in physics, the best we can do is approximate. The Taylor Series representation of a function is a tool that allows us to approximate many things, and in many cases (like above, with the complicated, nasty function) simplifies things greatly, without sacrificing too much accuracy.
You will definitely come across this in your classical mechanics class at some point, most notably when you study Hooke’s Law again. If you look at the graph above, you will notice that it is a graph plotting the force exerted by a Hookean spring as a function of its displacement from its equilibrium position.
The dotted line is how the spring actually behaves, but it is a very complicated and nasty function (in this case it looks like a cubic function, but many times we are not that lucky), and the red line represents Hooke’s Law, which is simply the second term of the Taylor Series representation of the dotted line function:
\[ F(x) = F(0) + F'(0)\,x + \frac{F''(0)}{2!}\,x^{2} + \cdots \]
If we expand about the equilibrium position at the origin, you get Hooke's Law (the first term is a constant we can ignore in this case because it is zero, and the factor in front of the second term is equal to −k, which we determine experimentally):
\[ F(x) \approx -kx \]
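As a concrete illustration (my own example, assuming purely for the sake of argument that the "nasty" force in the graph is a cubic such as F(x) = −kx − βx³): expanding about x = 0 gives F(0) = 0, F'(0) = −k and F''(0) = 0, so
\[ F(x) = -kx - \beta x^{3} \approx -kx \quad \text{for small } x, \]
which is exactly why the red Hooke's-Law line hugs the dotted curve near the origin and drifts away from it at larger displacements.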
This is only one, albeit an important application of sequences and series.
When you take introductory quantum mechanics, you learn how to use power series to solve the Schrödinger equation, in the context of a harmonic oscillator. Here is what that looks like, for some motivation: Harmonic oscillator - series solution
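A rough sketch of how that series solution goes (the standard textbook treatment, not taken from the linked page): write the time-independent Schrödinger equation for the harmonic oscillator in the dimensionless variable ξ, factor out the Gaussian tail as ψ(ξ) = h(ξ)e^{−ξ²/2}, and substitute a power series h(ξ) = Σ aⱼξʲ. This yields the recursion
\[ a_{j+2} = \frac{2j + 1 - K}{(j+1)(j+2)}\, a_j, \qquad K = \frac{2E}{\hbar\omega}, \]
and demanding that the series terminate (so that ψ remains normalizable) forces K = 2n + 1, i.e. the familiar energy levels \(E_n = (n + \tfrac{1}{2})\hbar\omega\).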
Adapted from my answer to a question on Quora.
Also published on Medium |
ced4374f3f83a959 | Download Lecture Two: Quantum Fundamentals
Lecture Two
Quantum Fundamentals
Fig. 1. Part of the emission spectrum of atomic hydrogen. Groups of lines have particular names, e.g. Balmer and Lyman series.
Fig. 2. Some of the transitions that make up the Lyman and Balmer series in the emission spectrum of atomic hydrogen.
Bohr’s theory of the atomic spectrum of hydrogen
In 1913, Niels Bohr combined elements of quantum theory and classical physics in a treatment of the hydrogen atom. He stated two postulates for an electron in an atom:
(i) Stationary states exist in which the energy of the electron is constant; such states are characterized by circular orbits about the nucleus in which the electron has an angular momentum mvr quantized according to
\[ mvr = n\,\frac{h}{2\pi}, \]
where m = mass of the electron, v = velocity of the electron, r = radius of the orbit, and h = the Planck constant. The integer, n, is the principal quantum number.
(ii) Energy is absorbed or emitted only when an electron moves from one stationary state to another,
\[ \Delta E = E_{n_2} - E_{n_1} = h\nu, \]
where n1 and n2 are the principal quantum numbers referring to the energy levels \(E_{n_1}\) and \(E_{n_2}\) respectively.
If we apply the Bohr model to the H atom, the radius of each allowed circular orbit can be determined from the equation below. The origin of this expression lies in the centrifugal force acting on the electron as it moves in its circular orbit; for the orbit to be maintained, the centrifugal force must equal the force of attraction between the negatively charged electron and the positively charged nucleus:
\[ r_n = \frac{\varepsilon_0\, h^2\, n^2}{\pi\, m_e\, e^2}. \]
Substitution of n = 1 gives a radius for the first orbit of the H atom of 5.293 × 10⁻¹¹ m, or 52.93 pm. This value is called the Bohr radius of the H atom and is given the symbol a₀.
An increase in the principal quantum number from n = 1 to n = ∞ has a special significance; it corresponds to the ionization of the atom, and the ionization energy, IE, can be determined as shown in the following example.
Values of IEs are quoted per mole of atoms:
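As a sketch of such a worked example (using the standard values for hydrogen): the energy of the n-th Bohr level is \(E_n = -13.6\ \text{eV}/n^2\), so ionization from the ground state (n = 1 to n = ∞) requires
\[ IE = E_\infty - E_1 = 0 - (-13.6\ \text{eV}) = 13.6\ \text{eV} \approx 2.18 \times 10^{-18}\ \text{J per atom}, \]
and multiplying by the Avogadro constant gives approximately 1.31 × 10³ kJ mol⁻¹ per mole of H atoms.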
Although the SI unit of energy is the joule, ionization energies are often expressed in electron volts (eV) (1 eV = 96.4853 ≈ 96.5 kJ mol⁻¹). |
262698400068dc14 | Uncertainty equation
How do you calculate uncertainty?
The relative uncertainty (coefficient of variation) is the standard measurement uncertainty (SD) divided by the absolute value of the measured quantity value: CV = SD/x, i.e. SD divided by the mean value. The combined standard measurement uncertainty is obtained using the individual standard measurement uncertainties associated with the input quantities in a measurement model.
What is the Heisenberg Uncertainty Principle equation?
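For reference, the standard form of the relation (not given on the original page) is
\[ \Delta x \, \Delta p \ge \frac{\hbar}{2} = \frac{h}{4\pi}, \]
where Δx and Δp are the uncertainties in position and momentum and ħ is the reduced Planck constant.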
What is uncertainty value?
In metrology, measurement uncertainty is the expression of the statistical dispersion of the values attributed to a measured quantity. Thus, the relative measurement uncertainty is the measurement uncertainty divided by the absolute value of the measured value, when the measured value is not zero.
What does uncertainty mean?
lack of certainty
What is percentage uncertainty?
The percent uncertainty can be interpreted as describing the uncertainty that would result if the measured value had been 100 units. A similar quantity is the relative uncertainty (or fractional uncertainty).
How do you calculate uncertainty concentration?
Finally, the expanded uncertainty (U) of the concentration of your standard solution is U = k * u_combined = 1.2% (in general, k = 2 is used). The molality is the amount of substance (in moles) of solute (the standard compound), divided by the mass (in kg) of the solvent.
How do you divide uncertainty?
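A standard rule worth noting here (general propagation of uncertainty for independent quantities, not taken from the original page): relative uncertainties add in quadrature under multiplication or division. For z = x/y,
\[ \frac{u(z)}{|z|} = \sqrt{\left(\frac{u(x)}{x}\right)^{2} + \left(\frac{u(y)}{y}\right)^{2}}. \]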
What is the Heisenberg Uncertainty Principle and why is it important?
Why is he called Heisenberg in Breaking Bad?
What is wave function Psi?
The wave function’s symbol is the Greek letter psi, Ψ or ψ. The wave function Ψ is a mathematical expression. The Schrödinger equation is an equation of quantum mechanics: calculated wave functions have discrete, allowed values for electrons bound in atoms and molecules; all other values are forbidden.
What is uncertainty with example?
What are the two types of uncertainty?
A Taxonomy of Uncertainty: Modal uncertainty is uncertainty about what is possible or about what could be the case. Empirical uncertainty is uncertainty about what is the case (or has been or would be the case). Normative uncertainty is uncertainty about what is desirable or what should be the case.
|
373698e425c769d9 | Young Researchers Colloquium
Jacek Krajczok, Filip Rupniewski, Jacopo Schino
Fridays 15:00 - 16:00, room 403/Zoom.
Next talk:
23.04.2021 Speaker: Sandra Lucente (University of Bari, Department of Physics)
Title: Making space for spaces
Abstract: In this talk, we summarize the various definitions of dimension. Our aim is to suggest a presentation of these ideas to a wide audience. For this reason, we follow a literary approach, connecting every space dimension to an invisible city by the Italian writer Italo Calvino and a real European city.
Previous talks:
16.04.2021 Speaker: Michał Łasica (Institute of Mathematics, Polish Academy of Sciences; University of Tokyo)
Title: The jagged landscape of fourth-order quasilinear parabolic systems
Abstract: I will briefly survey the existence theory of weak solutions to fourth-order quasilinear parabolic equations (and systems of equations), focusing on variational and monotonicity-based methods. I will try to explain how and why it's different from its better known second-order counterpart. Finally, I will present an existence result that I obtained recently in collaboration with Yoshikazu Giga.
09.04.2021 Speaker: Alexander Frei (University of Copenhagen)
Title: Relative Cuntz-Pimsner algebras: Gauge-invariant uniqueness theorem and the lattice of gauge-invariant ideals
Abstract: We start with an abstract definition of C*-correspondences comparing them to Fell bundles.
After a first few basic results, we then swiftly move on to their representations. We introduce here the concept of covariances and relative Cuntz-Pimsner algebras.
From here we go into a detailed analysis of covariances within the category of C*-correspondences. We obtain here a systematic reduction leading us to a parametrisation of relative Cuntz-Pimsner algebras.
With this at hand we arrive at the gauge-invariant uniqueness theorem, for all (arbitrary) gauge-equivariant representations at once.
From here we move on to the analysis part of the program. We study the covariances in the case of the Fock representation and its quotients. As a result we derive that the parametrisation of relative Cuntz-Pimsner algebras is classifying. In other words, we obtain a complete and intrinsic picture of the lattice of quotients, and equivalently of gauge-invariant ideals.
If time permits, we finish off with the next chapter on their induced Fell bundles, as already investigated by Schweizer.
26.03.2021 Speaker: Sanaz Pooya (Institute of Mathematics, Polish Academy of Sciences)
Title: Higher Kazhdan projections, L²-Betti numbers, and Baum-Connes conjectures
Abstract: The Baum-Connes conjecture suggests a link between operator algebras and topology/geometry. If it holds for a certain group, it provides topological tools to compute the K-theory of its reduced group C*-algebra. This conjecture has been confirmed for large classes of groups, such as amenable groups, but also for some Kazhdan's property (T) groups. Property (T) and its strengthening are driving forces in the search for potential counterexamples to the conjecture. Having property (T) for a group is characterised by the existence of a certain projection in the universal group C*-algebra of the group, known as the Kazhdan projection. It is this projection and its analogues in other completions of the group ring, which obstruct known methods of proof for the Baum-Connes conjecture.
In this talk, after providing background on the topic, I will introduce a generalisation of Kazhdan projections. Employing these projections we provide a link between the surjectivity of various versions of the Baum-Connes map and the L²-Betti numbers of the group. This is based on joint work with Kang Li and Piotr Nowak.
19.03.2021 Speaker: Artem Dudko (Institute of Mathematics, Polish Academy of Sciences)
Title: Finding order within the chaos: a brief introduction to Julia sets
Abstract: Informally speaking, the Julia set J(p) of a polynomial p(z) of a complex variable z is the set of points near which the iterates of p(z) behave chaotically. Despite being associated with chaos, Julia sets have a lot of beautiful structures. In this talk, I will give a brief introduction to Julia sets of polynomials and will try to explain why there is so much order within them.
12.03.2021 Speaker: Piotr Antoni Kozarzewski (Military University of Technology)
Title: Parametrised measures — examples & applications
Abstract: The aim of this talk is to introduce the classical notions of parametrised measures: Young Measures and DiPerna–Majda measures, as in [2] or [1]. The possibilities given by these notions will be illustrated on Euler's equation of incompressible fluids, as well as on several elementary examples. I will also sketch several modern approaches to DiPerna–Majda measures following [3] and explain the topological obstacles lying in the background, solved in [4].
[1] J.–J. Alibert and G. Bouchitté, Non-uniform integrability and generalized Young measures, Journal of Convex Analysis, 4 (1997), 129–147.
[2] R. J. DiPerna and A. J. Majda, Oscillations and concentrations in weak solutions of the incompressible fluid equations, Communications in Mathematical Physics, 108 (1987), 667–689.
[3] A. Kałamajska, On Young measures controlling discontinuous functions, Journal of Convex Analysis, 13 (2006), 177–192.
[4] P. A. Kozarzewski, On certain compactification of an arbitrary subset of R^m and its applications to DiPerna–Majda measures theory, submitted, (2020).
05.03.2021 Speaker: Sven Raum (University of Stockholm, Department of Mathematics)
Title: Superrigidity for group operator algebras
Abstract: It is a classical problem to recover a discrete group from various rings or algebras associated with it, such as the integral group ring. By analogy, in an operator algebraic framework, we want to recover torsion-free groups from certain topological completions of the complex group ring, such as the reduced group C*-algebra. Groups for which this is possible are called C*-superrigid.
I will start this talk with a discussion of how a group can be recovered from its group rings, before I introduce the reduced group C*-algebras and describe the state-of-the-art in C*-superrigidity. I will end with a short account on other kinds of superrigidity for group operator algebras putting the subject into a bigger perspective.
26.02.2021 Speaker: Hanieh Keneshlou (Institute of Mathematics, Polish Academy of Sciences)
Title: The birational geometry of Hurwitz spaces
Abstract: The Hurwitz space parametrizes d-sheeted simply branched covers of the projective line by smooth curves of genus g. The study of Hurwitz spaces plays an important role in shedding light on the geometry of the moduli spaces of curves M_g. In this talk, I will survey our knowledge on the birational geometry of Hurwitz spaces and I will present some results in this direction.
22.01.2021 Speaker: Oskar Stachowiak (University of Warsaw, Faculty of Physics)
Title: Counting paths in directed graphs
Abstract: Graph theory is considered one of the oldest and most accessible branches of combinatorics, and has numerous natural connections to other areas of mathematics. In particular, directed graphs, or quivers, are fundamental tools in representation theory and in noncommutative topology. In my talk, I will focus on the specific problem in the combinatorics of finite directed graphs: how to maximize the number of all paths of a fixed length in certain classes of graphs. Based on joint work with Piotr M. Hajac.
15.01.2021 Speaker: Alessandra De Luca (University of Milan-Bicocca, Department of Mathematics and Applications)
Title: From the Monotonicity Formula to the Unique Continuation Property for elliptic problems
Abstract: In my talk, I will focus on the study of the unique continuation property for second-order elliptic equations. To this aim, an important tool is the monotonicity formula for the so-called Almgren's frequency function associated with the solution of the problem, which can be derived using a suitable Pohozaev-type identity. I will give you an idea at first dealing with the cases of harmonic functions and also of very general perturbed elliptic problems.
After that, I will present an approximation argument which I developed in order to prove the unique continuation property when the domain is highly non-smooth due to the presence of a crack and, as a consequence of this construction, I will show some results related to problems where the fractional laplacian is involved.
08.01.2021 Speaker: Marzena Śniegowska (Nicolaus Copernicus Astronomical Center & Center for Theoretical Physics, Polish Academy of Sciences)
Title: Black holes in a nutshell
Abstract: I will briefly mention this year's Nobel Prize in Physics and the amazing result of the EHT collaboration, which is the first direct visual evidence of a supermassive black hole's silhouette. However, most of my talk will be focused on black holes in a more general and rather simplified way. I will explain, from an observational astronomer's point of view, how we can explore this part of astrophysics.
18.12.2020 Speaker: Marco Gallo (University of Bari, Department of Mathematics)
Title: Climbing a mountain can really make my day? Some insights on variational methods and the search for normalized solutions
Abstract: The goal of the talk is to give some ideas of the variational methods used to solve partial differential equations, focusing on the tool of the Mountain Pass theorem. We will then move to the case of normalized solutions for nonlinear Schrödinger equations, giving a picture of some recent techniques involving both the geometry and the compactness for the Lagrangian formulation of the problem. Finally, we will highlight how these tools well suit the case of fractional nonlocal operators.
11.12.2020 Speaker: Mateusz Wasilewski (KU Leuven)
Title: Random quantum graphs
Abstract: The study of quantum graphs emerged from quantum information theory. One way to define them is to replace the space of functions on a vertex set of a classical graph with a noncommutative algebra and find a satisfactory counterpart of an adjacency matrix in this context. Another approach is to view undirected graphs as symmetric, reflexive relations and "quantize" the notion of a relation on a set. In this case, quantum graphs are operator systems and the definitions are equivalent. Doing this has some consequences already for classical graphs; viewing them as operator systems of a special type has already led to introducing a few new "quantum" invariants.
Motivated by developing the general theory of quantum graphs, I will take a look at random quantum graphs, having in mind that the study of random classical graphs is very fruitful. I will show how having multiple perspectives on the notion of a quantum graph is useful in determining the symmetries of these objects.
Joint work with Alexandru Chirvasitu.
4.12.2020 Speaker: Michele Zaccaron (University of Padova, Department of Mathematics)
Title: Domain perturbation theory: results in spectral shape optimization via a functional analytic approach
Abstract: In this talk, we consider eigenvalue problems for two second-order differential operators: the former is important in linear elasticity and it involves the Laplacian, one of the most known and studied operators. The latter arises in the theory of electromagnetism and it is deeply related to Maxwell's equations, involving the so-called "curl curl" operator.
Addressing the classical question "Can one hear the shape of a drum?", we will focus on the study of the dependence of the eigenvalues upon shape perturbation, i.e. the perturbation of the domain in which the PDE is set.
The talk will contain a first introductory part in which I will try to explain the possible motivations and issues that arise when dealing with such eigenvalue problems, presenting some known results in spectral shape optimization for the Dirichlet/Neumann/Steklov Laplacian.
Then a more technical part will follow showing some of the tools and techniques used in the functional analytic study of these differential problems in order to give the audience a flavour of the ideas and instruments in my area of research. Here we will make use of the curl curl operator as an example, showing also recent results regarding optimization of the eigenvalues under suitable constraints (fixed volume or perimeter).
27.11.2020 Speaker: Javier de Lucas Araujo (KMMF UW)
Title: Classification of finite-dimensional Lie algebras of Hamiltonian vector fields on the plane and applications
Abstract: In this talk, I will start by describing the fundamental properties of Lie systems. A Lie system is a system of non-autonomous differential equations whose general solution can be written as a function of a finite generic family of particular solutions and some constants. Sophus Lie laid down the fundamental results of the theory of Lie systems. In particular, I will explain the celebrated Lie–Scheffers theorem, which states that every Lie system amounts to a curve in a finite-dimensional Lie algebra of vector fields. Lie classified all finite-dimensional Lie algebras of vector fields on the real line and on the plane whose vector fields span a regular distribution. His results were not fully explained, which led to the subsequent appearance of several false claims in the literature. This problem was finally fixed by Artemio Gonzalez, Niki Kamran, and Peter J. Olver towards the end of the 20th century.
Based on the Gonzalez, Kamran, and Olver classification of finite-dimensional Lie algebras of vector fields on the plane, which I will present and analyse, I will describe which of such Lie algebras can be considered as Lie algebras of Hamiltonian vector fields relative to a symplectic structure. We shall also show some of their relevant physical applications and their use in the theory of Lie systems. Finally, we will provide easily implementable algebraic and geometric methods to check whether a finite-dimensional Lie algebra of vector fields can be considered as a Lie algebra of Hamiltonian vector fields relative to a symplectic structure.
20.11.2020 Speaker: Maciej Gałązka (MIM UW)
Title: Examples of ranks of polynomials on toric varieties.
Abstract: We introduce the notion of rank of a polynomial for the Veronese embedding. We generalize it to some toric varieties. We investigate some lower bounds for rank and check when they can determine it. We examine some examples.
13.11.2020 Speaker: Caterina Sportelli (Università degli Studi di Bari Aldo Moro)
Title: Gradient-type quasilinear elliptic systems: existence and multiplicity results via a variational approach
Abstract: The first traces of variational methods date back to the 17th century, but one hundred more years were necessary for the formal statement of the new theory, the Calculus of Variations, which studies the existence of minimum points, in the pioneering works of Euler and Lagrange.
Nowadays, a variational approach is indispensable for dealing with many nonlinear phenomena and suitable advanced techniques allow one to look also for critical levels of minimax type.
In this talk, I will discuss the existence of solutions for a class of coupled quasilinear elliptic systems of gradient type with coefficients which depend also on the solution itself.
Although classical variational approaches fail, I will discuss how to overcome the difficulties that arise by introducing a suitable Banach space and a generalized version of the classical Ambrosetti-Rabinowitz Mountain Pass Theorem.
Finally, I will show that, under assumptions of symmetry, a multiplicity result can be stated, too.
06.11.2020 Speaker: Zofia Grochulska (MIM UW)
Title: Finding a replacement for diffeomorphisms.
Abstract: I will talk about a part of analysis which has a nonempty intersection with topology as it deals with homeomorphisms equipped with some notion of differentiability. We will try to learn if they behave like diffeomorphisms or not. In particular, I will talk about what is known about the so-called Ball-Evans question regarding the approximation of homeomorphisms by diffeomorphisms. The answer turns out to be non-trivial and important from the point of view of applications.
23.10.2020 Speaker: Francesco Esposito (University of Calabria, Department of Mathematics and Computer Science)
Title: The moving planes method of Aleksandrov-Serrin for some semilinear elliptic problems.
Abstract: The moving planes method is one of the most important techniques that have been used in recent years to establish some qualitative properties of positive solutions of nonlinear elliptic equations, like symmetry and monotonicity. In particular, the aim of this talk is to discuss the application of this technique to positive solutions of some semilinear elliptic problems under zero Dirichlet boundary conditions.
The first part of the talk is focused on the study of symmetry and monotonicity properties of classical solutions via the moving planes method of Aleksandrov and Serrin. The second part is dedicated to the description of a nice variant of this technique in the case of singular solutions.
16.10.2020 Speaker: Marcin Napiórkowski (KMMF UW)
Title: Hot topics in cold gases
Abstract: Since the first experimental realization of Bose-Einstein condensation in cold atomic gases in 1995 there has been a surge of activity in this field. Ingenious experiments have allowed us to probe matter close to zero temperature and reveal some of the fascinating effects quantum mechanics has bestowed on nature. It is a challenge for mathematical physicists to understand these various phenomena from first principles, that is, starting from the underlying many-body Schrödinger equation. In my talk, I shall explain some of the problems and results mathematicians (including myself) are working on.
09.10.2020 Speaker: Haonan Zhang (IST Austria)
Title: Convexity and concavity of trace functionals and why they matter
Abstract: In this talk I will give a brief introduction to the convexity and concavity of trace functionals involving trace and matrices. They play an important role in quantum information theory. Since Lieb's celebrated work in 1973 resolving the conjecture of Wigner-Yanase-Dyson, this topic has seen great progress. As an example, I will introduce a conjecture in quantum information theory of Audenaert-Datta (and a stronger one of Carlen-Frank-Lieb) in recent years, explain the connection with convexity/concavity of trace functionals, and show how to solve them in a simple way.
02.10.2020 Speaker: Eleonora Romano (University of Trento)
Title: An introduction to the Minimal Model Program and the case of surfaces
Abstract: The aim of this seminar is to give an introduction to the main problem in Birational Geometry, which is the birational classification of complex smooth projective varieties. To this end, we first introduce essential objects as divisors, cone of curves, and extremal contractions. Then, we will focus on the Minimal Model Program (MMP), by discussing the case of surfaces.
19.06.2020 Speaker: Kang Li (IMPAN)
Title: Kirillov's orbit method for the Baum-Connes conjecture for algebraic groups
Abstract: The orbit method for the Baum-Connes conjecture was first developed by Chabert and Echterhoff in the study of permanence properties for the Baum-Connes conjecture. Together with Nest they were able to apply the orbit method to verify the conjecture for almost connected groups and p-adic groups. In this talk, we will discuss how to prove the Baum-Connes conjecture for linear algebraic groups over local fields of positive characteristic along the same idea. It turns out that the unitary representation theory of unipotent groups plays an essential role in the proof. As an example, we will concentrate on the Jacobi group, which is the semi-direct product of the symplectic group with the Heisenberg group. It is well-known that the Jacobi group has Kazhdan's property (T), which is an obstacle to prove the Baum-Connes conjecture.
12.06.2020 Speaker: Arturo Martínez-Celis (IMPAN)
Title: The Lindelöf property in products
Abstract: A topological space is Lindelöf if every open cover has a countable subcover. Unlike the property of being compact, the Lindelöf property is not preserved under products; one can easily show that the Sorgenfrey Line (the real numbers with the topology generated by the half-open intervals [a,b)) is Lindelöf but the product with itself is not. On the other hand, compact spaces always have a Lindelöf product with any Lindelöf space. Spaces such that their product with every Lindelöf space is Lindelöf are called productively Lindelöf. In this talk we will discuss these properties, mainly in the metrizable setting, see some examples and counterexamples, and their relation with some old questions in Topology.
05.06.2020 Speaker: Jacek Krajczok (IMPAN)
Title: Tomita-Takesaki theory and locally compact quantum groups
Abstract: During the talk I will recall the notions of a von Neumann algebra and a weight. Later on, I will state the main results of the Tomita-Takesaki theory and present it in a couple of examples. In the second part of my talk I will show how this theory is used in the theory of locally compact quantum groups. In particular, I will discuss a relation between traciality of the Haar integrals and unimodularity of the dual quantum group.
29.05.2020 Speaker: Michał Miśkiewicz (MiM UW)
Title: Geometric PDEs – how standard things get hard
Abstract: Starting with examples of geometrically motivated partial differential equations, I will illustrate Hadamard's notion of a well-posed problem. Then I will discuss the harmonic map flow, which is a simple generalization of the classical heat equation to the setting of maps taking values in a given manifold. We will see how this geometric restriction causes fundamental problems in the analysis of solutions.
22.05.2020 Speaker: Jakub Siemianowski (IM PAN)
Title: Topological approach to elliptic PDEs
Abstract: I begin by recalling some topological tools and giving some intuition for them. In particular, I present one way of generalizing Bolzano's intermediate value theorem to higher dimensions and its connections with other existence theorems like Brouwer's fixed point theorem. Finally, I show how to use these tools to solve systems of elliptic partial differential equations.
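One higher-dimensional analogue of Bolzano's theorem of the kind mentioned above is the Poincaré-Miranda theorem (a standard statement, added for context, not quoted from the talk): if $f=(f_1,\dots,f_n):[-1,1]^n\to\mathbb{R}^n$ is continuous and, for each $i$, $f_i\le 0$ on the face $\{x_i=-1\}$ and $f_i\ge 0$ on the face $\{x_i=1\}$, then $f(x)=0$ for some $x\in[-1,1]^n$.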
13.03.2020 Speaker: Jacopo Schino (IMPAN)
Title: Distributions or: how I learned to stop worrying and pretend they're functions
Abstract: Distributions are often known as "a generalization of functions", but what does it mean exactly? In this talk, I will define formally what a distribution is, with a particular emphasis on how some operations with distributions (e.g. differentiation or convolution) are somehow induced by, and therefore generalize, the same operations with functions (whence the concept of distributions as generalized functions). I will also show possible applications (mostly to PDE's) of the theory of distributions.
06.03.2020 Speaker: Klaudiusz Czudek (IM PAN)
Title: Random walks on the interval
Abstract: Fix two increasing homeomorphisms of the interval into itself and assign to them some probabilities. Pick a starting point from the interior of the interval and consider a random walk in which we decide where to go in the next step by randomly selecting a homeomorphism according to the assigned distribution. I will describe the statistical behavior of this random walk.
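A minimal Python sketch of such a walk (not part of the abstract; the homeomorphisms f(x)=x^2 and g(x)=x^(1/2) and the probability 1/2 are illustrative assumptions):

import random

# Two increasing homeomorphisms of [0, 1] onto itself (illustrative choices).
def f(x):
    return x ** 2          # pushes points towards 0

def g(x):
    return x ** 0.5        # pushes points towards 1

def random_walk(x0=0.3, p=0.5, steps=10_000, seed=0):
    """At each step apply f with probability p, otherwise apply g."""
    rng = random.Random(seed)
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = f(x) if rng.random() < p else g(x)
        trajectory.append(x)
    return trajectory

traj = random_walk()
print(min(traj), max(traj))   # a crude look at where the walk spends its time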
28.02.2020 Speaker: Boulos El Hilany (IM PAN)
Title: On polynomial maps having maximally-dimensional preimages
Abstract: The coordinates of points in the codomain of a complex polynomial map are polynomials in the coordinates of points in its domain. Given such a map (under some mild assumptions), the complex dimension of a generic point's preimage depends solely on the dimensions of the source and target spaces. This, however, does not prevent some polynomial maps bringing about a preimage whose dimension is higher than expected. Points in the target space producing such preimages that have the maximal dimension form the maximality set of a map. I aim to give a glimpse at the possible topological properties that a complex polynomial map can have and their relation to its discrete invariants, such as the supports, that is, the sets of exponent vectors of the monomials in the respective underlying polynomials appearing with non-zero coefficients. Given a complex polynomial map, I will present a description for the (non-)emptiness of its maximality set using the corresponding supports' configuration.
24.01.2020 Speaker: Asahi Tsuchida (IM PAN)
Title: Introduction to catastrophe theory
Abstract: The goal of this talk is to prove Thom’s elementary catastrophe theorem.
Starting from a physical phenomenon, we survey the singularity theory of smooth maps and their stability.
Several comments on related topics will be given if time permits.
17.01.2020 Speaker: Zofia Michalik (MIM UW)
Title: The issues with volatility in asset price models
Abstract: The famous Black-Scholes formula for option pricing, published in 1973, was a breakthrough in financial mathematics. Since then it has been widely used by practitioners, mainly due to its simplicity. One of the main drawbacks of the model is that it assumes that the volatility of the asset price process is deterministic, which does not reflect reality. This motivates the need for a better model, in which the volatility of the asset price is itself stochastic. In my talk I will take the Black-Scholes model as a starting point and then present some of the stochastic volatility models and the mathematics behind them.
10.01.2020 Speaker: Fulgencio Lopez Serrano (IM PAN)
Title: Maximal equilateral sets
Abstract: A set is equilateral if every two points are the same distance apart. In the plane, the vertices of an equilateral triangle are an equilateral set. We will present results about maximal equilateral sets in Euclidean space. If possible, we will present results for general Banach spaces.
13.12.2019 Speaker: Reza Mohammadpour (IM PAN)
Title: Approximation of the maximal Lyapunov exponent
Abstract: Let T : X → X be a discrete dynamical system. A skew product over T is a dynamical system F acting on the product space X × Y such that π ⚬ F = T, where π: X × Y → X is the projection on the first coordinate. We are interested in skew products that act linearly on the second coordinate, so Y needs to be a vector space. Such skew products are called linear cocycles.
In this talk, we will focus on the Lyapunov exponents of linear cocycles. In particular, we will show that the maximal Lyapunov exponent can be approximated by periodic points.
6.12.2019 Speaker: Maksymilian Grab (MIM UW)
Title: Invariant Theory for pedestrians
Abstract: As already pointed out by Felix Klein in the nineteenth century, group actions are ubiquitous in geometry. Modern complex algebraic geometry is no exception to this slogan: homogeneous spaces, toric geometry, constructions of moduli spaces -- to name just a few topics closely related to algebraic group actions and their quotients. On the other hand, the construction of a quotient of an affine group action in the algebraic category is more subtle than just taking the evident space parametrizing orbits of the action.
My goal in this talk will be to motivate through examples and present the definition of a quotient in the sense of Geometric Invariant Theory (GIT) for a reductive algebraic group action on an affine variety over the field of complex numbers. Time permitting, the audience will be exposed to a simple example unveiling a bit of the non-obvious relation between GIT and the birational classification of algebraic varieties. Along the way we may meet a Hilbert problem.
29.11.2019 Speaker: Michał Godziszewski (MIM UW)
Title: Forcing, large cardinals, and the multiverse of models of set theory
Abstract: Gödel's discovery of the incompleteness phenomenon and Cohen's proof that the failure of the Continuum Hypothesis is consistent with ZFC showed that there exist many incompatible structures satisfying the basic axioms of set theory. Sophisticated techniques developed by set theorists over the course of the 20th century, notably Cohen's method of forcing, showed that many interesting properties of sets are not decided by ZFC, so that we can equally well consider ZFC augmented either by such a property or by its negation. In parallel, various statements unprovable in ZFC have been considered as candidates for new axioms for mathematics.
An important class of the above are Large Cardinal Axioms (LC) that (usually) posit the existence of infinite cardinal numbers that: are uncountable, share certain combinatorial properties with ω, reflect (to a certain degree) the structure of the entire universe of sets and can be regarded as strengthenings of ZFC-principles for generating new sets: usually interpreted as allowing for maximizing the universe's height. However, as was first shown by A. Levy and R. Solovay, certain important and natural combinatorial problems, such as CH, are independent of ZFC + LC. The method of forcing that was invented by P. Cohen to demonstrate the independence of CH from ZFC led eventually to another class of axiom-candidates (but it was by no means the purpose of the method).
The so-called Forcing Axioms (FA) are certain generalizations of the Baire Category Theorem. The first example of FA was isolated by D. Martin (hence its name: Martin's Axiom) from the study of the use of iterated forcing in R. Solovay's and S. Tennenbaum's proof of consistency of Suslin's Hypothesis (SH), arguably the second most important (after CH) problem of set theory in the first half of the 20th century. Related ideas led to the isolation of the notion of properness of forcing by S. Shelah in his study of Jensen's Forcing, leading to proving consistency of SH with GCH. What was the role of these axioms? FAs allowed for transforming forcing from a method of showing independence into a way of proving conditional theorems. Notably, most of them settle the value of the continuum to be the second uncountable cardinal number, and some FAs have been successfully used since then also outside set theory (e.g., in combinatorics, topology, measure theory, real analysis, and functional analysis). However, the main intrinsic reason why they can be considered as candidates for axioms is that they can be interpreted as allowing for maximizing the width of the set-theoretic universe (in a certain technical sense), and hence are presumably closely related to the iterative conception of set, and they express certain absoluteness principles (called generic absoluteness).
If we believe that to each of these various axiomatic systems corresponds a mathematical universe satisfying them, then there is a multiverse of mathematical universes, each with its own version of mathematics. These different mathematical universes agree to a significant degree, especially with respect to the mathematics of the finite, but give different answers to questions of the higher infinite. The purpose of the talk is to introduce the basic tools of contemporary axiomatic set theory and illustrate the multiverse perspective on foundations of mathematics with some recent results of my joint work with V. Gitman, T. Meadows, and K. Williams, concerning the collection of countable, recursively saturated models of set theory.
22.11.2019 Speaker: Masha Vlasenko (IM PAN)
Title: What is a period?
Abstract: Periods are numbers arising as integrals of algebraic functions over domains described by polynomial equations or inequalities with
rational coefficients. They include all algebraic numbers but also transcendental numbers, such as π and many other important constants
of mathematics and physics. Representations of periods as integrals help to notice relations and prove identities among them. That is why
one can think of periods as a class of "handy" numbers.
An algebraic number naturally comes with the set of its Galois conjugates. These are the other solutions of its minimal polynomial
equation with rational coefficients. Can one generalize the notion of conjugate numbers to periods? Although the definition of periods looks
so simple, it will allow us to have a glance at some central ideas of modern arithmetic geometry.
15.11.2019 Speaker: Artem Dudko (IM PAN)
Title: How to sum up divergent series and why this is useful
Abstract: By definition, a divergent series is... divergent, so it does not sum up in the usual sense. In the first part of the talk, starting with simple examples, I will show a few approaches to summing up divergent series. In the second part of the talk I will explain the Borel-Laplace summation procedure for divergent power series and show some applications to solving difference and differential equations and to studying dynamical systems.
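A standard worked example of the procedure mentioned in the second part (added for illustration, not quoted from the abstract): for the divergent series $\sum_{n\ge 0}(-1)^n = 1-1+1-\cdots$ the Borel transform is $B(t)=\sum_{n\ge 0}(-1)^n t^n/n! = e^{-t}$, and the Borel sum is
$$\int_0^{\infty} e^{-t}B(t)\,dt=\int_0^{\infty}e^{-2t}\,dt=\tfrac{1}{2},$$
which agrees with the value $\tfrac{1}{2}$ obtained from Abel summation of the same series.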
8.11.2019 Speaker: Adam Abrams (IM PAN)
Title: Mathematics in digital communication
Abstract: How can two people have a completely public conversation and both learn information that an eavesdropper does not? How can Gmail deliver messages securely, Netflix stream video efficiently, and Bitcoin transfer money reliably?
I will present several applications of mathematics to modern communication, including (depending on time) encryption, error correction, and compression. I will also discuss the main ideas behind distributed blockchain currencies.
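One concrete instance of the first question in the abstract is the Diffie-Hellman key exchange; here is a minimal toy sketch in Python (not from the talk; the prime, base, and private exponents are illustrative and far too small for real security):

# Toy Diffie-Hellman: both parties compute the same secret,
# while an eavesdropper only ever sees p, g, A and B.
p = 2_147_483_647          # a public prime modulus (2**31 - 1; illustrative only)
g = 5                      # a public base

a = 123_456                # Alice's private exponent
b = 654_321                # Bob's private exponent

A = pow(g, a, p)           # Alice publishes A
B = pow(g, b, p)           # Bob publishes B

shared_alice = pow(B, a, p)    # Alice computes g**(a*b) mod p
shared_bob = pow(A, b, p)      # Bob computes the same value
assert shared_alice == shared_bob
print(shared_alice)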
25.10.2019 Speaker: Jacopo Schino (IM PAN)
Title: Partial differential equations: an overview of the classical theory
Abstract: Do you know why the surface of water keeps making circles even after the stone has sunk, but when you're talking to somebody you don't just hear his/her voice again and again? Ever wondered why a drop of milk spreads so fast in a glass of water, or why they bothered giving harmonic functions one specific name?
This Friday I'll try to answer these questions looking closely at some important PDE's: I'll explain their physical derivation, highlight analogies and differences among them, emphasizing their physical meanings, and show the diverse methods used to build solutions.
18.10.2019 Speaker: David Martí-Pete (IM PAN)
Title: Wandering domains in transcendental dynamics
Abstract: We will start by introducing the main concepts in the iteration of holomorphic functions in the complex plane, such as the Fatou and Julia sets. We will focus on the iteration of transcendental entire functions (you can think of the exponential function). Wandering domains are components of the Fatou set that are not eventually periodic. Such components are specific to the iteration of transcendental functions in one complex variable: polynomials and rational maps do not have wandering domains. We will discuss how to classify wandering domains, their relationship with the singular values of the function, and the different ways of constructing them that exist up to now.
11.10.2019 Speaker: Mariusz Tobolski (IM PAN)
Title: A (locally) trivial talk
Abstract: I will discuss the main ideas of noncommutative topology and use the definition of the local-triviality dimension to illustrate how they work in practice. The local-triviality dimension generalizes the concept of a locally trivial principal bundle, which is pivotal in algebraic topology and fundamental in gauge field theories in physics.
4.10.2019 Speaker: Rami Ayoush (IM PAN)
Title: Fourier-analytic approach to the 2-wave cone condition
Abstract: During the talk I will discuss recent developments in the problem of estimating the Hausdorff dimension of measures satisfying PDEs as well as its connections to the theory of Fourier multipliers.
I will show how to prove a dimension estimate corresponding to a certain structurally more restrictive version of the 2-wave cone condition (of Arroyo-Rabasa, De Philippis, Hirsch, and Rindler) in a more general Fourier-analytic setting. A sketch of the proof will be based on the example of gradients from BV(ℝ³).
This is joint work with M. Wojciechowski.
14.06 Speaker: Alessandro Sisto (ETH)
Title: Studying groups using geometry
Abstract: Given a finitely generated group G, and a choice of finite generating set S, one can associate to (G, S) a metric space, called Cayley graph. The choice of S does not matter up to an equivalence relation called "quasi-isometry". A large part of geometric group theory is dedicated to studying the (large-scale) geometric properties of Cayley graphs and how they relate to the algebraic properties of groups. I will introduce all the relevant notions, and then focus on one of the main large-scale properties in geometric group theory, namely Gromov-hyperbolicity.
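For reference (standard definitions, added here, not quoted from the talk): the word metric on $(G,S)$ is $d_S(g,h)=\min\{n : g^{-1}h=s_1\cdots s_n,\ s_i\in S\cup S^{-1}\}$, and a map $f:(X,d_X)\to(Y,d_Y)$ is a quasi-isometry if there are constants $L\ge 1$ and $C\ge 0$ such that
$$\tfrac{1}{L}\,d_X(x,x')-C \;\le\; d_Y(f(x),f(x')) \;\le\; L\,d_X(x,x')+C$$
for all $x,x'\in X$, and every point of $Y$ lies within distance $C$ of the image of $f$.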
7.06.19 Speaker: Marcin Małogrosz (MiNI PW)
Title: Can one hear the shape of a quadrilateral?
Abstract: Although it is possible to hear the volume (1911 Weyl) and the measure of the boundary (1980 Ivrii) of a given domain, in general one can’t hear its shape (1964 Milnor for manifolds; 1992 Gordon, Webb and Wolpert for polygons). However, if it is known in advance that the domain is a triangle (1990 Durso) or that it has certain symmetries and an analytic boundary (2010 Zelditch), then the sound allows one to identify the domain up to isometries! During the talk I will shed some light on these issues (or make things much more obscure depending on your mathematical taste) and (hopefully) answer the title question.
31.05.19 Speaker: Adam Abrams (IM PAN)
Title: Coding sequences and the security problem for square billiards
Abstract: This talk will explore two problems related to billiards on a square table: (1) what sequences can arise from recording which side is involved in each successive bounce of a billiard ball, and (2) how easily can one prevent an “assassin” at one location from firing a billiard to hit a target elsewhere on the table? These problems display some surprising connections to computer science, to other dynamical systems, and to topology.
24.05.19 Speaker: Arkadiusz Mecel (MIM UW)
Title: Gröbner bases and the automaton property of Hecke-Kiselman algebras.
Abstract: Many algebraic structures are defined by generators and relations, starting from the symmetric group whose presentation is based on elementary transpositions and braid-type relations. Such constructions often involve graphs, as in the case of Coxeter groups. One benefit of this approach is that in the algebraic work we intuitively tend to work in the language of words, i.e. the elements of free algebras, be they monoids, groups, or rings. In 1965 Buchberger introduced the Gröbner basis, a set of multivariate polynomials with desirable algorithmic properties. In the commutative setting, every set of polynomials can be transformed into a finite Gröbner basis. This process generalizes three familiar techniques: Gaussian elimination for solving linear systems of equations, the Euclidean algorithm for computing the greatest common divisor of two univariate polynomials, and the Simplex Algorithm for linear programming. In the world of non-commutative polynomials, however, things get complicated, as the process of obtaining the Gröbner basis requires the choice of generators and an ordering on monomials, and may not terminate. In order to overcome this difficulty we often turn to structures based on automata.
In joint work with J. Okninski and M. Wiertel (both from the University of Warsaw) we consider Hecke-Kiselman monoids, introduced as a generalization of 0-Hecke algebras of Coxeter groups. These are semigroup algebras constructed in a very natural way from graphs. The combinatorics of these algebras is fairly simple to work with, yet quite a few elementarily formulated problems turn out to be surprisingly difficult.
17.05.19 Speaker: Maria Donten-Bury (MIM UW)
Title: Resolution of singularities
Abstract: The aim of the talk is to show how one can deal with singularities of algebraic varieties. I will focus on the class of quotient singularities, i.e. singularities of quotients of smooth algebraic varieties by finite group actions, which provide a lot of interesting examples.
10.05.2019 Speaker: Carlos Perez-Sanchez (FUW)
Title: An introduction to tensor models
Abstract: The interest in generalizing random matrix models --- seen as a framework for
2D random geometry --- to higher dimensions led Ambjørn, Durhuus, and
Jonsson to introduce tensor models by the end of last century. With this
decade's findings on the missing "1/N-expansion", Gurau further propelled
tensor models. This talk is an introduction to their combinatorial,
topological, and, if time allows, physical aspects.
26.04.2019 Speaker: Jacopo Schino (IM PAN)
Title: A further step into variational methods
Abstract: Last year I showed how to use (the infinite-dimension version of) the Weierstrass Theorem to find a weak solution of a certain linear elliptic (i.e., time-independent) problem. For some nonlinearities, though, this method is ineffective. In this talk I will show how to use other tools to solve an elliptic problem whose nonlinearity is of power type, with the exponent being in a certain range.
12.04.2019 Speaker: Mitsuru Wilson (IM PAN)
Title: Quantum Groups in Action
Abstract: Quantum groups are a generalization of groups whose algebras of (continuous) functions are generalized to Hopf algebras. Important group theoretic notions such as group action extend to the Hopf algebra setting. The goal of my talk is to serve as an introduction to quantum groups, actions by quantum groups, and some of my recent results.
05.04.2019 Speaker: Welington Cordeiro (IM PAN)
Title: Chaos theory: expansivity and sensitivity to initial conditions
Abstract: The sensitivity of chaotic systems to initial conditions is sometimes called the Butterfly Effect. The idea is that a butterfly flapping its wings in a South American rainforest could, in principle, affect the weather in Texas. This idea was first publicized by meteorologist Edward Lorenz, who constructed a very crude model of the convection of the atmosphere when it is heated from below. A property stronger than sensitivity is expansivity. The dynamics of expansive systems may be very complicated, but it is quite well understood. In this talk, we will explore the dynamics of systems with intermediate
behavior between sensitivity to initial conditions and expansivity.
29.03.2019 Speaker: Adam Śpiewak (MiM UW)
Title: Fractal measures in dynamical systems
Abstract: Fractal measures are probability distributions having properties analogous to fractal sets (self-similarity/complicated structure at arbitrary small scales/non-integer dimension). Even very simple models are not yet fully understood, with major open problems being an area of active research. During the talk I will present several examples of such measures and discuss their properties. Emphasis will be placed on examples originating from dynamical systems, where they arise naturally as invariant or stationary measures.
22.03.2019 Speaker: Tomasz Dębiec
Title: Conserved quantities and regularity in fluid dynamics.
Abstract: Conserved or dissipated quantities, like energy or entropy, are at the heart of the study of many classes of time-dependent PDEs in connection with fluid mechanics. This is the case, for instance, for the Euler and Navier-Stokes equations, for systems of conservation laws, and for transport equations. In all these cases, a formally conserved quantity may no longer be constant in time for a weak solution at low regularity. In this talk we discuss the interplay between regularity and conservation of energy in the realm of ideal incompressible fluids.
15.03.2019 Speaker: Zuzanna Szymańska.
Title: Mathematical Modelling in Biology and Medicine. Can we cure cancer with calculus?
Abstract: Over the last 30 years there has been an intensive development of mathematical modelling in the biomedical sciences. Models developed in collaboration with biologists and physicians have repeatedly enabled the verification of existing and emerging new research hypotheses as well as facilitated the design of new experiments. There are many differences between both normal and cancer cells and between healthy and cancerous tissue. Some of these key differences concern properties of individual cells and how quickly they divide, migrate or even evade the normal process of cell death. Other properties are concerned with how a solid tumour spreads to secondary parts of the body through the processes called invasion and metastasis. At some point in the development of a solid tumour, cancer cells from the primary cancerous mass of cells migrate and invade the local tissue surrounding the tumour. This initial invasion of the local tissue is the first stage in the complex process of secondary spread where the cancer cells travel to other locations in the individual and set up new tumours called metastases. These secondary tumours are responsible for around 90% of all (human) deaths from cancer. Knowing precisely how cancer cells invade the local tissue would enable better treatment protocols to be developed and consequently better individualised patient care. Cancer invasion and spread is, by its nature, a complicated phenomenon involving many inter-related processes across a wide range of spatial and temporal scales. Therefore, the theoretical support from mathematical modelling, analysis, computational simulation and systems biology in understanding these processes is extremely necessary. The last decades have witnessed enormous advances in our understanding of the molecular basis of cell structure and function. Scientists have made impressive advances in elucidating the mechanisms mediating cell-signalling and its consequences for the control of gene expression, cell proliferation and cell motility. With the rapid development of experimental methods, huge amounts of genetic, proteomic, biochemical and visual data become available. At the same time, the development of mathematical and computational models of various aspects of cancer growth has significantly contributed to the "theoretical side" - mathematical and computational models constructed in collaboration with biologists and clinicians have repeatedly enabled the verification of existing and formulation of new research hypotheses, and also facilitated the design of new experiments. In my talk I will give a short overview of current state-of-the-art in mathematical modelling of cancer disease.
8.03.2019 Speaker: Janusz Czyż.
Title: What is an aftermath of the Polish Mathematical School?
Abstract: The Polish Mathematical School flourished between the world wars, with Tarski's and Sierpiński's works on Cantor-Gödel arithmetic, Banach's functional analysis, and Marian Smoluchowski's works in statistical physics, among others. While the Second World War struck the Polish Mathematical School with destructive power, both figuratively and literally (Józef Marcinkiewicz was the youngest victim among this group), some of the scientists and much of their ideas survived and are still either classical (topology, Banach spaces and algebras) or inspiring new developments (Łukasiewicz's 3-valued logic, the Mycielski-Steinhaus axiom of determinacy). This was commemorated by holding the International Congress of Mathematicians in Warsaw in 1983, which was preceded by a problem session for Warsaw mathematicians, led by Michael F. Atiyah.
1.03.2019 Speaker: Artem Dudko
Title: On the spectrum of the Grigorchuk group.
Abstract: The Grigorchuk group G is a group which was constructed in 1980 and solved several open problems in group theory. The spectrum of a group is a certain set containing important information about the group. In my talk, I will explain the notion of the spectrum of a group, introduce the Grigorchuk group G, and present a joint result with R. Grigorchuk, namely, the calculation of the spectrum of G. The talk is intended for non-specialists. I will give all necessary definitions and provide some examples.
25.01.2019 Speaker: Ignacio Vergara
Title: Unitarizable groups and the Dixmier problem.
Abstract: This will be an introductory talk on group representations on Hilbert spaces and the notion of unitarizability. A group is said to be unitarizable if every uniformly bounded representation is "similar" to a unitary representation. In 1950, Day and Dixmier proved (independently) that every amenable group is unitarizable. The converse remains open and is known nowadays as the Dixmier problem.
My goal in this talk is to explain the previous paragraph in detail, without assuming any knowledge of group representations or amenability. If time allows, I will give a complete proof of the Day–Dixmier theorem.
11.01.2019 Speaker: Adam Abrams
Title: Introduction to Dynamical Systems
Abstract: Dynamical systems studies changes to systems over time. With such a broad description, almost anything can be presented as a dynamical system, and indeed mathematical dynamics relates a wide array of topics. Fortunately, there are some important definitions and results that can be made accessible to a wide range of mathematicians. This talk will cover several important notions in dynamics, such as minimality, mixing, and ergodicity. We will address these concepts mainly through instructive examples rather than through detailed proofs.
14.12.2018 Speaker: Janusz Czyż
Title: From Copernicus to the Polish School of Mathematics and Logic
Abstract: The Copernican Revolution was an event both in astrophysics (or physics) and in mathematics. Namely, Copernicus's pupil Rheticus in Cracow initiated the computation of multi-digit trigonometric tables, for which a logarithmic calculus was needed. Also the Copernican Mercuroid defined in the ``de Revolutionibus...'' was a masterpiece in the art of approximation and the most advanced geometric
7.12.2018 Speaker: Tomasz Pełka
Title: Classifying planar rational cuspidal curves
Abstract: Let E be a closed algebraic curve on a complex projective plane; and assume that E is homeomorphic to a projective line. The classification of such curves, up to a projective equivalence, is a classical open problem with interesting counterparts in topology and symplectic geometry. The Coolidge-Nagata conjecture ('59), proved recently by Koras and Palka, asserts that every such curve is obtained from a line by some Cremona transformation.
The known curves can in fact be constructed inductively. I will sketch this nice picture during my talk, including, as an example, two newly discovered families of curves. But to prove this conjecture, one needs to study the complement of such a curve. The simple topology of this affine surface makes the algebraic subtlety clearly visible. In the most difficult case when it is of log general type, the classical tools, such as the logarithmic Minimal Model Program applied to the minimal smooth completion (X,D), are not sufficient. The idea of Palka is to study the pair (X,(1/2)D) instead. This leads to the Negativity Conjecture, which asserts that the log Kodaira dimension of K+(1/2)D is negative. I will explain why this is the natural extension both of the Coolidge-Nagata conjecture and of some other open problems concerning rigidity of (X,D). I will also report on our recent classification, up to a projective equivalence, of rational cuspidal curves satisfying this conjecture. They turn out to share certain unexpected properties. The aim of this talk is, on one hand, to advertise the log MMP modifications as a modern tool applicable in a much broader context, and on the other hand, to give new, elementary, but interesting examples of the rich geometry of planar curves and Cremona maps.
23.11.2018 Speaker: Piotr Hajac
Title: Operator algebras that one can see
Abstract: Operator algebras are the language of quantum mechanics just as much as differential geometry is the language of general relativity. Reconciling these two fundamental theories of physics is one of the biggest scientific dreams. It is a driving force behind efforts to geometrize operator algebras and to quantize differential geometry. One of these endeavours is noncommutative geometry, whose starting point is the natural equivalence between commutative operator algebras (C*-algebras) and locally compact Hausdorff spaces. Thus noncommutative C*-algebras are thought of as quantum topological spaces, and are researched from this perspective. However, such C*-algebras can enjoy features impossible for commutative C*-algebras, forcing one to abandon the algebraic-topology based intuition. Nevertheless, there is a class of operator algebras for which one can develop new ("quantum") intuition. These are graph algebras, C*-algebras determined by oriented graphs (quivers). Due to their tangible hands-on nature, graphs are extremely efficient in unraveling the structure and K-theory of graph algebras. We will exemplify this phenomenon by showing a CW-complex structure of the Vaksman-Soibelman quantum complex projective spaces, and how it explains their K-theory.
(in the academic year 2018-19 the organizers were dr Tristan Bice, dr Michał Gaczkowski, and dr Tatiana Shulman)
15.06.2018 Speaker: Jacopo Schino
Title: An introduction to variational methods.
Abstract: In this talk I'm going to present an introduction to variational methods, by which we mean looking for (weak) solutions to a certain PDE problem as critical points of a proper functional.
We'll go through a simple linear problem: I'll recall the new notion of solution and show how we get there, then I'll prove the existence of a solution by the infinite-dimensional version of the Weierstrass Theorem and, if I have time, the uniqueness of such a solution.
8.06.2018 Speaker: Reza Mohammadpour
Title: Lyapunov exponents of cocycles.
Abstract: Lyapunov exponents tell us the rate of divergence of nearby trajectories – a key component of chaotic dynamics. For one-dimensional maps the Lyapunov exponent at a point is the Birkhoff average of $\log|f'|$ along the trajectory of this point. For a typical point of an ergodic invariant measure it is equal to the average of $\log|f'|$ with respect to this measure.
In this talk, we will give an introduction to Lyapunov exponents of cocycles. Moreover, we will give a few examples related to PDE, smooth dynamics, and probability. We deal with Lyapunov exponents of products of random i.i.d. 2x2 matrices of determinant $\pm 1$. We will see how the Lyapunov exponent gives information about the growth of the norm of the matrices $A^{n}(x)$. Finally, we will discuss conditions under which the Lyapunov exponents are always positive.
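A minimal numerical sketch of the i.i.d. 2x2 situation described above (not from the talk; the two determinant-±1 matrices and the fair-coin sampling are illustrative assumptions):

import numpy as np

# Two 2x2 matrices with determinant +1 and -1 (illustrative choices).
A1 = np.array([[2.0, 1.0], [1.0, 1.0]])   # det = 1
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])   # det = -1

def max_lyapunov(n_steps=100_000, seed=0):
    """Estimate the maximal Lyapunov exponent as (1/n) * log ||A_n ... A_1 v||."""
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 0.0])
    log_growth = 0.0
    for _ in range(n_steps):
        A = A1 if rng.random() < 0.5 else A2   # pick a matrix with probability 1/2 each
        v = A @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)             # accumulate the log of the growth factor
        v /= norm                              # renormalize to avoid overflow
    return log_growth / n_steps

print(max_lyapunov())   # expected to be positive for this choice of matrices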
25.05.2018 Speaker: Monika Szczepanowska
Abstract: Energies of submanifolds are an analytic tool for studying topological and geometric properties of embedded submanifolds. Interest in energies was rekindled in the nineties by pioneering works of Freedman, of He and Wang and of O'Hara, who used energies of curves to study knottedness of links in R^3. One of the most useful energies is the Menger curvature defined originally for curves in R^3, and which was studied in detail by many authors. The Menger curvature can be generalized to surfaces and to higher-dimensional submanifolds. The intuition behind the energy is that, if a submanifold has complicated topological behavior (for example the curve is knotted), then the energy should be large. In other words, small energy
should imply simple topology. We want to look at some basic definitions and intuitions in this field. For instance,
we shall review some properties of energies, and discuss difficulties and unexpected traps one encounters while trying to extend the definition of energy to higher dimensions. At the end, we will sketch a proof that there exists a positive constant C such that, for every smooth, closed and connected surface S in R^3 of total area 1, the genus g(S) and the energy E(S) of the surface S satisfy the inequality $g(S) \le C\,E(S)$.
11.05.2018 Speaker: Samuel Evington
Title: The C*-Algebra Petting Zoo.
Abstract: On a warm sunny Friday afternoon in Warsaw, we shall take a close look at some of the key examples of C*-algebras. In the process, we will learn a bit about the general theory of C*-algebras and why they are worth studying. I will then speak briefly about the dramatic recent progress in the classification of C*-algebras and my research in this area.
13.04.2018 Speaker: Marcin Małogrosz
Title: Stability analysis of stationary solutions of reaction-diffusion systems via the linearisation principle and spectral analysis of Schrödinger operators.
Abstract: Roughly speaking, a steady state of a dynamical system is called stable if it attracts every trajectory starting from a sufficiently small neighbourhood of it. In the finite dimensional case (a system of ordinary differential equations in Euclidean space) the linearisation theorem states that stability of an equilibrium is equivalent to the stability of the Jacobian matrix at that equilibrium if the spectrum of the matrix is separated from the imaginary axis. On the other hand, the Routh-Hurwitz theorem gives equivalent conditions for the stability of a matrix in terms of the signs of certain polynomial expressions of its entries. These two theorems make the finite dimensional case somewhat understood and allow one to perform the stability analysis of steady states of concrete systems with the use of numerical computations. The situation is more complicated in the case of infinite-dimensional dynamical systems, specifically systems of partial differential equations of reaction-diffusion type. Although the linearisation principle extends with the Jacobian matrix being replaced by a Schrödinger operator, there is no counterpart of the Routh-Hurwitz theorem and only partial results concerning stability of such operators are known. We will discuss these matters in detail during the talk.
23.03.2018 Speaker: Antoni Kijowski
Title: On functions with the mean value property.
Abstract: I will discuss functions possessing the mean value property. Such a class has been introduced in metric measure spaces as one of the possible definitions of harmonic functions in this setting. During the talk I will focus on the consequences of this property and present a large collection of examples to complete the analysis.
16.03.2018 Speaker: Damian Orlef
Title: How to study generic groups and why?
Abstract: I will talk about the Gromov density model, in which a group is obtained by specifying generators and then imposing a random set of relations between them.
The model is simple, yet produces groups with interesting geometric properties and provides testing grounds for various methods and conjectures. I will describe
some of the landscape, focusing later on left-orderability and property (T) (which leads to a connection with random graphs). No previous knowledge of the topic(s) is assumed.
26.01.2018 Speaker: Vasiliki Evdoridou (IM PAN)
Title: The escaping set of transcendental entire functions
Abstract: In this talk we will give an introduction to the iteration of transcendental entire functions focusing on the escaping set. The escaping set consists of points that go to infinity under iteration and it plays an important role in the area. After giving the definition and discussing some of its properties, we will look at the structure of the escaping set. In particular, we will present two specific examples of transcendental entire functions for which the structure of the escaping set differs significantly.
19.01.2018 Speaker: Konrad Aguilar (University of Denver).
Title: Topologies for the ideal space of C*-algebraic inductive limits.
Abstract: C*-algebras can be viewed as operator norm-closed
self-adjoint subalgebras of bounded operators on Hilbert spaces.
Because of this, the theory of C*-algebras benefits from a rich
representation theory. As the kernel of a representation is a norm
closed two-sided ideal of a C*-algebra, a major tool in this theory
comes in the form of topologies on ideals, where ideals are seen as
points. This begins with the Jacobson topology on primitive ideals,
which are ideals formed by the kernels of non-zero irreducible
representations. From this, J.M.G. Fell developed a topology on all
norm closed two-sided ideals of a C*-algebra. Motivated in part by
this, we developed a new topology on the ideal spaces of C*-algebras
formed by inductive limits (C*-inductive limits). We then compare this
topology to the other topologies discussed. We also show that our
topology agrees with Fell's topology for the particular
C*-inductive limits called approximately finite-dimensional
C*-algebras (AF-algebras). As an application, we first provide a
metric topology on certain quotients of AF-algebras using the tools of
Noncommutative Metric Geometry, in particular M.A. Rieffel's compact
quantum metric spaces and F. Latremoliere's quantum Gromov-Hausdorff
propinquity. Next, we introduce sufficient conditions for when our
topology on ideals produces a continuous map from certain sets of
ideals to the associated space of quotients. An example of such a
continuous map is given by the Boca-Mundici AF-algebras. This shows
that the act of taking a quotient can be seen to be continuous at the
level of viewing ideals and quotients as points of topological spaces.
15.12.2017 Speaker: Tomasz Kochanek
Title: The Szlenk index and asymptotic geometry of Banach spaces.
Abstract: The notion of Szlenk index was introduced in 1968 by W. Szlenk in order to show that there is no universal Banach space in the class of all separable reflexive Banach spaces. Its origins stem from the Cantor-Bendixson index which is well-known in topology. Since the pioneering paper by Szlenk, his index and several similar ordinal indices have proven to be extremely useful tools in Banach space theory. During the talk, we will discuss some intuitions behind the Szlenk index and the Szlenk power type, and some of the most recent results on this topic. These involve connections with asymptotic geometry of Banach spaces and the theory of asymptotic structures, which is quite fundamental for understanding the current state of knowledge in the structural theory of Banach spaces.
8.12.2017 Speaker: Krzysztof Ziemiański
Title: Directed Algebraic Topology.
Abstract: I will talk about directed spaces; these are topological spaces with some additional structure, which can be used for modelling concurrent programs. I will define directed counterparts of classical topological invariants and present main problems which are investigated in this area.
1.12.2017 Speaker: Piotr Achinger
Title: Around monodromy.
Abstract: Do you feel that you are going around in circles and not getting anywhere? Things may not be as bad as they seem. You might be getting somewhere, but not realizing it because you aren't aware of your personal monodromy*.
In this lecture, I will provide a gentle overview of the concept of monodromy in the context of algebraic geometry, algebraic topology, differential equations, and number theory.
* (c) Nick Katz
24.11.2017 Speaker: Masha Vlasenko
Title: Formal groups and congruences.
Abstract: I will give a friendly introduction to the theory of formal group laws focusing on arithmetic questions such as integrality and local invariants.
10.11.2017 Speaker: Tristan Bice (IM PAN).
Title: <<-Increasing Approximate Units in C*-Algebras (joint work with Piotr Koszmider).
Abstract: It is well known that every C*-algebra has an increasing approximate unit w.r.t. the usual partial order on the positive unit ball. We consider the strict order << instead, where a << b means a = ab. Here again it is well known that every separable or sigma-unital C*-algebra has a <<-increasing approximate unit, but the general case remained unresolved. In this talk we outline our recent work showing that this extends to omega_1-unital C*-algebras but not, in general, to omega_2-unital C*-algebras. In particular, we consider C*-algebras defined from Kurepa/Canadian trees which are scattered and hence LF but not AF in the sense of Farah and Katsura. It follows that whether all separably representable LF-algebras are AF is independent of ZFC.
3.11.2017 Speaker: Safoura Zadeh (IM PAN).
Title: Isomorphisms between the left uniform compactification of locally compact groups.
Abstract: For a locally compact group $G$, let $C_{b}(G)$ be the space of all complex-valued, continuous and bounded functions on $G$ equipped with the sup-norm, and $LUC(G)$ be the subspace of $C_{b}(G)$ consisting of all functions $f$ such that the map $G\to C_b(G);x\mapsto l_xf$ is continuous, where $l_xf$ is the function defined by $l_xf(y)=f(xy)$, for each $y\in G$. The subspace $LUC(G)$ forms a unital commutative C*-algebra. We can induce a multiplication on the Gelfand spectrum of $LUC(G)$, $G^{LUC}$, with which $G^{LUC}$ forms a semigroup. In this talk, I study some properties of $G^{LUC}$, the so called right topological semigroup compactification of $G$. I also discuss the question of when the corona, $G^{LUC}\setminus G$, determines the underlying topological group $G$.
20.10.2017 Speaker: Mateusz Wasilewski (IM PAN).
Title: Non-commutative techniques in classical probability
Abstract: I will discuss two classical probabilistic objects -- random walks and birth-death processes -- using the language of operator theory. We will see how commutation relations between certain operators (related to the generators of the aforementioned processes) allow us to perform explicit computations; the combinatorial tools from free probability, such as non-crossing partitions, will appear naturally. If time permits, I will show how to make a transition from classical probability to quantum probability.
P.S. I may present an example involving Darth Vader and stormtroopers.
9.06.2017 Speaker: Olli Toivanen (IM PAN)
Title: Regularity in generalized Orlicz spaces
Abstract: A Lebesgue space $L^p(\Omega)$, $\Omega \subset \mathbb{R}^n$, is the space of those functions f for which $\int_\Omega |f|^p\,dx < \infty$. This sort of integrability condition can be generalized in various ways, such as saying "instead of integrating a power p, let's consider some other function of f", or "instead of a fixed power p, let's allow p to be a function p(x) of the point $x \in \Omega$". These approaches lead, respectively, to Orlicz spaces and to variable exponent Lebesgue and Sobolev spaces.
It is an interesting question whether these and other generalizations can be brought together and covered by a "super-generalization", and whether (say) the minimizers of that integral would have regularity (say Hölder continuity) with assumptions even remotely as good as those of the individual cases.
Somewhat surprisingly, this seems to be the case. I will speak on these generalized Orlicz or Musielak-Orlicz spaces, and on the various recent results in building a regularity theory on them. I'll start from "Hölder regularity of quasiminimizers under generalized growth conditions", by Harjulehto, Hästö, and myself; Calc. Var. PDEs, 56 (2), 2017.
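One common way to phrase the unifying framework described above (a standard formulation, added for context, not quoted from the abstract): the Musielak-Orlicz space is built from the modular
$$\varrho_{\varphi}(f)=\int_{\Omega}\varphi\bigl(x,|f(x)|\bigr)\,dx,$$
and the choices $\varphi(x,t)=t^{p}$, $\varphi(x,t)=\Phi(t)$ (no $x$-dependence) and $\varphi(x,t)=t^{p(x)}$ recover the Lebesgue, Orlicz and variable exponent spaces, respectively.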
2.06.2017 Speaker: Andrey Krutov (IMPAN)
Title: Introduction to the geometry of partial differential equations
Abstract: We will consider the basic material on the geometric approach to partial differential equations and symmetries, including an introductory part on the geometry of jet spaces.
26.05.2017 Speaker: Saeed Ghasemi (IMPAN)
Title: SAW*-algebras and sub-Stonean spaces
Abstract: SAW*-algebras are C*-algebras which are noncommutative analogues of sub-Stonean spaces (F-spaces) in topology, i.e., spaces for which two disjoint open, σ-compact sets have disjoint closures. Many properties of sub-Stonean spaces were generalized to general SAW*-algebras. For example, Pedersen showed that the corona algebras of σ-unital C*-algebras are SAW*, which generalizes the fact that Cech-Stone remainders of locally compact σ-compact Hausdorff spaces are sub-Stonean spaces. I will talk about the continuous maps from products of compact spaces into sub-Stonean spaces. In particular, it is well known that there are no injective continuous maps from the product of infinite compact spaces into a sub-Stonean space. I will present a generalization of this result to SAW*-algebras, i.e., there are no surjective maps from SAW*-algebras onto C*-tensor products of two infinite-dimensional C*-algebras. This in particular answers a question of Simon Wassermann, who conjectured that the Calkin algebra is essentially non-factorizable.
19.05.2017 Speaker: Karen Strung (IMPAN)
Title: Smale spaces and C*-algebras
Abstract: In this talk, I will describe the hyperbolic dynamical systems known as Smale spaces. These include such well-known examples as the shifts of finite type, hyperbolic toral automorphisms and Anosov diffeomorphisms. Each Smale space gives rise to topological equivalence relations coming from the stable, unstable, and homoclinic relations. I will show how these can be used to construct C*-algebras and describe some of their structural properties.
12.05.2017 Speaker: Jarosław Mederski (IMPAN)
Title: Nonlinear Maxwell equations
Abstract: Our aim is to solve the system of Maxwell equations in the presence of nonlinear polarization in a bounded domain. A solution of the system describes propagation of electromagnetic fields in a nonlinear medium. We show that the problem leads to a semilinear equation involving the curl-curl operator. In the talk we present the functional setting and variational methods which allow us to deal with the curl-curl operator and find ground state solutions. We recall the classical mountain pass theorem as well as recent generalized Nehari manifold approaches. At the end of my talk we discuss new directions of research and some open problems which seem to be important from the physical point of view and challenging from the mathematical side.
5.05.2017 Speaker: Iwona Skrzypczak (IMPAN)
Title: Approximation in anisotropic and non-reflexive Musielak-Orlicz spaces
Abstract: The talk will concern a generalization of the Sobolev spaces, namely the general Musielak-Orlicz spaces, where the norm is
governed by an integral of a general convex function depending not only on the function, but also on the spatial variable.
The highly challenging part of analysis in the general Musielak-Orlicz spaces is giving a relevant structural condition implying approximation properties of the space. However, we are equipped not only with the weak-* and strong topologies of the gradients, but also with an intermediate one, namely the modular topology.
The brief presentation of the setting and ideas is planned to be clear even for those who are not used to nonlinear analysis.
21.04.2017 Speaker: Lashi Bandara (University of Gothenburg)
Title: Functional calculus for bisectorial operators and applications to geometry
Abstract: The bounded holomorphic functional calculus for bisectorial operators can be thought of as an implicit Fourier theory in settings where the transform cannot be defined. It has been particularly useful in low-regularity situations such as Euclidean domains and in obtaining non-smooth perturbation estimates. The power of the tool lies in the fact that its boundedness can often be obtained via real-variable harmonic analysis methods. In this talk, I'll give an introduction to these operators, the functional calculus, its connection to harmonic analysis and more recent applications to geometry.
7.04.2017 Speaker: Paweł Józiak (IMPAN)
Title: Algebra meets probability.
Abstract: Many mathematical disciplines can have direct connections to one another, but we are sometimes convinced that some of them are separated and the connection is indirect and distant. One such example could be the theory of probability and pure algebra -- the aim of the talk is to convince you that in fact algebra and probability are well acquainted with each other. During the talk, I'd like to present a purely algebraic/combinatorial proof of the central limit theorem (CLT), one of the building blocks of modern probability theory. If time permits, I will also smuggle in a bit of a modern branch of operator algebras, called free probability theory (with its own CLT). No algebraic or probabilistic prerequisites will be necessary; the talk is aimed to be understandable by a general audience (e.g. undergraduate mathematics or physics students).
24.03.2017 Speaker: Marithania Silvero (IMPAN)
Title: A combinatorial approach to Khovanov homology
Abstract: Khovanov homology is a link invariant introduced in 2000 by Mikhail
Khovanov. This bigraded homology categorifies the Jones polynomial and it has been
proved to detect the unknot. In this talk we present a new approach to extreme
Khovanov homology in terms of a specific graph constructed from the link
diagram. With this point of view, we pose a conjecture related to the
existence of torsion in extreme Khovanov homology and show some examples where
the conjecture holds.
|
5c591318ea1557ea |
Published: March 18, 2004
Sir John A. Pople, a mathematician who became a chemist and won a Nobel Prize in 1998 for a computer tool that describes the dance of molecules in chemical reactions, died Monday at his daughter's home in Chicago. He was 78.
The cause of death was liver cancer, his family said.
Dr. Pople was among the first to realize the potential of computers in chemistry.
The behavior of all molecules is defined by the Schrödinger equation, the fundamental formula of quantum mechanics. But the equation is impossible to solve exactly except in the simplest cases.
In the 1960's, Dr. Pople developed methods for calculating approximate solutions, determining the orbits of electrons zipping around molecules. From the electron orbits, the computer program predicts properties of the molecules, including whether they are stable, which colors of light they will absorb or emit, and the pace of chemical reactions.
The work culminated in a program, Gaussian-70, published in 1970. That program and succeeding versions have become a common tool for chemists.
''It's literally thousands of chemists worldwide who are using the results of Pople's research,'' said Dr. Stuart W. Staley, a professor of chemistry at Carnegie-Mellon University in Pittsburgh, where Dr. Pople taught for many years. ''It's had a tremendous impact.''
In recent years, however, Dr. Pople was not among its users. In 1991 he left Gaussian Inc., a company set up to market the computer program. ''There were disagreements about how the company should grow, and so he parted ways with other founders of the company,'' said Michael J. Frisch, president of Gaussian and a former student of Dr. Pople.
When Dr. Pople helped found a competing company, Q-Chem, in 1993, Gaussian declined to license newer versions of its software to him.
Born on Oct. 31, 1925, in Burnham-on-Sea, a small town on the west coast of England, John Anthony Pople (pronounced POPE-el) was the first in his family to attend college, graduating with a bachelor's degree in mathematics from Cambridge University in 1946. He completed his doctoral degree at Cambridge in 1951 and continued working there through 1958.
He left Cambridge to head the basic physics division at the National Physical Laboratory in England, and in 1964 he became a professor of chemistry at the Carnegie Institute of Technology, now part of Carnegie-Mellon University. In 1993 he moved to Northwestern University. He remained a British citizen after moving to the United States, and last year he was knighted for his chemistry achievements.
Sir John's wife, Joy, died in 2002 after nearly 50 years of marriage. He is survived by his daughter, Hilary; three sons, Adrian, who lives in Ireland; Mark, of Houston; and Andrew, of Pittsburgh; 11 grandchildren; and a great-granddaughter.
Sir John's interest in the puzzles of physical chemistry, as opposed to abstract mathematics, dated from early in his career. His doctoral thesis, for instance, explored the structure of water.
''I had clearly changed from being a mathematician to a practicing scientist,'' he wrote in an autobiography on the Nobel Prize Web site. ''Indeed, I was increasingly embarrassed that I could no longer follow some of the more modern branches of pure mathematics, in which my undergraduate students were being examined.''
Photo: Sir John A. Pople (Photo by Associated Press, 1998) |
c1c9449e13c95d03 |
In many areas of mathematics (PDE, algebra, combinatorics, geometry), when we have difficulty coming up with a solution to a problem, we consider various notions of "generalized solutions". (There are also other reasons to generalize the notion of a solution in various contexts.)
I would like to collect a list of "generalized solutions" concepts in various areas of mathematics, hoping that looking at these various concepts side-by side can be useful and interesting.
Let me demonstrate what I mean by an example from graph theory: A perfect matching in a graph is a set of disjoint edges such that every vertex is included in precisely one edge. A fractional perfect matching is an assignment of non-negative weights to the edges so that, for every vertex, the sum of the weights of the edges containing it is 1. In combinatorics, moving from a notion described by a 0-1 solution of a linear programming problem to a solution over the reals is called the LP relaxation of the problem, and it is quite important in various contexts.
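A small worked example of this relaxation (added for illustration): the triangle $K_3$ has no perfect matching, since disjoint edges can cover at most two of its three vertices, but assigning weight $1/2$ to each of its three edges gives a fractional perfect matching, because every vertex lies on exactly two edges and $1/2+1/2=1$.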
(There are, of course, useful papers or other resources on generalized solutions in specific areas. It will be useful to have links to those but not as a substitute for actual answers with some details.)
Actually the scope of the answers is much larger than what I thought! (But I cannot formally define what was the more restricted scope I had in mind). – Gil Kalai Oct 8 '10 at 21:16
16 Answers
Partial Differential Equations (PDE) is a topic where generalizing the notion of solutions is a daily activity.
The most obvious generalization has been the notion of weak solutions, which means that a solution $u$ is not necessarily differentiable enough times for the derivatives involved in the equation to make sense; but an integration against test functions, followed by an integration by parts, cures the problem. The most known example is that of the Laplace equation $$\Delta u=f\qquad\hbox{over }\Omega,$$ where it is enough for $u$ to have locally integrable first-order derivatives, by rewriting the equation as a variational formulation (Dirichlet principle) $$\int_\Omega \nabla u\cdot\nabla vdx=-\int_\Omega fvdx$$ for every $v\in{\mathcal C}^1_c(\Omega)$ (subscript $c$ means compact support).
What is important in this process is to satisfy the rule
If $u$ has enough derivatives that the equation makes sense pointwise, then it is a weak solution if and only if it is a classical solution.
Let us mention in passing that in order to use the full strength of functional analysis and operator theory, this weak notion of solutions led to the birth of Sobolev spaces and Distribution theory (L. Schwartz).
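To make the weak formulation concrete, here is a minimal computational sketch (my own illustration, not part of the answer) of the Dirichlet variational formulation for the one-dimensional problem $-u''=f$ on $(0,1)$ with zero boundary values, discretized with piecewise-linear hat functions; the grid size and the choice $f\equiv 1$ are assumptions made purely for illustration.

```python
import numpy as np

# Weak form of -u'' = f on (0,1), u(0) = u(1) = 0:
# find u_h with  int u_h' v' dx = int f v dx  for every hat function v.
n = 50                       # number of interior nodes (illustrative choice)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.ones(n)               # right-hand side f(x) = 1 (illustrative choice)

# Stiffness matrix: int phi_i' phi_j' dx for piecewise-linear hat functions.
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
b = h * f                    # load vector int f phi_i dx

u = np.linalg.solve(A, b)    # Galerkin (weak) solution at the nodes
exact = 0.5 * x * (1 - x)    # classical solution of -u'' = 1
print(np.abs(u - exact).max())   # agrees with the classical solution
```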
This framework has been used for nonlinear equations and systems too, for instance for the Navier-Stokes, Euler, Schrödinger equations, ... An important question is whether this framework is accurate or not. By accurate, we mean that boundary and/or initial data yield a unique solution, which depends continuously on the data. This is the question of well-posedness. In many cases, functional analysis, sometimes combined with topological arguments, yields an existence theorem. A celebrated one is J. Leray's existence result for the Navier-Stokes equations of an incompressible fluid. However, uniqueness is often another matter, a difficult one. For a $3$-dimensional fluid, uniqueness for Navier-Stokes is a $1$M US dollar open question (one of the Clay Millennium Prize problems).
Uniqueness is often (but not always) associated with regularity. In many situations, there are weak-strong uniqueness results, which state that if a classical, or regular enough, solution exists, then there does not exist any other weak solution (say, in a class where we do have an existence result). It is an "if"-theorem, in the absence of an existence result for strong solutions. For elliptic and parabolic equations, the regularity theory is a topic of its own.
Whereas regularity is often expected in elliptic or parabolic equations and systems, it is not for hyperbolic ones, because we know that singularities do propagate, and that they can even be created in finite time thanks to nonlinear effects. Then the notion of weak solutions becomes meaningful, in that it translates in mathematical terms the physical notion of conserved quantities. It gives algebraic relations for the jump of the solution and its derivatives across discontinuities (Rankine-Hugoniot relations).
Finally, I very much like the way the theories of nonlinear elliptic equations and of Hamilton-Jacobi equations have developed in the past decades. At the beginning, it was observed that the maximum principle, known for classical solutions, remains valid for weak ones. This suggested, when the nonlinearity is so strong that a variational formulation is not available, that the maximum principle itself be used to define a notion of viscosity solution. The idea is to test the PDE at $x_0$ with a test function $\phi$ comparable to $u$ (either $\phi\le u$ or $\phi\ge u$ locally) and touching $u$ at $x_0$. This has been extremely powerful.
In the third to last paragraph, you wrote: "Uniqueness is often (but not always) associated to uniqueness". Is that intentional? Perhaps one of the two uniquenesses ought to be regularity? – Willie Wong Oct 7 '10 at 17:23
Of course ! Thank you for careful reading. I correct immediately. – Denis Serre Oct 8 '10 at 6:38
Formal solutions to partial differential relations
Given a partial differential relation, that is, a subset $\mathcal{R} \subset J^k(\mathbb{R}^n, \mathbb{R}^m)$ of the space of $k$-jets of smooth maps $\mathbb{R}^n \to \mathbb{R}^m$, one can consider the space of smooth (say) maps $f$ from an $n$-manifold $N$ to an $m$-manifold $M$ such that $J^k(f) \in \mathcal{R}$, i.e. so that the $k$-jet of the function lies in the subspace $\mathcal{R}$ at each point. Call the space of such maps $\mathrm{Sol}_\mathcal{R}(N, M)$.
On the other hand, we can consider the bundle $J^k(N, M) \to N$ of $k$-jets of maps from $N$ to $M$, and the associated subbundle $\mathcal{R}(N, M) \to N$, and call the space of sections of this last bundle $\mathrm{FSol}_\mathcal{R}(N,M)$, the space of formal solutions. This space is far easier to analyse, for example because constructing sections of a bundle is a purely homotopy-theoretic problem.
Taking derivatives gives a comparison map $$\mathrm{Sol}_\mathcal{R}(N, M) \to \mathrm{FSol}_\mathcal{R}(N, M).$$ If $\mathcal{R}$ is open in $J^k(\mathbb{R}^n, \mathbb{R}^m)$ and the manifold $N$ is open, Gromov showed that the comparison map is a homotopy equivalence. In particular, if the space of formal solutions is non-empty, so is the space of actual solutions.
Moduli problem: find a good parametrization of geometric objects of some type; the parametrization should form a collection equipped with some natural geometric structure, and is therefore a geometric object in its own right. While the naive "parameter space" is a set, in the structured formulation it is replaced by a moduli space which classifies the geometric objects we started with. In the simplest case, the moduli problem is representable by a space in the usual sense, an object in more or less the same category as the original geometric objects: for example a manifold or a scheme where the original objects were manifolds or schemes. With harder problems the moduli lead to more and more general kinds of objects. This motivated new types of spaces such as stacks, higher stacks, derived stacks and so on.
It appears that, starting with the original geometric category, most of the generalized objects needed to solve the moduli problem live in some nice geometric subcategory (e.g. algebraic stacks) of the category of (possibly categorified) presheaves or sheaves on the original category, including higher versions like simplicial presheaves and so on. The original category embeds by the corresponding version of the Yoneda embedding into the category of (pre)sheaves. The new ambient category of presheaves not only more often contains a solution to the moduli problem, but also has many other improved natural properties, like closedness under limits.
Cohomology theories, various generalized cocycles, generalized smoothness notions and so on can also be accommodated after the Yoneda embedding into a homotopy-correct version of the presheaf category, as in the emerging subject of derived geometry. In the original terms of non-generalized spaces, one would need all kinds of difficult and dirty techniques to define and study the generalized notions, for example introducing various piecewise-continuous cocycles, multivalued or infinite-dimensional models and so on. Methods depending on the Yoneda philosophy give a rather universal setting for attacking moduli problems and many other problems (like deformation theory), often eliminating the construction of very elaborate but ad hoc modifications of the original concepts. Inside the bigger category it may be easier to cut out some nice geometric subcategory of geometric spaces which includes the solutions to the moduli problem than to construct a similar category in terms of the original geometry. Of course, sometimes the difficult elementary models have their own specific strengths, which do not follow from the application of general methods.
Affine schemes:
Given any ring $R$, try to find a map from it into a local ring $L$ which is initial among maps to local rings (i.e. any other map from $R$ into a local ring should factor through this one, followed by a map of local rings, i.e. one such that the preimage of the maximal ideal is the maximal ideal). Such a thing does not exist, unless $R$ is already local.
But if we allow $L$ to be a ring object living in a different topos than that of sets, then it exists: It is the local ring object living in $Sh(Spec R)$ given by the structure sheaf $\mathcal{O}_{Spec R}$ (see also my post here)
Given a set of polynomial diophantine equations, it is useful to study solutions in any ring, instead of just studying integer solutions. (This is the "functorial point of view" of a scheme over $\mathbb Z$.)
Well, then I'll start with the most obvious generalized solutions:
• weak solutions to PDEs
• Schwartz's generalized functions, a.k.a. distributions,
• Colombeau's algebra(s) of generalized functions and
• various other kinds of generalized functions
• Quasi-Minima in functional analysis: A quasi-minimum of a functional $\mathcal{F}$ is a $u$ such that $\mathcal{F}u\leq Q\mathcal{F}v$ for all $v$ (with some constant $Q\geq 1$)
• Every solution of a polynomial equation within $\mathbb{C}$ can be a generalized solution if your problem is something that has only real (maybe some geometric problem), only integer, or even only natural-number (maybe something from number theory) solutions. But considering all complex solutions to your particular equation often gives a very elegant treatment of the problem.
Ideals in rings of integers of number fields arose as "ideal numbers"...
Grothendieck topologies (or: toposes as generalized spaces):
There is no topology on a general scheme which is e.g. fine enough to give back the cohomological dimensions expected from geometry, but with a more general notion of covering (or: of space) this works out.
Quotient "spaces":
While quotients, e.g. of group actions, in geometry often are degenerate, several generalized notions of quotient space help here: Sheaf quotients, Orbifolds, Algebraic Spaces, Stack quotients, Homotopy quotients, Non-commutative quotients, GIT quotients, ...
(similarly with moduli spaces)
Complex numbers arose as ideal solutions of polynomial equations with real coefficients, I guess.
I edited another answer of yours where you wrote "Polinomial". Are you doing it intentionally or is there any other reason? Thanks – Unknown Oct 7 '10 at 16:17
No, I'm just careless :-) If you clean up those things I have no objections - thanks! – Peter Arndt Oct 8 '10 at 0:21
Generalized Eigenvector
I'm surprised no one has yet mentioned the first example an undergraduate is likely to see. Suppose $A$ is a linear map from a finite dimensional vector space $V$ to $V$, with eigenvalue $\lambda$. Any nonzero vector in $\text{ker}(A-\lambda I)^k$ for some $k\ge1$ is called a generalized eigenvector for eigenvalue $\lambda$. These are used in proving the existence of the Jordan Canonical Form.
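As a small concrete sketch (my own addition, not part of the answer), SymPy makes the definition easy to check on an assumed defective matrix: the ordinary eigenspace for $\lambda=2$ is one-dimensional, but $\ker (A-2I)^2$ supplies the extra generalized eigenvector used in the Jordan form.

```python
import sympy as sp

# A defective matrix: eigenvalue 2 with algebraic multiplicity 2,
# but only a one-dimensional ordinary eigenspace.
A = sp.Matrix([[2, 1],
               [0, 2]])
I2 = sp.eye(2)

print((A - 2 * I2).nullspace())         # ordinary eigenvectors: span{(1, 0)}
print(((A - 2 * I2) ** 2).nullspace())  # generalized eigenvectors: all of R^2

P, J = A.jordan_form()                  # Jordan form is built from these vectors
print(J)                                # Matrix([[2, 1], [0, 2]])
```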
In linear algebra (linear inverse problems) one generalizes the notion of a solution of a linear operator equation $Ax=y$ to
1. "best approximation" if there is no solution, i.e. minimizing the functional $\|Ax-y\|$,
2. "Minimum-norm solution" if there is a subspace of solutions, i.e. taking that solution of $Ax=y$ which has minimal norm,
3. both (if the best approximation is not unique) leading to the Moore-Penrose inverse.
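A quick numerical sketch of these three cases (my own illustration, with an assumed rank-deficient matrix): NumPy's pseudoinverse returns exactly the Moore-Penrose generalized solution, i.e. the minimum-norm least-squares solution.

```python
import numpy as np

# Rank-deficient system: Ax = y has no exact solution, and the
# least-squares minimizer is not unique either.
A = np.array([[1.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])
y = np.array([1.0, 0.0, 1.0])

# Moore-Penrose pseudoinverse: minimum-norm least-squares solution.
x = np.linalg.pinv(A) @ y
print(x)                          # [0.25, 0.25]
print(np.linalg.norm(A @ x - y))  # best achievable residual

# lstsq returns the same generalized solution for rank-deficient A.
x2, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(x, x2))         # True
```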
Virtual knots
Louis Kauffman generalized knots by introducing virtual crossings, virtual knots and virtual Reidemeister moves, which led to some interesting developments in knot theory.
One of the most fruitful notion of generalized solution in optimization and combinatorics is linear programming relaxation. Quoting from the wikipedia article: In mathematics, the linear programming relaxation of a 0-1 integer program is the problem that arises by replacing the constraint that each variable must be 0 or 1 by a weaker constraint, that each variable belong to the interval [0,1].
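As a concrete sketch (my own, not from the answer): the triangle graph $K_3$ has no perfect matching, since it has an odd number of vertices, but its LP relaxation is feasible, with weight 1/2 on every edge. The SciPy call below just checks feasibility of that relaxation; the graph and the bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Triangle graph K3: vertices 0, 1, 2 and edges (0,1), (1,2), (0,2).
# Fractional perfect matching: edge weights w >= 0 such that, at every
# vertex, the weights of the incident edges sum to 1.
edges = [(0, 1), (1, 2), (0, 2)]
A_eq = np.zeros((3, len(edges)))        # one constraint row per vertex
for j, (u, v) in enumerate(edges):
    A_eq[u, j] = 1
    A_eq[v, j] = 1
b_eq = np.ones(3)

# Feasibility LP (constant objective), each weight restricted to [0, 1].
res = linprog(c=np.zeros(len(edges)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * len(edges))
print(res.status, res.x)   # status 0 (feasible), x = [0.5, 0.5, 0.5]
```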
A form of "generalized solution" which I have seen in various areas, such as combinatorial optimization problems, diophantine equations, and computational complexity, is "statistical physics relaxation". You regard your original problem as the "temperature 0" case of a more general problem and try to gain insight on the original problem based on statistical-physics insights for the generalized problem. I am not sure what the general recipe for this approach is, and I will be happy to see an edited version with further explanation and links.
I'll mention a recent paper by Baez and Stay on 'Algorithmic thermodynamics', arxiv.org/abs/1010.2067 which contains results about randomness and complexity, depending on a temperature parameter, in which, to quote, "the randomness described by Chaitin and Tadaki then arises as the infinite-temperature limit." – David Roberts Dec 6 '10 at 12:23
I think that such an example is the use of the residue theorem to calculate real integrals via contour integration. You use complex analysis to solve a problem in real analysis.
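For instance (my own sketch, not part of the answer), the real integral $\int_{-\infty}^{\infty}\frac{dx}{1+x^2}=\pi$ follows by closing the contour in the upper half-plane and picking up the residue at $z=i$; SymPy can check both sides:

```python
import sympy as sp

x, z = sp.symbols('x z')

# Residue of 1/(1+z^2) at its pole z = i in the upper half-plane.
res_at_i = sp.residue(1 / (1 + z**2), z, sp.I)
print(2 * sp.pi * sp.I * res_at_i)     # pi, by the residue theorem

# Direct evaluation of the real integral agrees.
print(sp.integrate(1 / (1 + x**2), (x, -sp.oo, sp.oo)))   # pi
```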
|
89d55740763ec8af |
Molecular orbital theory
In chemistry, molecular orbital (MO) theory is a method for determining molecular structure in which electrons are not assigned to individual bonds between atoms, but are treated as moving under the influence of the nuclei in the whole molecule.[1] In this theory, each molecule has a set of molecular orbitals, in which it is assumed that the molecular orbital wave function ψj may be written as a simple weighted sum of the n constituent atomic orbitals χi, according to the following equation:[2]
$$\psi_j = \sum_{i=1}^{n} c_{ij} \chi_i.$$
The cij coefficients may be determined numerically by substitution of this equation into the Schrödinger equation and application of the variational principle. This method is called the linear combination of atomic orbitals (LCAO) approximation and is used in computational chemistry. An additional unitary transformation can be applied on the system to accelerate the convergence in some computational schemes. Molecular orbital theory was seen as a competitor to valence bond theory in the 1930s, before it was realized that the two methods are closely related and that when extended they become equivalent.
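In the simplest case the LCAO/variational step reduces to a small generalized matrix eigenvalue problem. The following is a rough sketch (not from the cited sources) for a homonuclear diatomic with one basis orbital per atom; the on-site energy, coupling, and overlap values are illustrative assumptions, not fitted parameters.

```python
import numpy as np
from scipy.linalg import eigh

# Two-orbital LCAO model: solve H c = E S c (generalized eigenvalue problem).
alpha, beta, s = -13.6, -5.0, 0.25     # illustrative on-site, coupling, overlap values
H = np.array([[alpha, beta],
              [beta, alpha]])
S = np.array([[1.0, s],
              [s, 1.0]])

energies, C = eigh(H, S)               # columns of C are the MO coefficients
print(energies)   # bonding (alpha+beta)/(1+s) below antibonding (alpha-beta)/(1-s)
print(C[:, 0])    # bonding MO: equal-magnitude, in-phase atomic contributions
```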
Molecular orbital theory was developed, in the years after valence bond theory had been established (1927), primarily through the efforts of Friedrich Hund, Robert Mulliken, John C. Slater, and John Lennard-Jones.[3] MO theory was originally called the Hund-Mulliken theory.[4] The word orbital was introduced by Mulliken in 1932.[4] By 1933, the molecular orbital theory had become accepted as a valid and useful theory.[5] According to German physicist and physical chemist Erich Hückel, the first quantitative use of molecular orbital theory was the 1929 paper of Lennard-Jones.[6] The first accurate calculation of a molecular orbital wavefunction was that made by Charles Coulson in 1938 on the hydrogen molecule.[7] By 1950, molecular orbitals were completely defined as eigenfunctions (wave functions) of the self-consistent field Hamiltonian and it was at this point that molecular orbital theory became fully rigorous and consistent.[8] This rigorous approach is known as the Hartree–Fock method for molecules although it had its origins in calculations on atoms. In calculations on molecules, the molecular orbitals are expanded in terms of an atomic orbital basis set, leading to the Roothaan equations.[9] This led to the development of many ab initio quantum chemistry methods. In parallel, molecular orbital theory was applied in a more approximate manner using some empirically derived parameters in methods now known as semi-empirical quantum chemistry methods.[9]
Molecular orbital (MO) theory uses a linear combination of atomic orbitals (LCAO) to represent molecular orbitals involving the whole molecule. These are often divided into bonding orbitals, anti-bonding orbitals, and non-bonding orbitals. A molecular orbital is merely a Schrödinger orbital that includes several, but often only two, nuclei. If this orbital is of the type in which the electron(s) in the orbital have a higher probability of being between nuclei than elsewhere, the orbital will be a bonding orbital, and will tend to hold the nuclei together. If the electrons tend to be present in a molecular orbital in which they spend more time elsewhere than between the nuclei, the orbital will function as an anti-bonding orbital and will actually weaken the bond. Electrons in non-bonding orbitals tend to be in deep orbitals (nearly atomic orbitals) associated almost entirely with one nucleus or the other, and thus they spend as much time between the nuclei as not. These electrons neither contribute to nor detract from bond strength.
Molecular orbitals are further divided according to the types of atomic orbitals combining to form a bond. These orbitals are results of electron-nucleus interactions that are caused by the fundamental force of electromagnetism. Chemical substances will form a bond if their orbitals become lower in energy when they interact with each other. Different chemical bonds are distinguished that differ by electron configuration (electron cloud shape) and by energy levels.
MO theory provides a global, delocalized perspective on chemical bonding. For example, in the MO theory for hypervalent molecules it is unnecessary to invoke a major role for d-orbitals, whereas valence bond theory normally uses hybridization with d-orbitals to explain hypervalency. In MO theory, any electron in a molecule may be found anywhere in the molecule, since quantum conditions allow electrons to travel under the influence of an arbitrarily large number of nuclei, as long as permitted by certain quantum rules. Although in MO theory some molecular orbitals may hold electrons that are more localized between specific pairs of molecular atoms, other orbitals may hold electrons that are spread more uniformly over the molecule. Thus, overall, bonding (and electrons) are far more delocalized (spread out) in MO theory, than is implied in valence bond (VB) theory. This makes MO theory more useful for the description of extended systems.
An example is the MO picture of benzene, which is composed of a hexagonal ring of 6 carbon atoms. In this molecule, 24 of the 30 total valence bonding electrons are located in 12 σ (sigma) bonding orbitals, which are located mostly between pairs of atoms (C-C or C-H), similar to the valence bond picture. However, in benzene the remaining 6 bonding electrons are located in 3 π (pi) molecular bonding orbitals that are delocalized around the ring. Two of these electrons are in an MO which has equal contributions from all 6 atoms. The other two orbitals have vertical nodes at right angles to each other. As in the VB theory, all of these 6 delocalized pi electrons reside in a larger space that exists above and below the ring plane. All carbon-carbon bonds in benzene are chemically equivalent. In MO theory this is a direct consequence of the fact that the 3 molecular pi orbitals form a combination that evenly spreads the extra 6 electrons over the 6 carbon atoms.[10]
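A rough numerical illustration of these three delocalized π orbitals (my own sketch, using the standard Hückel π-only approximation with arbitrary units, not a quantitative calculation): diagonalizing the 6×6 ring Hamiltonian gives one lowest orbital spread evenly over all six carbons plus a degenerate pair, and these three bonding levels hold the 6 π electrons.

```python
import numpy as np

# Hückel pi-only model of benzene: 6 carbon 2p orbitals on a ring.
alpha, beta = 0.0, -1.0            # illustrative units (energy relative to alpha)
H = alpha * np.eye(6)
for i in range(6):
    j = (i + 1) % 6                # nearest neighbours around the ring
    H[i, j] = H[j, i] = beta

E, C = np.linalg.eigh(H)           # eigenvalues in ascending order
print(np.round(E, 3))              # alpha+2*beta, alpha+beta (x2), then the antibonding mirror set
print(np.round(C[:, 0], 3))        # lowest pi MO: equal weight on all 6 carbons (up to sign)
```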
In molecules such as methane, the 8 valence electrons are found in 4 MOs that are spread out over all 5 atoms. However, it is possible to approximate the MOs with 4 localized orbitals similar in shape to the sp3 hybrid orbitals predicted by VB theory. This is often adequate for σ (sigma) bonds, but it is not possible for the π (pi) orbitals. However, the delocalized MO picture is more appropriate for ionization and spectroscopic predictions. Upon ionization of methane, a single electron is taken from an MO which surrounds the whole molecule, weakening all 4 bonds equally. VB theory would predict that one electron is removed from an sp3 orbital, resulting in the need for resonance between four valence bond structures, each of which has a one-electron bond.
As in benzene, in substances such as beta carotene, chlorophyll, or heme, some electrons in the π (pi) orbitals are spread out in molecular orbitals over long distances in a molecule, giving rise to light absorption at lower energies (visible colors), a fact that is observed. This and other spectroscopic data for molecules are better explained in MO theory, with an emphasis on electronic states associated with multicenter orbitals, including mixing of orbitals premised on principles of orbital symmetry matching. The same MO principles also more naturally explain some electrical phenomena, such as the high electrical conductivity in the planar direction of the hexagonal atomic sheets that exist in graphite. In MO theory, "resonance" (a mixing and blending of VB bond states) is a natural consequence of symmetry. For example, in graphite, as in benzene, it is not necessary to invoke the sp2 hybridization and resonance of VB theory in order to explain electrical conduction. Instead, MO theory simply recognizes that some electrons in the graphite atomic sheets are completely delocalized over arbitrary distances, and reside in very large molecular orbitals that cover an entire graphite sheet; some electrons are thus as free to move and conduct electricity in the sheet plane as if they resided in a metal.
References
1. ^ Daintith, J. (2004). Oxford Dictionary of Chemistry. New York: Oxford University Press. ISBN 0-19-860918-3.
2. ^ Licker, Mark, J. (2004). McGraw-Hill Concise Encyclopedia of Chemistry. New York: McGraw-Hill. ISBN 0-07-143953-6.
3. ^ Coulson, Charles, A. (1952). Valence. Oxford at the Clarendon Press.
4. ^ a b "Spectroscopy, Molecular Orbitals, and Chemical Bonding" (pdf) (Press release). Nobel Lectures, Chemistry 1963-1970. Elsevier Publishing Company. 1972. http://nobelprize.org/nobel_prizes/chemistry/laureates/1966/mulliken-lecture.pdf.
5. ^ Hall, George G. "The Lennard-Jones Paper of 1929 and the Foundations of Molecular Orbital Theory". Advances in Quantum Chemistry 22. DOI:10.1016/S0065-3276(08)60361-5. ISBN 978-0-12-034822-0. ISSN 0065-3276. http://www.quantum-chemistry-history.com/LeJo_Dat/LJ-Hall1.htm
6. ^ Hückel, Erich (1934). "Theory of free radicals of organic chemistry". Trans. Faraday Soc. 30: 40–52. DOI:10.1039/TF9343000040.
7. ^ Coulson, C.A. (1938), "Self-consistent field for molecular hydrogen", Mathematical Proceedings of the Cambridge Philosophical Society 34 (2): 204–212, DOI:10.1017/S0305004100020089
8. ^ Hall, G.G. (7 August 1950). "The Molecular Orbital Theory of Chemical Valency. VI. Properties of Equivalent Orbitals" (pdf). Proc. Roy. Soc. A 202 (1070): 336–344. DOI:10.1098/rspa.1950.0104. http://rspa.royalsocietypublishing.org/content/202/1070/336.full.pdf+html.
9. ^ a b Jensen, Frank (1999). Introduction to Computational Chemistry. John Wiley and Sons. ISBN 978-0-471-98425-2.
10. ^ Introduction to Molecular Orbital Theory - Imperial College London
|
4a2f960bc9d786b9 | Thursday, May 3, 2012
Density Matrices and Density Operators
John von Neumann at Los Alamos in the 1940's
In physics graduate school, first year second semester, when students take Quantum Mechanics, they are taught the important subject of density operators and matrices. This may be over the head of most of you, my dear readers; it is a bit over mine (simply because I haven't studied it in detail ... yet), but we're basically talking about linear algebra. At a minimum, to understand this important subject that describes our real world, 5 semesters of college math (Calculus I-IV + Transforms), linear algebra, and wavefunctions should have been studied first. Then it's easy. :-)
The operative sentence is this:
"Just as the
Schrödinger equation describes how pure states evolve in time, the von Neumann equation (also known as Liouville-von Neumann equation) describes how a density operator evolves in time. In fact, the two equations are equivalent, in the sense that either can be derived from the other."
from Wikipedia:
A density matrix is a matrix that describes a quantum system in a mixed state, a statistical ensemble of several quantum states. (In contrast, a pure state is described by a single state vector.) The density matrix is the quantum-mechanical analogue of a phase-space probability measure (probability distribution of position and momentum) in classical statistical mechanics.
Mixed states arise in situations where there is classical uncertainty, i.e., when the experimenter does not know which particular states are being manipulated. (This should not be confused with quantum uncertainty, which dictates that even if the experimenter knows which states are being manipulated, the results of some measurements cannot be predicted.) Examples include a system in thermal equilibrium (at finite temperatures) or a system with an uncertain or randomly-varying preparation history (so one does not know which pure state the system is in). Also, if a quantum system has two or more subsystems that are entangled, then each subsystem must be treated as a mixed state even if the complete system is in a pure state. The density matrix is also a crucial tool in quantum decoherence theory.
The density matrix is a representation of a linear operator called the density operator. (The close relationship between matrices and operators is a basic concept in linear algebra.) In practice, the terms "density matrix" and "density operator" are often used interchangeably. Both matrix and operator are self-adjoint (or Hermitian), positive semi-definite, of trace one, and may be infinite-dimensional.[1] The formalism was introduced by John von Neumann[2] (and independently but less systematically by Lev Landau and Felix Bloch in 1927).[3][4]
Pure and mixed states
In quantum mechanics, a quantum system is represented by a state vector (or ket) $|\psi\rangle$. A quantum system with a state vector $|\psi\rangle$ is called a pure state. However, it is also possible for a system to be in a statistical ensemble of different state vectors: for example, there may be a 50% probability that the state vector is $|\psi_1\rangle$ and a 50% chance that the state vector is $|\psi_2\rangle$. This system would be in a mixed state. The density matrix is especially useful for mixed states, because any state, pure or mixed, can be characterized by a single density matrix.
A mixed state is different from a quantum superposition. In fact, a quantum superposition of pure states is another pure state, for example | \psi \rangle = (| \psi_1 \rangle + | \psi_2 \rangle)/\sqrt{2} .
Example: Light polarization
An example of pure and mixed states is light polarization. Photons can have two helicities, corresponding to two orthogonal quantum states, $|R\rangle$ (right circular polarization) and $|L\rangle$ (left circular polarization). A photon can also be in a superposition state, such as $(|R\rangle+|L\rangle)/\sqrt{2}$ (vertical polarization) or $(|R\rangle-|L\rangle)/\sqrt{2}$ (horizontal polarization). More generally, it can be in any state $\alpha|R\rangle+\beta|L\rangle$, corresponding to linear, circular, or elliptical polarization. If we pass $(|R\rangle+|L\rangle)/\sqrt{2}$ polarized light through a circular polarizer which allows either only $|R\rangle$ polarized light, or only $|L\rangle$ polarized light, the intensity would be reduced by half in both cases. This may make it seem like half of the photons are in state $|R\rangle$ and the other half in state $|L\rangle$. But this is not correct: both $|R\rangle$ and $|L\rangle$ photons are partly absorbed by a vertical linear polarizer, but the $(|R\rangle+|L\rangle)/\sqrt{2}$ light will pass through that polarizer with no absorption whatsoever.
However, unpolarized light (such as the light from an incandescent light bulb) is different from any state like \alpha|R\rangle+\beta|L\rangle (linear, circular, or elliptical polarization). Unlike linearly or elliptically polarized light, it passes through a polarizer with 50% intensity loss whatever the orientation of the polarizer; and unlike circularly polarized light, it cannot be made linearly polarized with any wave plate. Indeed, unpolarized light cannot be described as any state of the form \alpha|R\rangle+\beta|L\rangle. However, unpolarized light can be described perfectly by assuming that each photon is either | R \rangle with 50% probability or | L \rangle with 50% probability. The same behavior would occur if each photon was either vertically polarized with 50% probability or horizontally polarized with 50% probability.
Therefore, unpolarized light cannot be described by any pure state, but can be described as a statistical ensemble of pure states in at least two ways (the ensemble of half left and half right circularly polarized, or the ensemble of half vertically and half horizontally linearly polarized). These two ensembles are completely indistinguishable experimentally, and therefore they are considered the same mixed state. One of the advantages of the density matrix is that there is just one density matrix for each mixed state, whereas there are many statistical ensembles of pure states for each mixed state. Nevertheless, the density matrix contains all the information necessary to calculate any measurable property of the mixed state.
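A small numerical sketch of exactly this point (my own illustration, not from the quoted article): the two ensembles described above, half $|R\rangle$ / half $|L\rangle$ versus half vertically / half horizontally polarized, yield the identical density matrix $I/2$, while the pure superposition state does not.

```python
import numpy as np

R = np.array([1, 0], dtype=complex)            # |R>
L = np.array([0, 1], dtype=complex)            # |L>
V = (R + L) / np.sqrt(2)                       # (|R>+|L>)/sqrt(2), "vertical"
Hp = (R - L) / np.sqrt(2)                      # (|R>-|L>)/sqrt(2), "horizontal"

def proj(v):
    return np.outer(v, v.conj())               # |v><v|

rho_circ = 0.5 * proj(R) + 0.5 * proj(L)       # 50/50 R and L ensemble
rho_lin  = 0.5 * proj(V) + 0.5 * proj(Hp)      # 50/50 V and H ensemble
rho_pure = proj(V)                             # pure superposition state

print(np.allclose(rho_circ, rho_lin))              # True: same mixed state, I/2
print(np.allclose(rho_pure, rho_pure @ rho_pure))  # True: pure, rho^2 = rho
print(np.allclose(rho_circ, rho_circ @ rho_circ))  # False: mixed
```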
Where do mixed states come from? To answer that, consider how to generate unpolarized light. One way is to use a system in thermal equilibrium, a statistical mixture of enormous numbers of microstates, each with a certain probability (the Boltzmann factor), switching rapidly from one to the next due to thermal fluctuations. Thermal randomness explains why an incandescent light bulb, for example, emits unpolarized light. A second way to generate unpolarized light is to introduce uncertainty in the preparation of the system, for example, passing it through a birefringent crystal with a rough surface, so that slightly different parts of the beam acquire different polarizations. A third way to generate unpolarized light uses an EPR setup: A radioactive decay can emit two photons traveling in opposite directions, in the quantum state (|R,L\rangle+|L,R\rangle)/\sqrt{2}. The two photons together are in a pure state, but if you only look at one of the photons and ignore the other, the photon behaves just like unpolarized light.
More generally, mixed states commonly arise from a statistical mixture of the starting state (such as in thermal equilibrium), from uncertainty in the preparation procedure (such as slightly different paths that a photon can travel), or from looking at a subsystem entangled with something else.
Mathematical description
The state vector $|\psi\rangle$ of a pure state completely determines the statistical behavior of a measurement. As an example, take an observable quantity, and let A be the associated observable operator that has a representation on the Hilbert space $\mathcal{H}$ of the quantum system. For any real-valued function F defined on the real numbers,[5] suppose that F(A) is the result of applying F to the outcome of a measurement. The expectation value of F(A) is
$$\langle \psi | F(A) | \psi \rangle .$$
Now consider a mixed state prepared by statistically combining two different pure states | \psi \rangle and |\phi\rangle , with the associated probabilities p and 1 − p, respectively. The associated probabilities mean that the preparation process for the quantum system ends in the state |\psi\rangle with probability p and in the state |\phi\rangle with probability 1 − p.
It is not hard to show that the statistical properties of the observable for the system prepared in such a mixed state are completely determined. However, there is no state vector |\xi\rangle which determines this statistical behavior in the sense that the expectation value of F(A) is
$$\langle \xi | F(A) | \xi \rangle .$$
Nevertheless, there is a unique operator ρ such that the expectation value of F(A) can be written as
$$\operatorname{tr}[\rho F(A)] ,$$
where the operator ρ is the density operator of the mixed system. A simple calculation shows that the operator ρ for the above example is given by
$$\rho = p | \psi\rangle \langle \psi | + (1-p) | \phi\rangle \langle \phi | .$$
For a finite dimensional function space, the most general density operator is of the form
$$\rho = \sum_j p_j |\psi_j \rangle \langle \psi_j|$$
where the coefficients $p_j$ are non-negative and add up to one. This represents a statistical mixture of pure states. If the given system is closed, then one can think of a mixed state as representing a single system with an uncertain preparation history, as explicitly detailed above; or we can regard the mixed state as representing an ensemble of systems, i.e. a large number of copies of the system in question, where $p_j$ is the proportion of the ensemble being in the state $|\psi_j \rangle$. An ensemble is described by a pure state if every copy of the system in that ensemble is in the same state, i.e. it is a pure ensemble. If the system is not closed, however, then it is simply not correct to claim that it has some definite but unknown state vector, as the density operator may record physical entanglements to other systems.
Consider a quantum ensemble of size N with occupancy numbers $n_1, n_2, \ldots, n_k$ corresponding to the orthonormal states $|1\rangle, \ldots, |k\rangle$, respectively, where $n_1+\ldots+n_k = N$ and, thus, the coefficients $p_j = n_j/N$. For a pure ensemble, where all N particles are in state $|i\rangle$, we have $n_j = 0$ for all $j \neq i$, from which we recover the corresponding density operator $\rho = |i\rangle\langle i|$. However, the density operator of a mixed state does not capture all the information about a mixture; in particular, the coefficients $p_j$ and the kets $\psi_j$ are not recoverable from the operator $\rho$ without additional information. This non-uniqueness implies that different ensembles or mixtures may correspond to the same density operator. Such equivalent ensembles or mixtures cannot be distinguished by measurement of observables alone. This equivalence can be characterized precisely. Two ensembles $\psi$, $\psi'$ define the same density operator if and only if there is a matrix $U$ with
$$\sum_k u_{ik} u_{jk}^* = \delta_{ij},$$
i.e., $U$ is unitary and such that
$$| \psi_i'\rangle \sqrt{p_i'} = \sum_{j} u_{ij} | \psi_j\rangle \sqrt{p_j}.$$
This is simply a restatement of the following fact from linear algebra: for two square matrices $M$ and $N$, $M M^* = N N^*$ if and only if $M = NU$ for some unitary $U$. (See square root of a matrix for more details.) Thus there is a unitary freedom in the ket mixture or ensemble that gives the same density operator. However, if the kets in the mixture are orthonormal then the original probabilities $p_j$ are recoverable as the eigenvalues of the density matrix.
In operator language, a density operator is a positive semidefinite, Hermitian operator of trace 1 acting on the state space. A density operator describes a pure state if it is a rank one projection. Equivalently, a density operator ρ is a pure state if and only if
$$\rho = \rho^2,$$
i.e. the state is idempotent. This is true regardless of whether H is finite dimensional or not.
Geometrically, when the state is not expressible as a convex combination of other states, it is a pure state. The family of mixed states is a convex set and a state is pure if it is an extremal point of that set.
It follows from the spectral theorem for compact self-adjoint operators that every mixed state is an infinite convex combination of pure states. This representation is not unique. Furthermore, a theorem of Andrew Gleason states that certain functions defined on the family of projections and taking values in [0,1] (which can be regarded as quantum analogues of probability measures) are determined by unique mixed states. See quantum logic for more details.
Let A be an observable of the system, and suppose the ensemble is in a mixed state such that each of the pure states $|\psi_j\rangle$ occurs with probability $p_j$. Then the corresponding density operator is:
$$\rho = \sum_j p_j |\psi_j \rangle \langle \psi_j| .$$
The expectation value of the measurement can be calculated by extending from the case of pure states (see Measurement in quantum mechanics):
$$\langle A \rangle = \sum_j p_j \langle \psi_j|A|\psi_j \rangle = \operatorname{tr}[\rho A],$$
where tr denotes trace. Moreover, if A has spectral resolution
$$A = \sum_i a_i |a_i \rangle \langle a_i| = \sum_i a_i P_i,$$
where $P_i = |a_i \rangle \langle a_i|$, the corresponding density operator after the measurement is given by:
$$\rho' = \sum_i P_i \rho P_i.$$
Note that the above density operator describes the full ensemble after measurement. The sub-ensemble for which the measurement result was the particular value $a_i$ is described by the different density operator
$$\rho_i' = \frac{P_i \rho P_i}{\operatorname{tr}[\rho P_i]}.$$
This is true assuming that $|a_i\rangle$ is the only eigenket (up to phase) with eigenvalue $a_i$; more generally, $P_i$ in this expression would be replaced by the projection operator onto the eigenspace corresponding to eigenvalue $a_i$.
The von Neumann entropy S of a mixture can be expressed in terms of the eigenvalues of $\rho$ or in terms of the trace and logarithm of the density operator $\rho$. Since $\rho$ is a positive semi-definite operator, it has a spectral decomposition $\rho = \sum_i \lambda_i |\varphi_i\rangle\langle\varphi_i|$, where the $|\varphi_i\rangle$ are orthonormal vectors. Therefore the entropy of a quantum system with density matrix $\rho$ is
$$S = -\sum_i \lambda_i \ln \lambda_i = -\operatorname{tr}(\rho \ln \rho).$$
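A quick sketch of this formula in code (my own illustration): the entropy is computed from the eigenvalues of $\rho$, and it vanishes for a pure state while reaching $\ln 2$ for the maximally mixed qubit.

```python
import numpy as np

def von_neumann_entropy(rho):
    # S = -tr(rho ln rho), computed from the eigenvalues of rho.
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]          # the convention 0 * ln 0 = 0
    return float(-np.sum(lam * np.log(lam)))

pure  = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
mixed = np.eye(2) / 2                        # maximally mixed qubit

print(von_neumann_entropy(pure))    # 0.0
print(von_neumann_entropy(mixed))   # ln 2 ~ 0.693
```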
Also it can be shown that
$$S\left(\rho=\sum_i p_i\rho_i\right) = H(p_i) + \sum_i p_i S(\rho_i)$$
when the $\rho_i$ have orthogonal support, where $H(p)$ is the Shannon entropy. This entropy can increase but never decrease with a projective measurement; however, generalised measurements can decrease entropy.[6][7] The entropy of a pure state is zero, while that of a proper mixture is always greater than zero. Therefore a pure state may be converted into a mixture by a measurement, but a proper mixture can never be converted into a pure state. Thus the act of measurement induces a fundamental irreversible change on the density matrix; this is analogous to the "collapse" of the state vector, or wavefunction collapse. Perhaps counterintuitively, the measurement actually decreases information by erasing quantum interference in the composite system—cf. quantum entanglement and quantum decoherence.
(A subsystem of a larger system can be turned from a mixed to a pure state, but only by increasing the von Neumann entropy elsewhere in the system. This is analogous to how the entropy of an object can be lowered by putting it in a refrigerator: The air outside the refrigerator's heat-exchanger warms up, gaining even more entropy than was lost by the object in the refrigerator. See second law of thermodynamics. See Entropy in thermodynamics and information theory.)
The von Neumann equation for time evolution
Just as the Schrödinger equation describes how pure states evolve in time, the von Neumann equation (also known as Liouville-von Neumann equation) describes how a density operator evolves in time (in fact, the two equations are equivalent, in the sense that either can be derived from the other.) The von Neumann equation dictates that[8][9]
$$i \hbar \frac{\partial \rho}{\partial t} = [H,\rho] ,$$
where the brackets denote a commutator.
Note that this equation only holds when the density operator is taken to be in the Schrödinger picture, even though this equation seems at first look to emulate the Heisenberg equation of motion in the Heisenberg picture, with a crucial sign difference:
$$\frac{dA^{(H)}}{dt}=-\frac{i}{\hbar}[A^{(H)},H] ,$$
where A^{(H)}(t) is some Heisenberg picture operator; but in this picture the density matrix is not time-dependent, and the relative sign ensures that the time derivative of the expected value \langle A \rangle comes out the same as in the Schrödinger picture.
Taking the density operator to be in the Schrödinger picture makes sense, since it is composed of 'Schrödinger' kets and bras evolved in time, as per the Schrödinger picture. If the Hamiltonian is time-independent, this differential equation can be easily solved to yield
$$\rho(t) = e^{-i H t/\hbar} \rho(0) e^{i H t/\hbar}.$$
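A minimal numerical check of this closed-form solution (my own sketch, with an arbitrary Hermitian qubit Hamiltonian and ħ set to 1): conjugating ρ(0) by the propagator reproduces the von Neumann equation and preserves the trace.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.5], [0.5, -1.0]])         # arbitrary Hermitian qubit Hamiltonian
rho0 = np.array([[0.75, 0.25], [0.25, 0.25]])   # a valid (mixed) initial density matrix

def rho(t):
    U = expm(-1j * H * t / hbar)
    return U @ rho0 @ U.conj().T                # rho(t) = e^{-iHt/hbar} rho(0) e^{+iHt/hbar}

t, dt = 0.7, 1e-6
lhs = 1j * hbar * (rho(t + dt) - rho(t - dt)) / (2 * dt)   # i*hbar d(rho)/dt
rhs = H @ rho(t) - rho(t) @ H                              # [H, rho]
print(np.allclose(lhs, rhs, atol=1e-5))   # True: von Neumann equation satisfied
print(np.trace(rho(t)).real)              # 1.0: trace is preserved
```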
[edit]"Quantum Liouville", Moyal's equation
The density matrix operator may also be realized in phase space. Under the Wigner map, the density matrix transforms into the equivalent Wigner function,
$$W(x,p)\ \stackrel{\mathrm{def}}{=}\ \frac{1}{\pi\hbar}\int_{-\infty}^\infty \psi^*(x+y)\psi(x-y)e^{2ipy/\hbar}\,dy .$$
The equation for the time-evolution of the Wigner function is then the Wigner-transform of the above von Neumann equation,
$$\frac{\partial W(q,p,t)}{\partial t} = -\{\{W(q,p,t), H(q,p)\}\} ,$$
where H(q,p) is the Hamiltonian, and { { •,• } } is the Moyal bracket, the transform of the quantum commutator.
The evolution equation for the Wigner function is then analogous to that of its classical limit, the Liouville equation of classical physics. In the limit of vanishing Planck's constant ħ, W(q,p,t) reduces to the classical Liouville probability density function in phase space.
The classical Liouville equation can be solved using the method of characteristics for partial differential equations, the characteristic equations being Hamilton's equations. The Moyal equation in quantum mechanics similarly admits formal solutions in terms of quantum characteristics, predicated on the ∗−product of phase space, although, in actual practice, solution-seeking follows different methods.
Composite systems
The joint density matrix of a composite system of two systems A and B is described by $\rho_{AB}$. Then the subsystems are described by their reduced density operator
$$\rho_A = \operatorname{tr}_B\, \rho_{AB}.$$
Here $\operatorname{tr}_B$ is called the partial trace over system B. If A and B are two distinct and independent systems then $\rho_{AB}=\rho_{A}\otimes\rho_{B}$, which is a product state.
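A small sketch of the partial trace (my own illustration): for an entangled Bell state the joint state is pure, but tracing out system B leaves subsystem A maximally mixed, as in the two-photon EPR example discussed earlier.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) of two qubits A and B.
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(psi, psi.conj())              # pure joint state

# Partial trace over B: reshape to indices (A, B, A', B') and trace over B = B'.
rho_A = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(np.round(rho_A.real, 3))                  # [[0.5, 0], [0, 0.5]]: maximally mixed
print(np.allclose(rho_AB, rho_AB @ rho_AB))     # True: the joint state is pure
print(np.allclose(rho_A, rho_A @ rho_A))        # False: the subsystem is mixed
```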
C*-algebraic formulation of states
It is now generally accepted that the description of quantum mechanics in which all self-adjoint operators represent observables is untenable.[10][11] For this reason, observables are identified to elements of an abstract C*-algebra A (that is one without a distinguished representation as an algebra of operators) and states are positive linear functionals on A. However, by using the GNS construction, we can recover Hilbert spaces which realize A as a subalgebra of operators.
Geometrically, a pure state on a C*-algebra A is a state which is an extreme point of the set of all states on A. By properties of the GNS construction these states correspond to irreducible representations of A.
The states of the C*-algebra of compact operators K(H) correspond exactly to the density operators and therefore the pure states of K(H) are exactly the pure states in the sense of quantum mechanics.
The C*-algebraic formulation can be seen to include both classical and quantum systems. When the system is classical, the algebra of observables become an abelian C*-algebra. In that case the states become probability measures, as noted in the introduction.
Notes and references
1. ^ Fano, Ugo (1957), "Description of States in Quantum Mechanics by Density Matrix and Operator Techniques", Reviews of Modern Physics 29: 74–93, Bibcode 1957RvMP...29...74F, doi:10.1103/RevModPhys.29.74.
2. ^ von Neumann, John (1927), "Wahrscheinlichkeitstheoretischer Aufbau der Quantenmechanik", Göttinger Nachrichten 1: 245–272.
3. ^ Landau, L. D. (1927), "Das Dämpfungsproblem in der Wellenmechanik", Zeitschrift für Physik 45 (5–6): 430–441, Bibcode 1927ZPhy...45..430L, doi:10.1007/BF01343064
4. ^ Landau, L. D., and Lifshitz, E. M. (1977), Quantum Mechanics, Non-Relativistic Theory: Volume 3, Oxford: Pergamon Press, pp. 41, ISBN 0-08-017801-4
5. ^ Technically, F must be a Borel function
6. ^ Nielsen, Michael; Chuang, Isaac (2000), Quantum Computation and Quantum Information, Cambridge University Press, ISBN 978-0-521-63503-5. Chapter 11: Entropy and information, Theorem 11.9, "Projective measurements cannot decrease entropy"
7. ^ Everett, Hugh (1973), "The Theory of the Universal Wavefunction (1956) Appendix I. "Monotone decrease of information for stochastic processes"", The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press, pp. 128–129, ISBN 978-0-691-08131-1
8. ^ The theory of open quantum systems, by Breuer and Petruccione, p110.
9. ^ Statistical mechanics, by Schwabl, p16.
10. ^ See appendix, Mackey, George Whitelaw (1963), Mathematical Foundations of Quantum Mechanics, Dover Books on Mathematics, New York: Dover Publications, ISBN 978-0-486-43517-6
11. ^ Emch, Gerard G. (1972), Algebraic methods in statistical mechanics and quantum field theory, Wiley-Interscience, ISBN 978-0-471-23900-0
|
3c883aa235e06d8e |
February 28, 2006 by James N. Gardner
Originally published in the International Journal of Astrobiology May 2005. Reprinted on February 28, 2006.
Goal 7 of the NASA Astrobiology Roadmap states: “Determine how to recognize signatures of life on other worlds and on early Earth. Identify biosignatures that can reveal and characterize past or present life in ancient samples from Earth, extraterrestrial samples measured in situ, samples returned to Earth, remotely measured planetary atmospheres and surfaces, and other cosmic phenomena.” The cryptic reference to “other cosmic phenomena” would appear to be broad enough to include the possible identification of biosignatures embedded in the dimensionless constants of physics. The existence of such a set of biosignatures—a life-friendly suite of physical constants—is a retrodiction of the Selfish Biocosm (SB) hypothesis. This hypothesis offers an alternative to the weak anthropic explanation of our indisputably life-friendly cosmos favored by (1) an emerging alliance of M-theory-inspired cosmologists and advocates of eternal inflation like Linde and Weinberg, and (2) supporters of the quantum theory-inspired sum-over-histories cosmological model offered by Hartle and Hawking. According to the SB hypothesis, the laws and constants of physics function as the cosmic equivalent of DNA, guiding a cosmologically extended evolutionary process and providing a blueprint for the replication of new life-friendly progeny universes.
The notion that we inhabit a universe whose laws and physical constants are fine-tuned in such a way as to make it hospitable to carbon-based life is an old idea (Gardner, 2003). The so-called “anthropic” principle comes in at least four principal versions (Barrow and Tipler, 1988) that represent fundamentally different ontological perspectives. For instance, the “weak anthropic principle” is merely a tautological statement that since we happen to inhabit this particular cosmos it must perforce be life-friendly or else we would not be here to observe it. As Vilenkin put it recently (Vilenkin, 2004), “the ‘anthropic’ principle, as stated above, hardly deserves to be called a principle: it is trivially true.” By contrast, the “participatory anthropic principle” articulated by Wheeler and dubbed “it from bit” (Wheeler, 1996) is a radical extrapolation from the Copenhagen interpretation of quantum physics and a profoundly counterintuitive assertion that the very act of observing the universe summons it into existence.
All anthropic cosmological interpretations share a common theme: a recognition that key constants of physics (as well as other physical aspects of our cosmos such as its dimensionality) appear to exhibit a mysterious fine-tuning that optimizes their collective bio-friendliness. Rees noted (Rees, 2000) that virtually every aspect of the evolution of the universe—from the birth of galaxies to the origin of life on Earth—is sensitively dependent on the precise values of seemingly arbitrary constants of nature like the strength of gravity, the number of extended spatial dimensions in our universe (three of the ten posited by M-theory), and the initial expansion speed of the cosmos following the Big Bang. If any of these physical constants had been even slightly different, life as we know it would have been impossible:
The [cosmological] picture that emerges—a map in time as well as in space—is not what most of us expected. It offers a new perspective on a how a single “genesis event” created billions of galaxies, black holes, stars and planets, and how atoms have been assembled—here on Earth, and perhaps on other worlds—into living beings intricate enough to ponder their origins. There are deep connections between stars and atoms, between the cosmos and the microworld…. Our emergence and survival depend on very special “tuning” of the cosmos—a cosmos that may be even vaster than the universe that we can actually see.
As stated recently by Smolin (Smolin, 2004), the challenge is to provide a genuinely scientific explanation for what he terms the “anthropic observation”:
The anthropic observation: Our universe is much more complex than most universes with the same laws but different values of the parameters of those laws. In particular, it has a complex astrophysics, including galaxies and long lived stars, and a complex chemistry, including carbon chemistry. These necessary conditions for life are present in our universe as a consequence of the complexity which is made possible by the special values of the parameters.
There is good evidence that the anthropic observation is true. Why it is true is a puzzle that science must solve.
It is a daunting puzzle indeed. The strangely (and apparently arbitrarily) biophilic quality of the physical laws and constants poses, in Greene’s view, the deepest question in all of science (Greene, 2004). In the words of Davies (Gardner, 2003), it represents “the biggest of the Big Questions: why is the universe bio-friendly?”
Modern History of Anthropic Reasoning
Modern statements of the cosmological anthropic principle date from the publication of a landmark book by Henderson in 1913 entitled The Fitness of the Environment (Henderson, 1913). Henderson’s book was an extended reflection on the curious fact that there are particular substances present in the environment—preeminently water—whose peculiar qualities rendered the environment almost preternaturally suitable for the origin, maintenance, and evolution of organic life. Indeed, the strangely life-friendly qualities of these materials led Henderson to the view that “we were obliged to regard this collocation of properties in some intelligible sense a preparation for the process of planetary evolution…. Therefore the properties of the elements must for the present be regarded as possessing a teleological character.”
Thoroughly modern in outlook, Henderson dismissed this apparent evidence that inanimate nature exhibited a teleological character as indicative of divine design or purpose. Indeed, he rejected the notion that nature’s seemingly teleological quality was in any way inconsistent with Darwin’s theory of evolution through natural selection. On the contrary, he viewed the bio-friendly character of the inanimate natural environment as essential to the optimal operation of the evolutionary forces in the biosphere. Absent the substrate of a superbly “fit” inanimate environment, Henderson contended, Darwinian evolution could never have achieved what it has in terms of species multiplication and diversification.
The mystery of why the physical qualities of the inanimate universe happened to be so oddly conducive to life and biological evolution remained just that for Henderson—an impenetrable mystery. The best he could do to solve the puzzle was to speculate that the laws of chemistry were somehow fine-tuned in advance by some unknown cosmic evolutionary mechanism to meet the future needs of a living biosphere.
Henderson’s iconoclastic vision was far ahead of its time. His potentially revolutionary book was largely ignored by his contemporaries or dismissed as a mere tautology. Of course there should be a close match-up between the physical requirements of life and the physical world that life inhabits, contemporary skeptics pointed out, since life evolved to survive the very challenges presented by that pre-organic world and to take advantage of the biochemical opportunities it offered.
While lacking broad influence at the time, Henderson’s pioneering vision proved to be the precursor to modern formulations of the cosmological anthropic principle. One of the first such formulations was offered by British astronomer Fred Hoyle. A storied chapter in the history of the principle is the oft-told tale of Hoyle’s prediction of the details of the triple-alpha process (Mitton 2005). This prediction, which seems to qualify as the first falsifiable implication to flow from an anthropic hypothesis, involves the details of the process by which the element carbon (widely viewed as the essential element of abiotic precursor polymers capable of autocatalyzing the emergence of living entities) emerges through stellar nucleosynthesis. As noted by Livio (Livio, 2003):
Carbon features in most anthropic arguments. In particular, it is often argued that the existence of an excited state of the carbon nucleus is a manifestation of fine-tuning of the constants of nature that allowed for the appearance of carbon-based life. Carbon is formed through the triple-alpha process in two steps. In the first, two alpha particles form an unstable (lifetime ~10^-16 s) 8Be. In the second, a third alpha particle is captured, via 8Be(α,γ)12C. Hoyle argued that in order for the 3α reaction to proceed at a rate sufficient to produce the observed cosmic carbon, a resonant level must exist in 12C, a few hundred keV above the 8Be+4He threshold. Such a level was indeed found experimentally.
Other chapters in the modern history of the anthropic principle are treated comprehensively by Barrow and Tipler (Barrow and Tipler, 1988) and will not be revisited here.
The New Urgency of Anthropic Investigation
Two recent developments have imparted a renewed sense of urgency to investigations of the anthropic qualities of our cosmos. The first is the discovery that the value of dark energy density is exceedingly small but not quite zero—an apparent happenstance, unpredictable from first principles, with profound implications for the bio-friendly quality of our universe. As noted recently by Goldsmith (Goldsmith, 2004):
A relatively straightforward calculation [based on established principles of theoretical physics] does yield a theoretical value for the cosmological constant, but that value is greater than the measured one by a factor of about 10^120—probably the largest discrepancy between theory and observation science has ever had to bear.
If the cosmological constant had a smaller value than that suggested by recent observations, it would cause no trouble (just as one would expect, remembering the happy days when the constant was thought to be zero). But if the constant were a few times larger than it is now, the universe would have expanded so rapidly that galaxies could not have endured for the billions of years necessary to bring forth complex forms of life.
The second development is the realization that M-theory—arguably the most promising contemporary candidate for a theory capable of yielding a deep synthesis of relativity and quantum physics—permits, in Bjorken’s phrase (Bjorken, 2004), “a variety of string vacua, with different standard-model properties.”
M-theorists had initially hoped that their new paradigm would be “brittle” in the sense of yielding a single mathematically unavoidable solution that uniquely explained the seemingly arbitrary parameters of the Standard Model. As Susskind has put it (Susskind, 2003):
The world-view shared by most physicists is that the laws of nature are uniquely described by some special action principle that completely determines the vacuum, the spectrum of elementary particles, the forces and the symmetries. Experience with quantum electrodynamics and quantum chromodynamics suggests a world with a small number of parameters and a unique ground state. For the most part, string theorists bought into this paradigm. At first it was hoped that string theory would be unique and explain the various parameters that quantum field theory left unexplained.
This hope has been dashed by the recent discovery that the number of different solutions permitted by M-theory (which correspond to different values of Standard Model parameters) is, in Susskind’s words, “astronomical, measured not in millions or billions but in googols or googolplexes.” This development seems to deprive our most promising new theory of fundamental physics of the power to uniquely predict the emergence of anything remotely resembling our universe. As Susskind puts it, the picture of the universe that is emerging from the deep mathematical recesses of M-theory is not an “elegant universe” but rather a Rube Goldberg device, cobbled together by some unknown process in a supremely improbable manner that just happens to render the whole ensemble fit for life. In the words of University of California theoretical physicist Steve Giddings, “No longer can we follow the dream of discovering the unique equations that predict everything we see, and writing them on a single page. Predicting the constants of nature becomes a messy environmental problem. It has the complications of biology.”[1]
Two Contemporary Restatements of the Weak Anthropic Principle: Eternal Inflation Plus M-Theory and Many-Worlds Quantum Cosmology
There have been two principal approaches to the task of enlisting the weak anthropic principle to explain the mysteriously small (and thus bio-friendly) value of the density of dark energy and the apparent happenstance by which our bio-friendly universe was selected from the enormously large “landscape” of possible solutions permitted by M-theory, only a tiny fraction of which correspond to anything resembling the Standard Model prevalent in our cosmos.
Eternal Inflation Meets M-Theory
The first approach, favored by Susskind (Susskind, 2003), Linde (Linde, 2002), Weinberg (Weinberg, 1999), and Vilenkin (Vilenkin, 2004), among others, overlays the model of eternal inflation with the key assumption that M-theory-permitted solutions (corresponding to different values of Standard Model parameters) and dark energy density values will vary randomly from bubble universe to bubble universe within an eternally expanding ensemble variously termed a multiverse or a meta-universe. Generating a life-friendly cosmos is simply a matter of randomly reshuffling the set of permissible parameters and values a sufficient number of times until a particular Big Bang yields, against odds of perhaps a googolplex-to-one, a permutation that just happens to possess the right mix of Standard Model parameters to be bio-friendly.
Sum-Over-Histories Quantum Cosmological Model
The second approach invokes a quantum theory-derived sum-over-histories cosmological model inspired by Everett’s “many worlds” interpretation of quantum physics. This approach, which has been prominently embraced by Hawking (Hawking and Hertog, 2002), was summarized as follows by Hogan (Hogan, 2004):
In the original formulation of quantum mechanics, it was said that an observation collapsed a wavefunction to one of the eigenstates of the observed quantity. The modern view is that the cosmic wavefunction never collapses, but only appears to collapse from the point of view of observers who are part of the wavefunction. When Schrödinger’s cat lives or dies, the branch of the wavefunction with the dead cat also contains observers who are dealing with a dead cat, and the branch with the live cat also contains observers who are petting a live one.
Although this is sometimes called the “Many Worlds” interpretation of quantum mechanics, it is really about having just one world, one wavefunction, obeying the Schrödinger equation: the wavefunction evolves linearly from one time to the next based on its previous state.
Anthropic selection in this sense is built into physics at the most basic level of quantum mechanics. Selection of a wavefunction branch is what drives us into circumstances in which we thrive. Viewed from a disinterested perspective outside the universe, it looks like living beings swim like salmon up their favorite branches of the wavefunction, chasing their favorite places.
Hawking and Hertog (Hawking and Hertog, 2002) have explicitly characterized this “top down” cosmological model as a restatement of the weak anthropic principle:
We have argued that because our universe has a quantum origin, one must adopt a top down approach to the problem of initial conditions in cosmology, in which histories that contribute to the path integral, depend on the observable being measured. There is an amplitude for empty flat space, but it is not of much significance. Similarly, the other bubbles in an eternally inflating spacetime are irrelevant. They are to the future of our past light cone, so they don’t contribute to the action for observables and should be excised by Ockham’s razor. Therefore, the top down approach is a mathematical formulation of the weak anthropic principle. Instead of starting with a universe and asking what a typical observer would see, one specifies the amplitude of interest.
Critique of Contemporary Restatements of the Weak Anthropic Principle
Apart from the objections on the part of those who oppose in principle any use of the anthropic principle in cosmology, there are at least three reasons why both the Hawking/Hogan and the Susskind/Linde/Weinberg restatements of the weak anthropic principle are objectionable.
First, both approaches appear to be resistant (at the very least) to experimental testing. Universes spawned by Big Bangs other than our own are inaccessible from our own universe, at least with the experimental techniques currently available to science. So too are quantum wavefunction branches that we cannot, in principle, observe. Accordingly, both approaches appear to be untestable—perhaps untestable in principle. For this reason, Smolin recently argued (Smolin, 2004) “not only is the Anthropic Principle not science, its role may be negative. To the extent that the Anthropic Principle is espoused to justify continued interest in unfalsifiable theories, it may play a destructive role in the progress of science.”
Second, both approaches violate the mediocrity principle. The mediocrity principle, a mainstay of scientific theorizing since Copernicus, is a statistically based rule of thumb that, absent contrary evidence, a particular sample (Earth, for instance, or our particular universe) should be assumed to be a typical example of the ensemble of which it is a part. The Susskind/Linde/Weinberg approach, in particular, flouts this principle. Their approach simply takes refuge in a brute, unfathomable mystery—the conjectured lucky roll of the dice in a crap game of eternal inflation—and declines to probe seriously into the possibility of a naturalistic cosmic evolutionary process that has the capacity to yield a life-friendly set of physical laws and constants on a nonrandom basis.
Third, both approaches extravagantly inflate the probabilistic resources required to explain the phenomenon of a life-friendly cosmos. (Think of a googolplex of monkeys typing away randomly until one of them, by pure chance, accidentally composes a set of equations that correspond to the Standard Model.) This should be a hint that something fundamental is being overlooked and that there may exist an unknown natural process, perhaps functionally akin in some manner to terrestrial evolution, capable of effecting the emergence and prolongation of physical states of nature that are, in the abstract, vanishingly improbable.
The Darwinian Precedent
Hogan (Hogan, 2004) has analogized the quantum theory-inspired sum-over-histories version of the weak anthropic principle to Darwinian theory:
This blending of empirical cosmology and fundamental physics is reminiscent of our Darwinian understanding of the tree of life. The double helix, the four-base codon alphabet and the triplet genetic code for amino acids, any particular gene for a protein in a particular organism—all are frozen accidents of evolutionary history. It is futile to try to understand or explain these aspects of life, or indeed any relationships in biology, without referring to the way the history of life unfolded. In the same way that (in Dobzhansky’s phrase), “nothing in biology makes sense except in the light of evolution,” physics in these models only makes sense in the light of cosmology.
Ironically, Hogan misses the key point that neither the branching wavefunction nor the eternal inflation-plus-M-theory versions of the weak anthropic principle hypothesize the existence of anything corresponding to the main action principle of Darwin’s theory: natural selection. Both restatements of the weak anthropic principle are analogous, not to Darwin’s approach, but rather to a mythical alternative history in which Darwin, contemplating the storied tangled bank (the arresting visual image with which he concludes The Origin of Species), had confessed not a magnificent obsession with gaining an understanding of the mysterious natural processes that had yielded “endless forms most beautiful and most wonderful,” but rather a smug satisfaction that of course the earthly biosphere must have somehow evolved in a just-so manner mysteriously friendly to humans and other currently living species, or else Darwin and other humans would not be around to contemplate it.
Indeed, the situation that confronts cosmologists today is reminiscent of that which faced biologists before Darwin propounded his revolutionary theory of evolution through natural selection. Darwin confronted the seemingly miraculous phenomenon of a fine-tuned natural order in which every creature and plant appeared to occupy a unique and well-designed niche. Refusing to surrender to the brute mystery posed by the appearance of nature’s design, Darwin masterfully deployed the art of metaphor[2] to elucidate a radical hypothesis—the origin of species through natural selection—that explained the apparent miracle as a natural phenomenon.
A significant lesson drawn from Darwin’s experience is important to note at this point. Answering the question of why the most eminent geologists and naturalists had, until shortly before publication of The Origin of Species, disbelieved in the mutability of species, Darwin responded that this false conclusion was “almost inevitable as long as the history of the world was thought to be of short duration.” It was geologist Charles Lyell’s speculations on the immense age of Earth that provided the essential conceptual framework for Darwin’s new theory. Lyell’s vastly expanded stretch of geological time provided an ample temporal arena in which the forces of natural selection could sculpt and reshape the species of Earth and achieve nearly limitless variation.
The central point for purposes of this paper is that collateral advances in sciences seemingly far removed from cosmology (complexity theory and evolutionary theory among them) can help dissipate the intellectual limitations imposed by common sense and naïve human intuition. And, in an uncanny reprise of the Lyell/Darwin intellectual synergy, it is a realization of the vastness of time and history that gives rise to the novel theoretical possibility to be discussed subsequently. Only in this instance, it is the vastness of future time and future history that is of crucial importance. In particular, sharp attention must be paid to the key conclusion of Wheeler: most of the time available for life and intelligence to achieve their ultimate capabilities lies in the distant cosmic future, not in the cosmic past. As Tipler (Tipler, 1994) has stated, “Almost all of space and time lies in the future. By focusing attention only on the past and present, science has ignored almost all of reality. Since the domain of scientific study is the whole of reality, it is about time science decided to study the future evolution of the universe.” The next section of this paper describes an attempt to heed these admonitions.
The Selfish Biocosm Hypothesis
In a paper published in Complexity (Gardner, 2000), I first advanced the hypothesis that the anthropic qualities which our universe exhibits might be explained as incidental consequences of a cosmic replication cycle in which the emergence of a cosmologically extended biosphere could conceivably supply two of the logically essential elements of self-replication identified by von Neumann (von Neumann, 1948): a controller and a duplicating device. The hypothesis proposed in that paper was an attempt to extend and refine Smolin’s conjecture (Smolin, 1997) that the majority of the anthropic qualities of the universe can be explained as incidental consequences of a process of cosmological replication and natural selection (CNS) whose utility function is black hole maximization. Smolin’s conjecture differs crucially from the concept of eternal inflation advanced by Linde (Linde, 1998) in that it proposes a cosmological evolutionary process with a specific and discernible utility function—black hole maximization. It is this aspect of Smolin’s conjecture rather than the specific utility function he advocates that renders his theoretical approach genuinely novel.
As demonstrated previously (Rees, 1997; Baez, 1998), Smolin’s conjecture suffers from two evident defects: (1) the fundamental physical laws and constants do not, in fact, appear to be fine-tuned to favor black hole maximization and (2) no mechanism is proposed corresponding to two logically required elements of any von Neumann self-replicating automaton: a controller and a duplicator.[3] The latter are essential elements of any replicator system capable of Darwinian evolution, as noted by Dawkins (Gardner, 2000) in a critique of Smolin’s conjecture:
Note that any Darwinian theory depends on the prior existence of the strong phenomenon of heredity. There have to be self-replicating entities (in a population of such entities) that spawn daughter entities more like themselves than the general population.
Theories of cosmological eschatology previously articulated (Kurzweil, 1999; Wheeler, 1996; Dyson, 1988) predict that the ongoing process of biological and technological evolution is sufficiently robust and unbounded that, in the far distant future, a cosmologically extended biosphere could conceivably exert a global influence on the physical state of the cosmos. A related set of insights from complexity theory (Gardner, 2000) indicates that the process of emergence resulting from such evolution is essentially unbounded.
A synthesis of these two sets of insights yielded the two key elements of the Selfish Biocosm (SB) hypothesis. The essence of that synthesis is that the ongoing process of biological and technological evolution and emergence could conceivably function as a von Neumann controller and that a cosmologically extended biosphere could, in the very distant future, function as a von Neumann duplicator in a hypothesized process of cosmological replication.
In a paper published in Acta Astronautica (Gardner, 2001) I suggested that a falsifiable implication of the SB hypothesis is that the process of the progression of the cosmos through critical epigenetic thresholds in its life cycle, while perhaps not strictly inevitable, is relatively robust. One such critical threshold is the emergence of human-level and higher intelligence, which is essential to the eventual scaling up of biological and technological processes to the stage at which those processes could conceivably exert a global influence on the state of the cosmos. Four specific tests of the robustness of the emergence of human-level and higher intelligence were proposed.
In a subsequent paper published in the Journal of the British Interplanetary Society (Gardner, 2002) I proposed that an additional falsifiable implication of the SB hypothesis is that there exists a plausible final state of the cosmos that exhibits maximal computational potential. This predicted final state appeared to be consistent with both the modified ekpyrotic cyclic universe scenario (Khoury, Ovrut, Seiberg, Steinhardt, and Turok, 2001; Steinhardt and Turok, 2001) and with Lloyd’s description (Lloyd, 2000) of the physical attributes of the ultimate computational device: a computer as powerful as the laws of physics will allow.
Key Retrodiction of the SB Hypothesis: A Life-Friendly Cosmos
The central assertions of the SB hypothesis are: (1) that highly evolved life and intelligence play an essential role in a hypothesized process of cosmic replication and (2) that the peculiarly life-friendly laws and physical constants that prevail in our universe—an extraordinarily improbable ensemble that Pagels dubbed the cosmic code (Pagels, 1983)—play a cosmological role functionally equivalent to that of DNA in an earthly organism: they provide a recipe for cosmic ontogeny and a blueprint for cosmic reproduction. Thus, a key retrodiction of the SB hypothesis is that the suite of physical laws and constants that prevail in our cosmos will, in fact, be life-friendly. Moreover—and alone among the various cosmological scenarios offered to explain the phenomenon of a bio-friendly universe—the SB hypothesis implies that this suite of laws and constants comprise a robust program that will reliably generate life and advanced intelligence just as the DNA of a particular species constitutes a robust program that will reliably generate individual organisms that are members of that particular species. Indeed, because the hypothesis asserts that sufficiently evolved intelligent life serves as a von Neumann duplicator in a putative process of cosmological replication, the biophilic quality of the suite emerges as a retrodicted biosignature of the putative duplicator and duplication process within the meaning of Goal 7 of the NASA Astrobiology Roadmap, which provides in pertinent part:
Does this retrodiction qualify as a valid scientific test of the SB hypothesis? I propose that it may, provided two additional qualifying criteria are satisfied:
• The underlying hypothesis must enjoy consilience[4] with mainstream scientific paradigms and conjectural frameworks (in particular, complexity theory, evolutionary theory, M-theory, and theoretically acceptable conjectures by mainstream cosmologists concerning the feasibility, at least in principle, of “baby universe” fabrication); and
• The retrodiction must be augmented by falsifiable predictions of phenomena implied by the SB hypothesis but not yet observed.
Retrodiction as a Tool for Testing Scientific Hypotheses
There is a lively literature debating the propriety of employing retrodiction as a tool for testing scientific hypotheses (Cleland, 2002; Cleland, 2001; Gee, 1999; Oldershaw, 1988). Oldershaw (Oldershaw, 1988) has discussed the use of falsifiable retrodiction (as opposed to falsifiable prediction) as a tool of scientific investigation:
A second type of prediction is actually not a prediction at all, but rather a “retrodiction.” For example, the anomalous advance of the perihelion of Mercury had been a tiny thorn in the side of Newtonian gravitation long before general relativity came upon the scene. Einstein found that his theory correctly “predicted,” actually retrodicted, the numerical value of the perihelion advance. The explanation of the unexpected result of the Michelson-Morley experiment (constancy of the velocity of light) in terms of special relativity is another example.
As he went on to note, “Retrodictions usually represent falsification tests; the theory is probably wrong if it fails the test, but should not necessarily be considered right if it passes the test since it does not involve a definitive prediction.” Despite their legitimacy as falsification tests of hypotheses, falsifiable retrodictions are qualitatively inferior to falsifiable predictions, in Oldershaw’s view:
But, in the final analysis, only true definitive predictions can justify the promotion of a theory from being viewed as one of many plausible hypotheses to being recognized as the best available approximation of how nature actually works. A theory that cannot generate definitive predictions, or whose definitive predictions are impossible to test, can be regarded as inherently untestable.
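Oldershaw’s perihelion example can be made concrete. The sketch below evaluates the standard general-relativistic per-orbit shift, 6πGM/[a(1−e²)c²], with rounded orbital data for Mercury; it is meant only to illustrate how sharply such a retrodiction can be checked:

```python
import math

# Retrodicted general-relativistic perihelion advance of Mercury (rounded inputs).
GM_sun = 1.327e20        # m^3/s^2, gravitational parameter of the Sun
a      = 5.79e10         # m, semi-major axis of Mercury's orbit
e      = 0.2056          # orbital eccentricity
c      = 2.998e8         # m/s
period_days = 87.97      # Mercury's orbital period

shift_per_orbit = 6 * math.pi * GM_sun / (a * (1 - e**2) * c**2)   # radians
orbits_per_century = 100 * 365.25 / period_days
arcsec = shift_per_orbit * orbits_per_century * (180 / math.pi) * 3600
print("predicted advance: %.1f arcsec per century" % arcsec)   # ~43, matching observation
```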
A less sympathetic view concerning the validity of retrodiction as a scientific tool was offered by Gee (Gee, 1999), who dismissed the legitimacy of all historical hypotheses on the ground that “they can never be tested by experiment, and so they are unscientific…. No science can ever be historical.” This viewpoint, in turn, has been challenged by Cleland (Cleland, 2001) who contends that “when it comes to testing hypotheses, historical science is not inferior to classical experimental science” but simply exploits the available evidence in a different way:
There [are] fundamental differences in the methodology used by historical and experimental scientists. Experimental scientists focus on a single (sometimes complex) hypothesis, and the main research activity consists in repeatedly bringing about the test conditions specified by the hypothesis, and controlling for extraneous factors that might produce false positives and false negatives. Historical scientists, in contrast, usually concentrate on formulating multiple competing hypotheses about particular past events. Their main research efforts are directed at searching for a smoking gun, a trace that sets apart one hypothesis as providing a better causal explanation (for the observed traces) than do the others. These differences in methodology do not, however, support the claim that historical science is methodologically inferior, because they reflect an objective difference in the evidential relations at the disposal of historical and experimental researchers for evaluating their hypotheses.
Cleland’s approach has the merit of preserving as “scientific” some of the most important hypotheses advanced in such historical fields of inquiry as geology, evolutionary biology, cosmology, paleontology, and archaeology. As Cleland has noted (Cleland, 2002):
Experimental research is commonly held up as the paradigm of successful (a.k.a. good) science. The role classically attributed to experiment is that of testing hypotheses in controlled laboratory settings. Not all scientific hypotheses can be tested in this manner, however. Historical hypotheses about the remote past provide good examples. Although fields such as paleontology and archaeology provide the familiar examples, historical hypotheses are also common in geology, biology, planetary science, astronomy, and astrophysics. The focus of historical research is on explaining existing natural phenomena in terms of long past causes. Two salient examples are the asteroid-impact hypothesis for the extinction of the dinosaurs, which explains the fossil record of the dinosaurs in terms of the impact of a large asteroid, and the “big-bang” theory of the origin of the universe, which explains the puzzling isotropic three-degree background radiation in terms of a primordial explosion. Such work is significantly different from making a prediction and then artificially creating a phenomenon in a laboratory.
In a paper presented to the 2004 Astrobiology Science Conference (Cleland, 2004), Cleland extended this analytic framework to the consideration of putative biosignatures as evidence of the past or present existence of extraterrestrial life. Acknowledging that “because biosignatures represent indirect traces (effects) of life, much of the research will be historical (vs. experimental) in character even in cases where the traces represent recent effects of putative extant organisms,” Cleland concluded that it was appropriate to employ the methodology that characterizes successful historical research:
Successful historical research is characterized by (1) the proliferation of alternative competing hypotheses in the face of puzzling evidence and (2) the search for more evidence (a “smoking gun”) to discriminate among them.
From the perspective of the evidentiary standards applicable to historical science in general and astrobiology in particular, the key retrodiction of the SB hypothesis—that the fundamental constants of nature that comprise the Standard Model as well as other physical features of our cosmos (including the number of extended physical dimensions and the extremely low value of dark energy) will be collectively bio-friendly—appears to constitute a legitimate scientific test of the hypothesis. Moreover, within the framework of Goal 7 of the NASA Astrobiology Roadmap, the retrodicted biophilic quality of our universe appears, under the SB hypothesis, to constitute a possible biosignature.
Caution Regarding the Use of Retrodiction to Test the SB Hypothesis
Because the SB hypothesis is radically novel and because the use of falsifiable retrodiction as a tool to test such an hypothesis creates at least the appearance of a “confirmatory argument resembl[ing] just-so stories (Rudyard Kipling’s fanciful stories, e.g., how leopards got their spots)” (Cleland, 2001), it is important (as noted previously) that two additional criteria be satisfied before this retrodiction can be considered a legitimate test of the hypothesis:
• The SB hypothesis must generate falsifiable predictions as well as falsifiable retrodictions; and
• The SB hypothesis must be consilient with key theoretical constructs in such “adjoining” areas of scientific investigation as M-theory, cosmogenesis, complexity theory, and evolutionary theory.
As argued at length elsewhere (Gardner, 2003), the SB hypothesis is both consilient with central concepts in these “adjoining” fields and fully capable of generating falsifiable predictions.
Concluding Remarks
In his book The Fifth Miracle (Davies, 1999) Davies offered this interpretation of NASA’s view that the presence of liquid water on an alien world was a reliable marker of a life-friendly environment:
An emerging consensus among mainstream physicists and cosmologists is that the particular universe we inhabit appears to confirm what Smolin calls the “anthropic observation”: the laws and constants of nature seem to be fine-tuned, with extraordinary precision and against enormous odds, to favor the emergence of life and its byproduct, intelligence. As Dyson put it eloquently more than two decades ago (Dyson, 1979):
Why this should be so remains a profound mystery. Indeed, the mystery has deepened considerably with the recent discovery of the inexplicably tiny value of dark energy density and the realization that M-theory encompasses an unfathomably vast landscape of possible solutions, only a minute fraction of which correspond to anything resembling the universe that we inhabit.
Confronted with such a deep mystery, the scientific community ought to be willing to entertain plausible explanatory hypotheses that may appear to be unconventional or even radical. However, such hypotheses, to be taken seriously, must:
• be consilient with the key paradigms of “adjoining” scientific fields,
• generate falsifiable predictions, and
• generate falsifiable retrodictions.
The SB hypothesis satisfies these criteria. In particular, it generates a falsifiable retrodiction that the physical laws and constants that prevail in our cosmos will be biophilic—which they are.
Baez, J. 1998 on-line commentary on The Life of the Cosmos (available at
Barrow, J. and Tipler, F. 1988 The Anthropic Cosmological Principle, Oxford University Press.
Bjorken, J. 2004 “The Classification of Universes,” astro-ph/0404233.
Cleland, C. 2001 “Historical science, experimental science, and the scientific method,” Geology, 29, pp. 978-990.
Cleland, C. 2002 “Methodological and Epistemic Differences Between Historical Science and Experimental Science,” Philosophy of Science, 69, pp. 474-496.
Cleland, C. 2004 “Historical Science and the Use of Biosignatures,” unpublished summary of presentation abstracted in International Journal of Astrobiology, Supplement 2004, p. 119.
Davies, P. 1999 The Fifth Miracle, Simon & Schuster.
Dyson, F. 1979 Disturbing the Universe, Harper & Row.
Dyson, F. 1988 Infinite in All Directions, Harper & Row.
Gardner, J. 2000 “The Selfish Biocosm: Complexity as Cosmology,” Complexity, 5, no. 3, pp. 34-45.
Gardner, J. 2001 “Assessing the Robustness of the Emergence of Intelligence: Testing the Selfish Biocosm Hypothesis,” Acta Astronautica, 48, no. 5-12, pp. 951-955.
Gardner, J. 2002 “Assessing the Computational Potential of the Eschaton: Testing the Selfish Biocosm Hypothesis,” Journal of the British Interplanetary Society 55, no. 7/8, pp. 285-288.
Gardner, J. 2003 Biocosm, Inner Ocean Publishing.
Gee, H. 1999 In Search of Deep Time, The Free Press.
Goldsmith, D. 2004 “The Best of All Possible Worlds,” Natural History, 5, no. 6, pp. 44-49.
Greene, B. 2004 The Fabric of the Cosmos, Knopf.
Hawking, S. and Hertog, T. 2002 “Why Does Inflation Start at the Top of the Hill?” hep-th/0204212.
Henderson, L. 1913 The Fitness of the Environment, Harvard University Press.
Hogan, C. 2004 “Quarks, Electrons, and Atoms in Closely Related Universes,” astro-ph/0407086.
Khoury, J., Ovrut, B. A., Seiberg, N., Steinhardt, P., and Turok, N. 2001 “From Big Crunch to Big Bang,” hep-th/0108187.
Kurzweil, R. 1999 The Age of Spiritual Machines, Viking.
Linde, A. 2002 “Inflation, Quantum Cosmology and the Anthropic Principle,” hep-th/0211048.
Linde, A. 1998 “The Self-Reproducing Inflationary Universe,” Scientific American, 9(20), pp. 98-104.
Livio, M. 2003 “Cosmology and Life,” astro-ph/0301615.
Lloyd, S. 2000 “Ultimate Physical Limits to Computation,” Nature, 406, pp. 1047-1054.
Mitton, S. 2005 Conflict in the Cosmos: Fred Hoyle’s Life in Science, Joseph Henry Press.
Oldershaw, R. 1988 “The new physics: physical or mathematical science?” American Journal of Physics, 56(12).
Pagels, H. 1983 The Cosmic Code, Bantam.
Rees, M. 1997 Before the Beginning, Addison Wesley.
Rees, M. 2000 Just Six Numbers, Basic Books.
Smolin, L. 1997 The Life of the Cosmos, Oxford University Press.
Smolin, L. 2004 “Scientific Alternatives to the Anthropic Principle,” hep-th/0407213.
Steinhardt, P. and Turok, N. 2001 “Cosmic Evolution in a Cyclic Universe,” hep-th/0111098.
Susskind, L. 2003 “The Anthropic Landscape of String Theory,” hep-th/0302219.
Tipler, F. 1994 The Physics of Immortality, Doubleday.
Vilenkin, A. 2004 “Anthropic predictions: The Case of the Cosmological Constant,” astro-ph/0407586.
von Neumann, J. 1948 “On the General and Logical Theory of Automata.”
Weinberg, S. 1999 “A Designer Universe?” New York Review of Books, 21 October.
Wheeler, J. 1996 At Home in the Universe, AIP Press.
Wilson, E. O. 1998 “Scientists, Scholars, Knaves and Fools,” American Scientist, 86, pp. 6-7.
[2] The metaphor furnished by the familiar process of artificial selection was Darwin’s crucial stepping stone. Indeed, the practice of artificial selection through plant and animal breeding was the primary intellectual model that guided Darwin in his quest to solve the mystery of the origin of species and to demonstrate in principle the plausibility of his theory that variation and natural selection were the prime movers responsible for the phenomenon of speciation.
[3] Both defects were emphasized by Susskind in a recent on-line exchange with Smolin. Smolin has argued that his CNS hypothesis has not been falsified on the first ground (Smolin, 2004) but conceded that his conjecture lacks any hypothesized mechanism that would endow the putative process of proliferation of black-hole-prone universes with a heredity function:
As Smolin noted in the same paper, it is crucial that such a mechanism exist in order to avoid the conclusion that each new universe’s set of physical laws and constants would constitute a merely random sample of the vast parameter space permitted by the extraordinarily large “landscape” of M-theory-allowed solutions:
It is important to emphasize that the process of natural selection is very different from a random sprinkling of universes on the parameter space P. This would produce only a uniform distribution prandom(p). To achieve a distribution peaked around the local maxima of a fitness function requires the two conditions specified. The change in each generation must be small so that the distribution can “climb the hills” in F(p) rather than jump around randomly, and so it can stay in the small volume of P where F(p) is large, and not diffuse away. This requires many steps to reach local maxima from random starts, which implies that long chains of descendants are needed.
[4] Wilson has identified consilience as one of the “diagnostic features of science that distinguishes it from pseudoscience” (Wilson, 1998):
© 2005 James N. Gardner. Reprinted with permission. |
49132e39938b228d | Theoretical chemistry
From Wikipedia, the free encyclopedia
Theoretical chemistry seeks to provide explanations of chemical and physical observations. If the properties derived from quantum theory give a good account of the phenomena in question, we derive further consequences using the same theory; if the derived consequences fall too far from the experimental evidence, we move to a different theory. G. N. Lewis proposed that chemical properties originate from the electrons of the atom's valence shell, and ever since, theoretical chemistry has dealt with modelling the outer electrons of interacting atoms or molecules in a reaction. Theoretical chemistry uses the fundamental laws of physics (Coulomb's law, kinetic energy, potential energy, the virial theorem, Planck's law, the Pauli exclusion principle and many others) to explain and also to predict observed chemical phenomena. The term quantum chemistry, which derives from Bohr's quantized model of the electron in the atom, applies to both the time-independent Schrödinger and the time-dependent Dirac formulations.
In general one has to distinguish the theoretical approach (the level of theory, such as Hartree-Fock (HF), coupled cluster, relativistic treatments, etc.) from the mathematical formalism (plane waves, spherical harmonics, Bloch waves in a periodic potential). Methods that solve iteratively for the energies (eigenvalues) of stationary-state waves in a potential include restricted Hartree-Fock (RHF) and multi-configurational self-consistent field (CASSCF or MCSCF), but the underlying theory is still Schrödinger's. Related areas in theoretical chemistry include the mathematical characterization of bulk materials (e.g. the study of electronic band structure in solid state physics) using the theory of electronic band structure in a periodic crystal lattice. Other theoretical approaches are molecular mechanics and topology. The study of the applicability of well established mathematical theories to chemistry is crucial for metals (e.g. topology applied to small bodies explains the elaborate electronic structures of clusters). This latter area of theoretical chemistry originates from so-called mathematical chemistry. Time-dependent quantum molecular dynamics[1] is a modern approach to the interaction of light with molecules that vibrate and drive reactions in a desired direction.
Time-independent, non-relativistic quantum chemistry is the most widely used formalism of quantum mechanics for solving electronic problems in chemistry. This part of theoretical chemistry may be broadly divided into electronic structure, dynamics, and statistical mechanics. Relativistic quantum chemistry, based on the Dirac equation, on the other hand explains electron phenomena in heavy atoms with complex electronic interactions, i.e. spin-orbit coupling and the relativistic corrections observed for heavy elements such as Re, Os, Ir, Pt, Au, Hg and Pb. Both relativistic and non-relativistic quantum chemistry are used to solve the problem of predicting chemical reactivity, which depends on the electrons.
Some chemical theoreticians apply Car-Parrinello molecular dynamics to provide a thorough bridge between electronic phenomena and displacement phenomena, including properties within organized systems. Currently, many experimental chemists use hybrid gradient-corrected density functionals (e.g. B3LYP) to explain the magnetic properties of metals with unpaired electrons; however, a rigorous theoretical examination shows this to be a misuse of the DFT approach, as electronic spin appears only in the time-dependent Dirac equations.[citation needed] One way to avoid a full 4e Dirac calculation is to use the TD-DFT method, which includes several electronic states for the "same ground geometry". This approach leads to an overemphasis on the orbital part of the wave function when deducing electronic spin properties, without considering the spin equations or the fact that the geometries of the excited and ground states differ.
Theoretical attempts on chemical problems go back to before 1926, but until the formulation of the Schrödinger equation by the Austrian physicist Erwin Schrödinger in that year, the techniques available were rather crude and approximate. Currently, much more sophisticated theoretical approaches, based on quantum field theory and non-equilibrium Green's function theory, are very popular. Green's function theory provides a much closer explanation of electronic transitions than the Hartree-Fock formalism.
In order to explain an observable one has to choose the "appropriate level of theory". For example, some theoretical methods (DFT) may not be appropriate for magnetic coupling or electronic transition properties. Instead, there are more rigorous approaches, such as multireference configuration interaction (MRCI), which accurately and thoroughly explain the observed phenomena in terms of the fundamental interactions. Major components include quantum chemistry, the application of quantum mechanics to the understanding of valence, molecular dynamics, statistical thermodynamics and theories of electrolyte solutions, reaction networks, polymerization, catalysis, molecular magnetism and spectroscopy.
Branches of theoretical chemistry
Quantum chemistry
The application of quantum mechanics or fundamental interactions to chemical and physico-chemical problems. Spectroscopic and magnetic properties are among the most frequently modelled.
Computational chemistry
The application of computer codes to chemistry, involving approximation schemes such as Hartree–Fock, post-Hartree–Fock, density functional theory, semiempirical methods (such as PM3) or force field methods. Molecular shape is the most frequently predicted property. Computers can also predict vibrational spectra and vibronic coupling, and can acquire infra-red data and Fourier-transform it into frequency information. The comparison with predicted vibrations supports the predicted shape.
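As a toy illustration of the Fourier-transform step mentioned above, the following sketch (with made-up mode frequencies, not real spectroscopic data) recovers the vibrational frequencies present in a simulated time-domain signal:

```python
import numpy as np

# Toy example: recover vibrational frequencies from a simulated time-domain signal
# by Fourier transformation (the frequencies are invented for illustration).
dt = 1e-16                                  # time step in seconds
n = 20000                                   # 2 ps trace
t = np.arange(n) * dt
freqs_thz = [50.0, 90.0]                    # two hypothetical modes, in THz
signal = sum(np.cos(2 * np.pi * f * 1e12 * t) for f in freqs_thz)

spectrum = np.abs(np.fft.rfft(signal))
freq_axis = np.fft.rfftfreq(n, d=dt) / 1e12         # THz
peaks = np.sort(freq_axis[np.argsort(spectrum)[-2:]])
print("recovered peaks (THz):", peaks)               # ~[50, 90]
```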
Molecular modelling
Methods for modelling molecular structures without necessarily referring to quantum mechanics. Examples are molecular docking, protein-protein docking, drug design, combinatorial chemistry. The fitting of shape and electric potential are the driving factor in this graphical approach.
Molecular dynamics
Application of classical mechanics for simulating the movement of the nuclei of an assembly of atoms and molecules. The rearrangement of molecules within an ensemble is controlled by Van der Waals forces and promoted by temperature.
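A minimal sketch of the idea, propagating two atoms bound by a Lennard-Jones potential with the velocity-Verlet scheme (reduced units and illustrative parameters only):

```python
import numpy as np

# Velocity-Verlet propagation of two atoms bound by a Lennard-Jones potential
# (reduced units; illustrative parameters only).
def lj_force(r, eps=1.0, sigma=1.0):
    """Radial pair force -dV/dr (positive = repulsive) at separation r."""
    return 24 * eps * (2 * (sigma / r)**12 - (sigma / r)**6) / r

dt, m = 0.002, 1.0
x = np.array([0.0, 1.3])          # positions of the two atoms
v = np.array([0.0, 0.0])          # initial velocities

for step in range(5000):
    f = lj_force(x[1] - x[0])
    a = np.array([-f, f]) / m     # equal and opposite forces on the pair
    v_half = v + 0.5 * dt * a
    x = x + dt * v_half
    f = lj_force(x[1] - x[0])
    a = np.array([-f, f]) / m
    v = v_half + 0.5 * dt * a

print("final separation:", x[1] - x[0])   # oscillates around the LJ minimum at 2**(1/6) ~ 1.12
```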
Molecular mechanics
Modelling of the intra- and inter-molecular interaction potential energy surfaces via a sum of interaction terms. Classical-mechanics Hooke's-law terms for stretching, bending and torsion are used to predict the shapes of known and new molecules.
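A minimal sketch of such a force-field energy expression, with invented force constants and reference values purely for illustration:

```python
import math

# Toy molecular-mechanics energy for a bent three-atom fragment:
# harmonic bond stretches plus a harmonic angle bend (parameters are invented).
def mm_energy(r1, r2, theta,
              k_bond=300.0, r0=1.0,                  # bond force constant, equilibrium length
              k_angle=50.0, theta0=math.radians(104.5)):
    stretch = 0.5 * k_bond * ((r1 - r0)**2 + (r2 - r0)**2)   # Hooke's law stretches
    bend    = 0.5 * k_angle * (theta - theta0)**2            # Hooke's law bend
    return stretch + bend

# The energy rises as the geometry is distorted away from the reference values:
print(mm_energy(1.0, 1.0, math.radians(104.5)))   # 0.0 at the reference geometry
print(mm_energy(1.05, 0.98, math.radians(110)))   # > 0 for a distorted geometry
```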
Mathematical chemistry
Discussion and prediction of molecular structure using mathematical methods without necessarily referring to quantum mechanics. Topology is a branch of mathematics that allows one to predict properties of flexible finite-size bodies such as clusters.
Theoretical chemical kinetics
Theoretical study of the dynamical systems associated with reactive chemicals and the activated complex, and of their corresponding differential equations.
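For instance, a consecutive reaction A → B → C reduces to a small system of rate equations that can be integrated numerically (the rate constants below are arbitrary illustrative values):

```python
# Consecutive first-order reactions A -> B -> C, integrated with a simple Euler step
# (rate constants k1, k2 are arbitrary illustrative values).
k1, k2 = 1.0, 0.5            # 1/s
A, B, C = 1.0, 0.0, 0.0      # initial concentrations
dt = 1e-3

for _ in range(int(5.0 / dt)):           # integrate to t = 5 s
    dA = -k1 * A
    dB = k1 * A - k2 * B
    dC = k2 * B
    A, B, C = A + dA * dt, B + dB * dt, C + dC * dt

print(round(A, 4), round(B, 4), round(C, 4), "sum =", round(A + B + C, 4))
# A has nearly vanished, B has passed through a maximum, and total mass is conserved.
```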
Cheminformatics (also known as chemoinformatics)
The use of computer and informational techniques, applied to extract and organize chemical information, to solve problems in the field of chemistry.
Closely related disciplines
Historically, the major field of application of theoretical chemistry has been in the following fields of research:
• Atomic physics: The discipline dealing with electrons and atomic nuclei.
• Molecular physics: The discipline of the electrons surrounding the molecular nuclei and of movement of the nuclei. This term usually refers to the study of molecules made of a few atoms in the gas phase. But some consider that molecular physics is also the study of bulk properties of chemicals in terms of molecules.
• Physical chemistry and chemical physics: Chemistry investigated via physical methods like laser techniques, scanning tunneling microscope, etc. The formal distinction between both fields is that physical chemistry is a branch of chemistry while chemical physics is a branch of physics. In practice this distinction is quite vague.
• Many-body theory: The discipline studying the effects which appear in systems with large number of constituents. It is based on quantum physics – mostly second quantization formalism – and quantum electrodynamics.
Hence, the theoretical chemistry discipline is sometimes seen[by whom?] as a branch of those fields of research. Nevertheless, more recently, with the rise of the density functional theory and other methods like molecular mechanics, the range of application has been extended to chemical systems which are relevant to other fields of chemistry and physics like biochemistry, condensed matter physics, nanotechnology or molecular biology.
Bibliography
• Attila Szabo and Neil S. Ostlund, Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory, Dover Publications; New Ed edition (1996) ISBN 0-486-69186-1, ISBN 978-0-486-69186-2
• D.J. Tannor, V. Kazakov and V. Orlov, Control of Photochemical Branching: Novel Procedures for Finding Optimal Pulses and Global Upper Bounds, in Time Dependent Quantum Molecular Dynamics, J. Broeckhove and L. Lathouwers, eds., 347-360 (Plenum, 1992)
Quotations
The deepest part of Theoretical Chemistry must end up in Quantum Mechanics.
R. P. Feynman, [2]
Linus Pauling[citation needed]
References
1. ^ 1
2. ^ The Feynman Lectures on Physics |
15683b3452755dcc |
Wheeler's delayed choice experiment is a variant of the classic double slit experiment for photons in which the detecting screen may or may not be removed after the photons have passed through the slits. If it is removed, lenses behind the screen refocus the optics to reveal sharply which slit each photon passed through. How must this experiment be interpreted?
• Does the photon acquire wave/particle properties only at the moment of measurement, no matter how delayed it is?
• Can measurements affect the past retrocausally?
• What was the history of the photon before measurement?
• What are the beables before the decision was made?
A piece of friendly advice (not a criticism): if you are pursuing further insight into quantum mechanics, even just as a hobby, I would encourage you to abandon the "wave/particle duality" framework for thinking about it, at your soonest possible convenience. It really doesn't add any explanatory power, and it doesn't give you any help in understanding the actual mathematical formulation which does have explanatory power. As far as I'm concerned, this idea is a historical relic of the initial total confusion over what was going on with atoms and photons. – Niel de Beaudrap Oct 16 '11 at 11:28
The actual meaning of the colloquial phrase of photons "acquiring particle properties" or "acting like a particle" is really nothing more than saying that photons interact locally and in discrete packages, despite being described much of the time by a spatially distributed wave-function.
Photons, when they are left to travel freely, travel as waves. (The same is true of electrons and other matter/antimatter particles.) But photons can be absorbed by electrons, such as those in light detectors or photoplates; and despite the fact that the wave-function of the photon may be distributed across more than one such detector or more than one cell of the plate, we find that the photon is always absorbed at only one location.
In the old days of quantum mechanics, one would say that the photon "acted like a wave through the slit, and like a particle at the plate". What one would say nowadays is that the photon evolved according to the Schrödinger equation until
• it is interrupted by a measurement device, at which point the wavefunction collapses and gives a definite outcome for whatever that measurement device is measuring; or
• until it has interacted in an uncontrolled (but consistent) manner with enough of the world around it that it decoheres, in which case it ends up being in a probabilistic mixture of states which are stable under that interaction.
This may sound quite similar to the "wave/particle duality" way of saying things, but in practice it gives you a much better shot at understanding how a photon or electron will actually behave when you get your hands on the mathematics.
(Incidentally, the question of "when something counts as a measurement device" is one which is still an open topic at a fundamental level, even though in practical terms we know enough to predict the outcomes of most experiments. A great number of physicists also believe that measurement is in some sense a special case of decoherence. This is all part of understanding the Measurement Problem of quantum mechanics.)
As for the "history" and the "beables" prior to measurement, or prior to the decision of whether to make the measurement or not, these are questions of the interpretation of quantum mechanics. There is no commonly-agreed-upon answer. But the short story is that — no matter how long it took for you to decide whether or not to measure — if you don't measure, the trajectory of the photon is still described by the Schrödinger equation, and you can still cause different "possible paths" to interfere with one another (e.g. in a sum-over-histories description of the evolution of the particle).
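That last point can be illustrated with a toy sum-over-two-paths calculation: when the two path amplitudes are added before squaring, fringes appear; when which-path information makes the paths distinguishable, the probabilities add instead and the fringes vanish. This is only a schematic sketch, not a model of Wheeler's actual apparatus:

```python
import numpy as np

# Toy two-path (double-slit-like) calculation: amplitudes for the two paths to a
# point on the screen, combined with and without which-path information.
x = np.linspace(-10, 10, 201)        # position on the screen (arbitrary units)
phase1 = 2 * np.pi * 0.35 * x        # path-length phase from slit 1 (toy numbers)
phase2 = -2 * np.pi * 0.35 * x       # path-length phase from slit 2
amp1 = np.exp(1j * phase1) / np.sqrt(2)
amp2 = np.exp(1j * phase2) / np.sqrt(2)

interfering   = np.abs(amp1 + amp2)**2               # no which-path info: fringes
distinguished = np.abs(amp1)**2 + np.abs(amp2)**2    # which-path info: no fringes

print("interfering pattern varies between",
      round(float(interfering.min()), 3), "and",
      round(float(interfering.max()), 3))             # close to 0 ... close to 2
print("distinguished pattern is flat at",
      round(float(distinguished[0]), 3))               # 1 everywhere
```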
The concept of "evolution relative to the Schroedinger equation" is an insightful means of considering your questions via a holistic interpretation of the reality to which most of modern physics seems to point. One should recognize that interacting with a measurement device is another aspect of interacting with "the world". This concept of the photon as a wave "interacting with the world" over "many paths" simultaneously is a much more significant element of determining the final outcome of all the interactions in which what we are taught to think of as a photon is involved than a semi-classical interpretation of the photon as a particle some of the time and a wave some of the time might suggest. (What we call a photon is, fundamentally, no more than our perception of the localization of a collection of properties associated with specific, quantum fields via a mode of "bundling" that interact in a pre-defined manner with particles with specific properties.)
"The world" exists in a multi-dimensional framework that includes time. "The world" evolves in time, as must all experiments performed in "the world" that we detect as beings made of matter. The perception of a wave function as being uniquely linked to one photon that is somehow in one position in some temporal interval associated with measurement in a detector may be one cause of the questions that have been posed. Stop thinking of photons as highly localized in space and time in the same manner that one might think of a little ball as being highly localized in space and time when what is called a photon is, in fact, better conceived as a disperse, wave-like field that interacts with localized objects called atoms that we have learned to use to make what we call "particle detectors" (using a concept based on classical thinking).
The stuff that we use to interact with the photon in a detector is in a complex form that we call "an atom". Because it is in this complex, atomic system, it is bound by rules that are defined by quantum physics relative to energy states and other particle properties. The photon in free space (in the classic, quantum description of what physics calls a "photon") is a free agent until it "gets mixed up" with the particle "crowd" that comprises the atom. The atom has a great deal of mass relative to the "photon", and it has a great deal of power to produce what we perceive as localized phenomenon in time and space, because the atom is, due to its mass (and, formally, momentum), a relatively localized phenomenon.
Re-think the photon crudely as electro-magnetic energy in the environment (manifested in quantum fields) that is bundled by an atom due to the atom's relatively high momentum, which forces its probability wave to be relatively localized. Think of the electro-magnetic energy of what we call a "photon" as being highly interactive with "the world" until it has sufficiently interacted with the particle detector's energy "bundling" atoms to produce a result. At that point, we get some output data from the machine.
Because of the highly interactive nature of the photon with the world around it given its "many paths" aspect, we may find the results a bit surprising if we are too used to digging a rut in the same logical path by forming what one author described as "cog-webs" that define the photon as a particle some of the time. If we think of detection of a photon by a measurement device as a means of localizing ("bundling") electro-magnetic energy that is disperse in the environment (that follows "many paths" that overlap with other photons' "many paths"), and think of the speed of light as the rate at which electro-magnetic energy can be localized by atoms (with mass) to produce quantized changes in energy in things called atoms that we can use to detect the electro-magnetic energy in the environment, then it might come as no surprise that we begin to gather data about the environment that exceeds our expectations in some three-dimensional, directional sense in a given, laboratory experiment.
If we use an electro-magnetic energy source in a particular position sending energy in a more or less directional manner to provide most of that electro-magnetic energy being put into a given environment in which an experiment is occurring, it should come as no surprise that we detect information that is biased relative to a specific directional thumb-print in space and time, because most of the energy we gather carries a certain amount of information due to its point of origin, and objects in the related path. Because we are gathering electro-magnetic energy from the environment, should we expect all of the information reflected in what we call "a signal" to originate from one direction? If electro-magnetic energy bundling has a speed associated with it, any changes occurring during the "bundling" process that manipulate the directions from which energy can be gathered into a "bundle" will, most likely, be reflected in the results produced by the "bundling" process.
Don't trap your mind in a logical framework built around a specific laboratory set-up that amounts to a temporal Rube Goldberg machine; such a set-up may be more likely to hide the reality we are observing than to reveal it. It manifests an anthropic inclination to localize source and detector (and thus disperse energy) in a directional sense, an inclination rooted in our habit of perceiving cause and effect in terms of the spatial momentum of large chunks of matter in a universe where entropy generally increases with time. By biasing an experiment to investigate photons as particles we generate results that suggest, what a shock, that a photon is not just a wave but a form of energy that can be localized by atoms after following many paths. Be VERY CAREFUL how you link relativity to quantum physics with what are still defined by many as mass-less particles. The "speed of light" is an extremely classical term premised on an ancient, anthropic perspective. (Feynman even dared to conceive of instantaneous action at a distance in electro-magnetic theory, a concept which, in his original formulation, he later described in negative terms in his Nobel lecture.)
What if Galileo had stepped onto a hillside that was distant from another on which stood a friend with a lantern and a shielding cloak, and conceptualized his experiment as an attempt to measure the rate at which electro-magnetic energy seeped from the environment into his eyes (comprised of atoms with mass and momentum) to form a bundled "quantum" of light energy that would cause the measuring neurons in his retina to fire and transmit a signal to the optical center of his brain localizing the bundle of energy in a specific area of his visual field after he gave the high sign to his distant buddy to remove the shield that blocked the lantern's light? Would we still think that there was something called a photon with a specific speed as it flew through space-time, or would we perceive what we call a "photon" as a bundle of properties associated with energy that is localized by atoms with mass (and momentum), that we once found it easy, given science's classical pedigree, to think of as a little ball flying through space?
If a particle is free in space and not interacting with others in an atom, it is not required to be "quantized". The particle properties of quantum physics are associated with unique quantum fields. The concept of a field was created to explain transmission of energy between objects lacking a physical connection. Fully embrace the concept of waves and fields, and bypass the "wave-particle duality" perspective on light as something that is sometimes one and sometimes the other, along with the strong, directional perspectives that accompany it. Your other questions should fade in the process.
|
b78731d1f1640e06 | Binding Energy
1. The graph of potential energy of two nucleons shows a minimum at 100 MeV, but the binding energy of a deuterium nucleus is close to just 1 MeV.
Since binding energy is the energy required to rip apart the nucleus, hypothetically, should the two values not be the same?
3. ChrisVer
Can you show the graph?
In general the binding energy of two nucleons depends on the nucleon type (e.g. nn and pp are not possible)... For having a bound state you need E<0.
4. Please see the attachment.
5. ChrisVer
I guess that this graph wants to describe the fact that the nucleons will prefer existing within the nuclear radius, rather than going too far away or falling onto each other.
6. No, they don't have to be the same. Look at the hydrogen atom, for instance. The electron's binding energy is 13.6 eV while the minimum energy of the coulomb potential is -∞.
1 person likes this.
7. mfb
The minimum potential is just a lower bound for the binding energy - and you would need particles of "infinite" mass to approach this value. The real nucleons will form some (3-dimensional) wave-function similar to the electron, and have an energy corresponding to a solution of the Schrödinger equation (ignoring relativistic effects).
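To make this concrete, one can solve a toy model: the s-wave bound state of a particle of reduced mass in a finite spherical square well. With a depth and range loosely representative of the nucleon-nucleon attraction (the numbers below are rough illustrative choices, not a fit to data), the binding energy comes out an order of magnitude smaller than the well depth:

```python
import math

# Toy model: s-wave bound state of a finite spherical square well (depth V0, range R),
# compared with the well depth. Parameters are rough illustrative choices.
hbar_c = 197.327        # MeV*fm
mu     = 469.46         # MeV/c^2, reduced mass of the n-p system
V0, R  = 35.0, 2.1      # MeV, fm

def f(B):
    """Bound-state condition k*cot(k*R) + kappa = 0 for binding energy B."""
    k     = math.sqrt(2 * mu * (V0 - B)) / hbar_c
    kappa = math.sqrt(2 * mu * B) / hbar_c
    return k / math.tan(k * R) + kappa

# Simple bisection between 0.1 MeV and V0 - 0.1 MeV.
lo, hi = 0.1, V0 - 0.1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

print("binding energy ~ %.1f MeV for a %.0f MeV deep well" % (0.5 * (lo + hi), V0))
# Roughly 2-3 MeV: far smaller than the depth, and of the order of the deuteron's 2.2 MeV.
```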
|
8925d10b0325346d |
Given a delta function $\alpha\delta(x+a)$ and an infinite energy potential barrier at $[0,\infty)$, calculate the scattered state, calculate the probability of reflection as a function of $\alpha$, momentum of the packet and energy. Also calculate the probability of finding the particle between the two barriers.
I start by setting up the standard equations for the wave function:
$$\begin{align}\psi_I &= Ae^{ikx}+Be^{-ikx} &&\text{when } x<-a, \\ \psi_{II} &= Ce^{ikx}+De^{-ikx} &&\text{when } -a<x<0\end{align}$$
The requirement for continuity at $x=-a$ means
$$Ae^{-ika}+Be^{ika} = Ce^{-ika}+De^{ika}$$
Then the requirement for specific discontinuity of the derivative at $x=-a$ gives
$$ik(-Ce^{-ika}+De^{ika}+Ae^{-ika}-Be^{ika}) = -\frac{2m\alpha}{\hbar^2}(Ae^{-ika}+Be^{ika})$$
At this point I set $A = 1$ (for a single wave packet) and set $D=0$ to calculate reflection and transmission probabilities. After a great deal of algebra I arrive at
$$\begin{align}B &= \frac{\gamma e^{-ika}}{-\gamma e^{ika} - 2ike^{ika}} & C &= \frac{2e^{-ika}}{\gamma e^{-ika} - 2ike^{-ika}}\end{align}$$
(where $\gamma = -\frac{2m\alpha}{\hbar^2}$) and so reflection prob. $R=\frac{\gamma^2}{\gamma^2+4}$ and transmission prob. $T=\frac{4}{\gamma^2+4}$.
Here's where I run into the trouble of figuring out the probability of finding the particle between the 2 barriers. Since the barrier at $0$ is infinite the only leak could be over the delta function barrier at $-a$. Would I want to use the previous conditions but this time set $A=1$ and $C=D$ due to the total reflection of the barrier at $0$ and then calculate $D^*D$?
share|cite|improve this question
Hi Hippie_Eater, and welcome to Physics Stack Exchange! Excellent question :-) I hope you don't mind that I made some of the equations display style to aid readability. – David Z Sep 20 '12 at 17:33
Thank you, that's much better - I am still polishing my TeX-fu so I hope I'll be making it look all sexyfine like this in the future. – Hippie_Eater Sep 20 '12 at 17:37
2 Answers
Hints to the question (v5):
1. OP correctly imposes two conditions because of the delta function potential at $x=-a$, but OP should also impose the boundary condition $\psi(x\!=\!0)=0$ because of the infinite potential barrier at $x\geq 0$.
2. There is zero probability of transmission because of the infinite potential barrier at $x\geq 0$. (Recall that transmission would imply that the particle could be found at $x\to \infty$, which is impossible.)
3. Hence there is a 100 percent probability of reflection, cf. the unitarity of the $S$-matrix. See also this Phys.SE answer.
4. As OP writes, away from the two obstacles, one has simply a free solution to the time-independent Schrödinger equation, namely a linear combination of the two oscillatory exponentials $e^{\pm ikx}$. This solution is non-normalizable over a non-compact interval $x\in ]-\infty,0]$.
5. To make the wave function normalizable, let us truncate space for $x< -K$, where $K>0$ is a very large constant. So now $x\in [-K,0]$. One may then define and calculate the probability $P(-a \leq x\leq 0)$ of finding the particle between the two barriers via the usual probabilistic interpretation of the square of the wave function.
6. If we now let the truncation parameter $K\to \infty$, then we can deduce without calculation that this probability $P(-a \leq x\leq 0)\to 0$ goes to zero.
I updated the answer. – Qmechanic Sep 20 '12 at 21:14
The probability of finding a particle in an interval $a<x<b$ is given by the integral $$\int_a^b \psi^* \psi \, dx ,$$ assuming that your wave function is properly normalised.
So in your case, you should calculate $$\frac{\int_{-a}^0 \psi_{II}^* \psi_{II} \,dx}{\int_{-\infty}^{-a} \psi_{I}^* \psi_{I} \,dx+\int_{-a}^0 \psi_{II}^* \psi_{II} \,dx} . $$
The numerator is the region you are interested in, the denominator takes care of the normalisation so that the probability will come out between 0 and 1. I'll leave it to you to calculate the integrals.
Thank you, I do have a tendency to over-complicate things. But that raises the question, what conditions should I use to figure out $A, B, C, D$? Presumably it would be the standard two regarding the continuity in of $\psi$ in $-a$ and discontinuity of $\psi'$ in $-a$ but I think I'll need more than that? Am I correct in thinking that the barrier at $0$ reflects completely and thus $C=D$? – Hippie_Eater Sep 20 '12 at 20:58
OK, so you have four unknowns ($A,B,C,D$). You already have two conditions, you need two more. As @Qmechanic states, one of the conditions should be $\psi(0)=0$. The other can be got from normalisation ($\int_{-\infty}^0 \psi^* \psi \, dx =1$). – Mistake Ink Sep 20 '12 at 21:06
Note that the scattering wave function $\psi(x)$ is not normalizable. – Qmechanic Sep 20 '12 at 23:52
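Pulling the answers and comments together, here is a minimal numeric sketch (my own, not the official solution) with hypothetical values $a=1$, $k=2$, $\alpha=3$ in units $\hbar=m=1$: it imposes $\psi(0)=0$ and the delta-function jump at $x=-a$, checks that $|B|=|A|$ (total reflection), and evaluates $P(-a\leq x\leq 0)$ on a truncated interval $[-K,0]$ as suggested in the hints.

```python
# A sketch with hypothetical parameters (hbar = m = 1), not the official solution:
# solve for B and C with A = 1 and D = -C (from psi(0) = 0), then integrate.
import numpy as np

a, k, alpha = 1.0, 2.0, 3.0          # assumed barrier position, wave number, delta strength
g = 2.0 * alpha                      # 2*m*alpha/hbar^2 in these units

e_p, e_m = np.exp(1j * k * a), np.exp(-1j * k * a)
M = np.array([[-e_p,               e_m - e_p],                # continuity at x = -a
              [(1j * k - g) * e_p, 1j * k * (e_m + e_p)]])    # derivative jump at x = -a
B, C = np.linalg.solve(M, np.array([e_m, (1j * k + g) * e_m]))
print("|B| =", abs(B))               # ~1: total reflection, as unitarity requires

for K in (10.0, 100.0, 1000.0):      # truncation lengths
    x1 = np.linspace(-K, -a, 200001)
    x2 = np.linspace(-a, 0.0, 2001)
    psi1 = np.exp(1j * k * x1) + B * np.exp(-1j * k * x1)
    psi2 = C * (np.exp(1j * k * x2) - np.exp(-1j * k * x2))
    n1 = np.sum(np.abs(psi1) ** 2) * (x1[1] - x1[0])
    n2 = np.sum(np.abs(psi2) ** 2) * (x2[1] - x2[0])
    print(f"K = {K:7.1f}   P(-a <= x <= 0) = {n2 / (n1 + n2):.5f}")
# The probability falls off roughly like 1/K, in line with hint 6 above.
```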
|
b0d0a0bc94951709 | The Physics Behind Schrödinger's Cat Paradox
Google honors the physicist today with a Doodle. We explain the science behind his famous paradox.
Erwin Schrödinger, one of the fathers of quantum mechanics, is famed for a number of important contributions to physics, especially the Schrödinger equation, for which he received the Nobel Prize in Physics in 1933.
His feline paradox thought experiment has become a pop culture staple, but it was Erwin Schrödinger's work in quantum mechanics that cemented his status within the world of physics.
The Nobel prize-winning physicist would have turned 126 years old on Monday and to celebrate, Google honored his birth with a cat-themed Doodle, which pays tribute to the paradox Schrödinger proposed in 1935 in the following theoretical experiment.
A cat is placed in a steel box along with a Geiger counter, a vial of poison, a hammer, and a radioactive substance. When the radioactive substance decays, the Geiger counter detects it and triggers the hammer to release the poison, which subsequently kills the cat. The radioactive decay is a random process, and there is no way to predict when it will happen. Physicists say the atom exists in a state known as a superposition—both decayed and not decayed at the same time.
Until the box is opened, an observer doesn't know whether the cat is alive or dead—because the cat's fate is intrinsically tied to whether or not the atom has decayed and the cat would, as Schrödinger put it, be "living and dead ... in equal parts" until it is observed. (More physics: The Physics of Waterslides.)
In other words, until the box is opened, the cat's state is completely unknown and, therefore, the cat is considered to be both alive and dead at the same time.
"If you put the cat in the box, and if there's no way of saying what the cat is doing, you have to treat it as if it's doing all of the possible things—being living and dead—at the same time," explains Eric Martell, an associate professor of physics and astronomy at Millikin University. "If you try to make predictions and you assume you know the status of the cat, you're [probably] going to be wrong. If, on the other hand, you assume it's in a combination of all of the possible states that it can be, you'll be correct."
Upon looking at the cat, an observer would immediately know if the cat was alive or dead, and the "superposition" of the cat—the idea that it was in both states—would collapse into either the knowledge that "the cat is alive" or "the cat is dead," but not both.
Schrödinger developed the paradox, says Martell, to illustrate a point in quantum mechanics about the nature of wave functions.
"What we discovered in the late 1800s and early 1900s is that really, really tiny things didn't obey Newton's Laws," he says. "So the rules that we used to govern the motion of a ball or person or car couldn't be used to explain how an electron or atom works."
At the very heart of quantum theory—which is used to describe how subatomic particles like electrons and protons behave—is the idea of a wave function. A wave function describes all of the possible states that such particles can have, including properties like energy, momentum, and position.
"The wave function is a combination of all of the possible wave functions that exist," says Martell. "A wave function for a particle says there's some probability that it can be in any allowed position. But you can't necessarily say you know that it's in a particular position without observing it. If you put an electron around the nucleus, it can have any of the allowed states or positions, unless we look at it and know where it is."
|
ec828588c561357a | The Astounding Link Between the P≠NP Problem and the Quantum Nature of Universe
With some straightforward logic, one theorist has shown that macroscopic quantum objects cannot exist if P≠NP, which suddenly explains one of the greatest mysteries in physics
The paradox of Schrodinger’s cat is a thought experiment dreamed up to explore one of the great mysteries of quantum mechanics—why we don’t see its strange and puzzling behaviour in the macroscopic world.
The paradox is simple to state. It involves a cat, a flask of poison and a source of radiation; all contained within a sealed box. If a monitor in the box detects radioactivity, the flask is shattered, releasing the poison and killing the cat.
The paradox comes about because the radioactive decay is a quantum process and so is in a superposition of states until observed. The radioactive atom is both decayed and undecayed at the same time.
But that means the cat must also be in a superposition of alive and dead states until the box is open and the system is observed. In other words, the cat must be both dead and alive at the same time.
Nobody knows why we don’t observe these kinds of strange superpositions in the macroscopic world. For some reason, quantum mechanics just doesn’t work on that scale. And therein lies the mystery, one of the greatest in science.
But that mystery may now be solved thanks to the extraordinary work of Arkady Bolotin at Ben-Gurion University in Israel. He says the key is to think of Schrodinger’s cat as a problem of computational complexity theory. When he does that, it melts away.
First some background. The equation that describes the behaviour of quantum particles is called Schrodinger’s equation. It is relatively straightforward to solve for simple systems such as a single quantum particle in a box and predicts that these systems exist in a quantum superposition of states.
In principle, it ought to be possible to use Schrödinger’s equation to describe any object regardless of its size, perhaps even the universe itself. This equation predicts that the system being modelled exists in a superposition of states, even though this is never experienced in our macroscopic world.
The problem is that the equation says nothing about how large an object needs to be before it obeys Newtonian mechanics rather than the quantum variety.
Now Bolotin thinks he knows why there is a limit and where it lies. He says there is an implicit assumption when physicists say that Schrödinger’s equation can describe macroscopic systems. This assumption is that the equations can be solved in a reasonable amount of time to produce an answer.
That’s certainly true of simple systems but physicists well know that calculating the quantum properties of more complex systems is hugely difficult. The world’s most powerful supercomputers cough and splutter when asked to handle systems consisting of more than a few thousand quantum particles.
That leads Bolotin to ask a perfectly reasonable question. What if there is no way to solve Schrödinger’s equation for macroscopic systems in a reasonable period of time? “If it were so, then quantum theoretical constructions like “a quantum state of a macroscopic object” or “the wave function of the universe” would be nothing more than nontestable empty abstractions,” he says.
He then goes on to prove that this is exactly the case, with one important proviso: that P ≠ NP. Here’s how he does it.
His first step is to restate Schrödinger's equation as a problem of computational complexity. For a simple system, the equation can be solved by an ordinary computer in a reasonable time, so it falls into the class of computational problems known as NP.
Bolotin then goes on to show that the problem of solving the Schrödinger equation is at least as hard or harder than any problem in the NP class. This makes it equivalent to many other head-scratchers such as the travelling salesman problem. Computational complexity theorists call these problems NP-hard.
What’s interesting about NP-hard problems is that they are mathematically equivalent. So a solution for one automatically implies a solution for them all. The biggest question in computational complexity theory (and perhaps in all of physics, if the computational complexity theorists are to be believed), is whether they can be solved in this way or not.
The class of problems that can be solved quickly and efficiently is called P. So the statement that NP-hard problems can also be solved quickly and efficiently is the famous P=NP equation.
But since nobody has found such a solution, the general belief is that they cannot be solved in this way. Or as computational complexity theorists put it: P ≠ NP. Nobody has yet proved this, but most theorists would bet their bottom dollar that it is true.
Schrödinger’s equation has a direct bearing on this. If the equation can be quickly and efficiently solved in all cases, including for vast macroscopic states, then it must be possible to solve all other NP-hard problems in the same way. That is equivalent to saying that P=NP.
But if P is not equal to NP, as most experts believe, then there is a limit to the size the quantum system can be. Indeed, that is exactly what physicists observe.
Bolotin goes on to flesh this out with some numbers. If P ≠ NP and there is no efficient algorithm for solving Schrödinger’s equation, then there is only one way of finding a solution, which is a brute force search.
In the travelling salesman problem of finding the shortest way of visiting a number of cities, the brute force solution involves measuring the length of all permutations of routes and then seeing which is shortest. That’s straightforward for a small number of cities but rapidly becomes difficult for large numbers of them.
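To make "rapidly becomes difficult" concrete, here is a tiny sketch (illustrative, not from Bolotin's paper) counting the distinct tours a brute-force search has to examine:

```python
# Brute-force travelling salesman: the number of distinct tours grows factorially.
import math

for n_cities in (5, 10, 15, 20, 25):
    tours = math.factorial(n_cities - 1) // 2     # fix the start city, ignore direction
    print(f"{n_cities:2d} cities -> {float(tours):.3e} tours to check")
```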
Exactly the same is true of Schrödinger’s equation. It’s straightforward for a small number of quantum particles but for a macroscopic system, it becomes a monster.
Macroscopic systems are made up of a number of constituent particles about equal to Avogadro’s number, which is 10^24.
So the number of elementary operations needed to exactly solve this equation would be equal to 2^10^24. That’s a big number!
To put it in context, Bolotin imagines a computer capable of solving it over a reasonable running time of, say, a year. Such a computer would need to execute each elementary operation on a timescale of the order of 10^(-3x10^23) seconds.
This time scale is so short that it is difficult to imagine. But to put it in context, Bolotin says there would be little difference between running such a computer over one year and, say, one hundred billion years (10^18 seconds), which is several times longer than the age of the universe.
What’s more, this time scale is considerably shorter than the Planck timescale, which is roughly equal to 10^-43 seconds. It’s simply not possible to measure or detect change on a scale shorter than this. So even if there was a device capable of doing this kind of calculating, there would be no way of detecting that it had done anything.
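The arithmetic behind these figures is easy to reproduce with a back-of-the-envelope sketch (taking a year as roughly 3.15 x 10^7 seconds, an assumption consistent with the article's numbers):

```python
# Reproducing the article's estimate: how fast each elementary operation would
# have to be to brute-force a wave function of ~10^24 particles within one year.
from math import log10

log_ops  = (10 ** 24) * log10(2)        # log10 of 2**(10**24) operations
log_t_op = log10(3.15e7) - log_ops      # log10 of the required seconds per operation

print(f"log10(operations needed)      ~ {log_ops:.2e}")
print(f"log10(seconds per operation)  ~ {log_t_op:.2e}")
print("log10(Planck time in seconds) ~ -43")
# Each operation would have to finish in ~10^(-3e23) s, absurdly far below the
# Planck scale, which is the article's point.
```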
“So, unless the laws of physics (as we understand them today) were wrong, no computer would ever be able to execute [this number of] operations in any reasonable amount time,” concludes Bolotin.
In other words, macroscopic systems cannot be quantum in nature. Or as Bolotin puts it: “For anyone living in the real physical world (of limited computational resources) the Schrodinger equation will turn out to be simply unsolvable for macroscopic objects.”
That’s a fascinating piece of logic in a remarkably clear and well written paper. It also raises an interesting avenue for experiment. Physicists have become increasingly skilled at creating conditions in which ever larger objects demonstrate quantum behaviour.
The largest quantum object so far—a vibrating silicon springboard—contained around 1 trillion atoms (10^12), significantly less than Avogadro's number. But Bolotin's work suggests a clear size limit.
So in theory, these kinds of experiments provide a way to probe the computational limits of the universe. What’s needed, of course, is a clear prediction from his theory that allows it to be tested experimentally.
There is also a puzzle. There are well known quantum states that do contain Avogadro's number of particles: these include superfluids, superconductors, lasers and so on. It would be interesting to see Bolotin's treatment of these from the point of view of computational complexity.
In these situations, all the particles occupy the same ground state, which presumably significantly reduces the complexity. But by how much? Does his approach have anything to say about how big these states can become?
Beyond that, the questions come thick and fast. What of the transition between quantum and classical states—how does that happen in terms of computational complexity? What of the collapse of stars, which are definitely classical objects, into black holes, which may be quantum ones?
And how does the universe decide whether a system is going to be quantum or not? What is the mechanism by which computational complexity exerts its influence over nature? And so on…
The computational complexity theorist Scott Aaronson has long argued that the most interesting problems in physics are intricately linked with his discipline. And Bolotin’s new work shows why. It’s just possible that computational complexity theory could be quantum physics’ next big thing.
Ref: arxiv.org/abs/1403.7686 : Computational Solution to Quantum Foundational Problems
|
486be6cd99690a6e | [shroh-ding-er, shrey-; Ger. shrœ-ding-uhr]
Schrödinger, Erwin, 1887-1961, Austrian theoretical physicist. He was educated at Vienna, taught at Breslau and Zürich, and was professor at the Univ. of Berlin (1927-33), fellow of Magdalen College, Oxford (1933-36), and professor at the Univ. of Graz (1936-38), the Dublin Institute for Advanced Studies (1940-57), and the Univ. of Vienna (1957-61). Schrödinger is known for his mathematical development of wave mechanics (1926), a form of quantum mechanics (see quantum theory), and his formulation of the wave equation that bears his name. The Schrödinger equation is the most widely used mathematical tool of the modern quantum theory. For this work he shared the 1933 Nobel Prize in Physics with P. A. M. Dirac.
See studies by C. W. Kilmister, ed. (1987) and W. J. Moore (1989).
The Schrödinger's Cat Trilogy is a trilogy of novels by Robert Anton Wilson. The trilogy (1980-81) consists of The Universe Next Door, The Trick Top Hat, and The Homing Pigeons, each taking place in a series of separate and slightly distinct universes. Wilson is also co-author of The Illuminatus! Trilogy, and Schrödinger's Cat is a sequel of sorts to the earlier trilogy, re-using several of the same characters and carrying on many of the themes of the earlier work.
The one-volume edition currently in print is significantly shorter than the original three-volume edition.
The name Schrödinger's Cat comes from a famous thought experiment in quantum mechanics. This book series is not to be confused with In Search of Schrödinger's Cat, a popular science book about quantum theory.
Taking place in Unistat, which is the novel's parallel to the United States, the novels have intertwining plots involving a wide array of characters, including:
• Epicene Wildeblood, a.k.a. Mary Margaret Wildeblood, a transsexual who throws great parties
• Frank Dashwood, president of Orgasm Research
• Markoff Chaney, a midget prankster
• Hugh Crane, a.k.a. Cagliostro the Great, a mystic and magician
• Furbish Lousewart V, author and President of Unistat
• Marvin Gardens, author and cocaine addict
• Eve Hubbard, scientist and alternate President of Unistat
Series summary
In The Universe Next Door, the President of Unistat is Furbish Lousewart V; in that universe, a terrorist organization known as Purity of Essence, named after General Ripper's obsession in the film Dr. Strangelove, threatens to detonate nuclear devices in major cities all over Unistat. Also mirroring Dr. Strangelove, Unistat has an automated device that will send nuclear missiles to Russia in the event of such an attack. Russia has a similar device to bomb China, and so on.
In The Trick Top Hat, President Hubbard, a woman, promotes a scientific approach to the improvement of life, offering rewards to anyone who can design a robot to do their job or develop methods to prolong life. Eventually Unistat becomes a Utopia. She reorganizes the whole legal system into three categories: victimless crimes, which have no punishment; monetary crimes, which involve debt and payment; and serious crimes, such as murder, which result in being sent to Hell, a place like jail but not quite. It is encased in laser shielding and is like a primitive world all its own. It is, in fact, the State of Mississippi. The original Pocket Books edition of "The Trick Top Hat" contains many passages, some sexually explicit, that are not included in later editions, including the Dell softcover. Much of this material first appeared in Wilson's earlier novel, The Sex Magicians, published as porn by Sheffield House in 1973.
The third book, The Homing Pigeons, has President Kennedy. It has very little to do with the President, though at the end it keeps switching universes, some of which contain President Kennedy, others President Lousewart, and still others in which Hubbard is president. Like "The Trick Top Hat," "The Homing Pigeons" also has material in the Pocket Books edition that is not in later editions. Unlike "The Trick Top Hat," however, the material that was cut out did not contain particularly sexually explicit content.
There are many plot lines running through these books. One follows Markoff Chaney, a midget, and his pranks played on the world that continuously screws him over. Most of his pranks are played on Dr. Dashwood, of Orgasm Research. However, the most important plot line follows the path of one Hugh Crane, who may or may not be this Universe's Hagbard Celine, a character that is an obvious representation of Wilson himself.
Another follows an "Ithyphallic Eidolon", removed from a male-to-female transsexual named Epicene (or Mary Margaret) Wildeblood. She puts it on display on her mantelpiece, and it gets stolen. It passes through the vicinity of almost every character in the series at least once.
There are dozens of conspiracy theories, strange loops, satire and paranoia included within those pages. In addition, there are numerous references to other works and occasional outright appropriation of characters from them (including cameos by Captain Ahab and Lemuel Gulliver, among others). In addition, a great many of the character names are either puns (Bertha van Ation, Juan Tootrego) or references to historical personages (Blake Williams refers to the poet William Blake, Francis Dashwood's name refers to Sir Francis Dashwood).
Tanstagi, an acronym standing for There Ain't No Such Thing As Government Interference, is the motto of the Invisible Hand Society, an originally fictional organization invented in the Schrödinger's Cat Trilogy. The acronym was deliberately intended as a reference to Robert A. Heinlein's TANSTAAFL principle.
The Tanstagi principle is meant to imply that the invisible hand of the free market applies to government as well. In other words, contrary to traditional ideas of laissez-faire capitalism, government interference in the free market is impossible, since governments are inextricably a part of the market as a whole. Thus, true laissez-faire conditions are impossible, since the government will always affect the market. An example of this is the defense industry: since the government is the single biggest customer of this industry, it logically follows that this sector of the free market is inextricably tied up with government interference.
While it was first introduced in a novel, people claiming to be members or know of chapters of the Invisible Hand Society have occasionally appeared in editorial pages and on the Internet.
Language and invented slang
The Schrödinger's Cat trilogy is a fictional story, with much interpersonal dialogue between characters. This dialogue frequently makes use of slang words invented by the author. These words are all used in the expression of natural human actions which are typically taboo to speak about in modern western culture. They are not nonsense coinages, but make use of the names of politically notable persons. Examples include "Potter Stewarting", an expression used as a substitute for a common obscenity that refers to the act of copulation, and "Burgering", referring to the act of voiding one's bowels (most likely referring to Warren E. Burger, Chief Justice of the United States from 1969-1986).
Publication details
• Wilson, Robert (1988). Schrodinger's Cat Trilogy. New York: Dell Publishing.
|
ef64924c34b2fde6 | Frank Znidarsic 2
## Znidarsic, Frank P.E., PART 2:
# Online preview of the history of “Zero Point Technologies”: (Open the original page links to see the numerous pictures and videos)
(see picture on original webpage) This author conducts a cryogenic zero point energy experiment. The results were published by Hal Fox in “New Energy News”, Vol. 5, p. 19.
In the mid 1970s Ray Frank (the owner and president of the Apparatus Engineering Company, one of my former employers) assigned this author the task of building a ground monitoring relay. In an effort to complete this assignment, I began to experiment with coils, current transformers, and magnetic amplifiers. I succeeded in developing the device. We sold many hundreds of them to many mining companies. I applied the knowledge that I gained to the design of an electronic levitational device. To my dismay, I discovered that no combination of electrical coils would induce a gravitational field.
In the mid 1980’s, a friend, Tom C. Frank, gave me the book “The Quest for Absolute Zero” by Kurt Mendelssohn. In his book, Mendelssohn disclosed that the relationship between the forces changed at cryogenic temperatures. This was the clue that I needed. Things began to come together. In 1989, I wrote my first book on the subject, “Elementary Antigravity”. This book caught the eye of Ronald Madison, a far-sighted manager at the Pennsylvania Electric Company (my past employer). In 1991, Ron persuaded me to go to Texas and visit with Dr. Harold Puthoff. Puthoff’s work is based on the ideas of Andrei Sakharov. My work is based on the work of Kurt Mendelssohn. It is truly astounding that Puthoff and I, each following separate paths, independently arrived at almost the same conclusions.
Prior to meeting Puthoff, I knew that the relationship between the forces changed in condensed cryogenic systems. Puthoff explained that the phenomenon that I had discovered was the zero point interaction. Zero point interactions have now been discovered in other non-cryogenic condensed systems. Zero point phenomena are exhibited in cold fusion cells and in the gravitomagnetic effects produced by rotating superconductors.
In 1989 this author came out with his first book “Elementary Antigravity”. In chapter 10 of this book the relationship between the forces within cryogenic systems was examined. In 1996 an intermediate version of this material was published in the “Journal of New Energy” Vol 1, No 2. The remainder of this chapter is essentially a rewrite of these two works. As then, the study of cryogenic phenomena is very instructive. A better understanding of all zero point technologies can be gained through the study of cryogenic phenomena.
” Zero point energy is the energy that remains after a substance is cooled to absolute zero. ” Dr. Hal Puthoff
It is well known that superconductors offer no resistance to electrical currents. Less well known, but even more amazing, are the low temperature superfluids. These fluids flow without friction. Once set into motion, they never slow down. Quantum interactions are limited to atomic distances in normal substances. In superconductors and superfluids quantum interactions are observed on a macroscopic scale. The normal interaction of the magnetic and electric field is very different in a superconductor. In normal conductors changing fields are required to induce other fields. In superconductors static fields can also induce other fields.1 At the root of these effects lies a dramatic change in the permittivity and permeability of a superconductor (electron condensation). Ordinarily this change only affects the electromagnetic field. This author has developed techniques to coerce the gravitational and nuclear forces to participate in the condensation. A new look at the electromagnetic effects will lead to a deeper understanding of the zero point interaction.
The electric field of an isolated electric charge is that of a monopole. Dr. George Mathous, instructor at Indiana University of Pennsylvania, commonly described the electric field of an isolated charge by stating, “The field drops off with the square of the distance and does not saturate.” In lay words this means that the field diverges outward and extends to infinity.
It is instructive to look at the range and the strength of the electric field. The range of the field associated with an isolated electric charge is infinite. Isolation requires resistance. The electrical resistance of a superconductor is zero. No isolated charges can exist within a superconductor. The infinite permittivity of a superconductor confines the electric field within the superconductor. No leakage flux escapes. The maximum range of the electric field equals the length of the superconductor.
Magnetic flux lines surround atoms and nucleons. The length of the shortest of these flux lines is measured in fermis (femtometers). The magnetic permeability of a superconductor is zero. All magnetic flux lines are expelled. This phenomenon is known as the Meissner effect. The minimum range of the magnetic field equals the circumference of the superconductor.
The infinite permittivity and permeability of superconductors also affect the quantum forces. The quantum forces normally have a very short range of interaction. This range is confined to atomic dimensions. Quantum interactions are observed on a macroscopic scale in superconductors and superfluids. Superconductors only accept currents that are integer multiples of one another. Superfluid helium will spin in a small cup only at certain rotational speeds. These low temperature phenomena vividly demonstrate that the range of the quantum interaction has increased to macroscopic dimensions.
Moses Chan and Eun-Seong Kim cooled and compressed helium and discovered a new phase of matter. They produced the first supersolid.8 The results surprised the physics community. Parts of the solid mass passed through other parts of the solid mass without friction. Friction is produced by the interaction of the electrical forces that bind solids together. These electrical forces are known as Coulombic potentials. The individual Coulombic potentials of normal matter become smoothed out in a supersolid. The electrical forces act in unity to produce a smooth frictionless surface. Nuclear fusion is regulated by the Coulombic potential of the nucleus. The Coulombic potential of the nucleus is also smoothed out in certain Bose condensates. Nuclear fusion can progress by unusual routes in these condensates.
This author attended the American Nuclear Society’s Low Level Nuclear Reaction Conference in June of 1997. The conference was held at the Marriott World Trade Center in Orlando. At that conference James Patterson presented his new composite beads. These beads reduce the radioactivity of nuclear waste. George Miley described his discovery of heavy element transmutations within the CETI beads. The discovery of heavy element transmutations has given the field a new name. It is no longer called “cold fusion.” The process is now called “low level nuclear transmutation”.
The nucleus is surrounded by a strong positive charge. This charge strongly repels other nucleons. Conventional wisdom has it that the only way to get two nucleons to fuse together is to overcome this repulsive effect. In hot fusion scientists have been attempting (for 50 years now) to fuse nucleons together by hurling them at each other at high speed. The nucleons must obtain at least ten thousand electron volts of energy to overcome the electrostatic barrier. The process of surmounting the repulsive electrostatic barrier is akin to traveling swiftly over a huge speed bump. In the case of the speed bump, a loud crash will be produced. In the case of the electrostatic barrier, gamma and X-rays are produced. In conventional hot fusion huge quantities of radiation are given off by this process. If the Patterson cell worked by this conventional process, everyone near it would have been killed.
… During the conference in Orlando, Professor Heinrich Hora, Emeritus Professor of The University of New South Wales, presented his theory of how the electrostatic barrier was being overcome. Hora said that the repulsive positive charges of the nucleons were “screened” by a negatively charged electron cloud.2
Dr. Hora’s theory cannot explain the lack of radiation. In his model the nucleons must still pass over the electrostatic potential barrier. When they do, high energy signatures will be produced.
If the range of the strong nuclear force increased beyond the electrostatic potential barrier, a nucleon would feel the nuclear force before it was repelled by the electrostatic force. In this situation nucleons would pass under the electrostatic barrier without producing any radiation. Could this author’s original idea that electron condensations increase the range of the nuclear forces be correct?
Since the Orlando conference, several new things have come to light.
1. It is now known that John J. Ruvalds discovered high temperature thin film nickel hydrogen superconductors.
Light water cold fusion cells (the CETI cell) are thin film nickel hydrogen structures.
Patent number 4,043,809 states: “High temperature superconductors and method
ABSTRACT: This invention comprises a superconductive compound having the formula: Ni1-x Mx Zy wherein M is a metal which will destroy the magnetic character of nickel (preferably copper, silver or gold); Z is hydrogen or deuterium, x is 0.1 to 0.9; and y, correspondingly, 0.9 to 0.1, and method of conducting electric current with no resistance at relatively high temperature of T>1° K comprising a conductor consisting essentially of the superconducting compound noted above.”
This patent was issued on August 23, 1977 long before cold fusion was discovered. The bulk of the nickel hydrogen material becomes superconductive at cryogenic temperatures, however, this author believes that small isolated areas of superconductivity exist within the material at room temperature.
2. F. Celani, A. Spallone, P. Tripodi, D. Di Gioacchino, S. Pace, INFN Laboratori Nazionali di Frascati, via E.Fermi 40, 00044 Frascati (Italy) discovered superconductivity in palladium deuterium systems.
“... A wire segment (1/4 of total, the most cathodic) showed a very low resistance behavior in some tests (corresponding to R/Ro values much less than 0.05 and in a case less than 0.01) ...”
It appears that the palladium deuterium structure is a room temperature superconductor. Heavy water cold fusion cells are constructed of palladium impregnated with deuterium (deuterium is heavy hydrogen).
3. Superconductors have no need to be negative, New Scientist, issue 2498, May 2005
” Now physicist Julian Brown of the University of Oxford is arguing that protons can also form pairs and sneak through the metallic lattice in a similar manner. In theory, the protons should superconduct, he says. ”
It’s now known that cold fusion cells contain small superconductive regions.3,4 Nuclear reactions proceed in these regions after thermal energy is added to the system. The thermal vibrations invite protons to participate in the condensation. The permeability and the permittivity of the condensation now affect the nuclear forces. This author contends that the range of the strong nuclear force extends beyond the range of the electrostatic potential barrier. The increase in range allows nuclear transmutations to take place without radiation.
The Griggs Machine and Potapov’s Yusmar device have been claimed to produce anomalous energy. The conditions inside of a cavitational bubble are extreme and can reach tens of thousands of degrees C at pressures of 100 million atmospheres or more.5 These horrific pressures and temperatures are still several orders of magnitude too small to drive a conventional hot fusion reaction.
In January of 1998 P. Mohanty and S.V. Khare, of the University of Maryland, College Park, reported: “Sonoluminescence as a Cooperative Many Body Phenomenon”, Physical Review Letters Vol 80 #1, January 1998.
“……..The long range phase correlation encompassing a large number of component atoms results in the formation of a macroscopic quantum coherence……”
A superconductor is a macroscopic quantum coherence. This author believes that the condensed plasma within a cavitation bubble is superconductive. Cavitational implosions produce extreme shock. This shock invites the nuclear force to participate in the condensation. The range of the nuclear force is increased. Nuclear transmutations proceed within the condensation.
It was believed, during the first half of the 20th Century, that antigravity would be discovered shortly. This never happened. By the second half of the 20th Century, mainstream scientists believed that the unification of gravity and electromagnetism could only be obtained at very high energies. These energies would forever be beyond the reach of man’s largest accelerators. Antigravity was relegated to the dreams of cranks.
In 1955 Major Donald E. Keyhoe wrote in “The Flying Saucer Conspiracy”, pages 252-254:
“Even after Einstein’s announcement that electricity, magnetism, and gravity were all manifestations of one force, few people had fully accepted that thought that we might someday neutralize gravity. … I still had no idea how such a G-field could be created.”
In the last decade of the 20th Century, Podkletnov applied mechanical shock to a superconductor. A gravitational anomaly was produced.6 Znidarsic wrote that the vibrational stimulation of a Bose condensate adjoins the gravitational field with the condensate. The gravitational force is then affected by the permittivity and permeability of the condensate. The range of the gravitational interaction decreases by the same order of magnitude that the range of the nuclear force has increased. The strength of the gravitational field within the condensate greatly increases. Gravitomagnetic flux lines are expelled. This takes place at low energies. It has to do with the path of the quantum transition. The relationship will be qualified in later chapters. Hopefully this idea will someday be universally recognized and antigravity will finally become a reality.
On July 12, 1998 the University of Buffalo announced its discovery: CARBON COMPOSITES SUPERCONDUCT AT ROOM TEMPERATURE
“LAS VEGAS — Materials engineers at the University at Buffalo have made two discoveries that have enabled carbon-fiber materials to superconduct at room temperature.
The related discoveries were so unexpected that the researchers at first thought that they were mistaken.
Led by Deborah D.L. Chung, Ph.D., UB professor of mechanical and aerospace engineering, the engineers observed negative electrical resistance in carbon-composite materials, and zero resistance when these materials were combined with others that are conventional, positive resistors…..
This finding of negative resistance flies in the face of a fundamental law of physics: opposites attract. Chung explained that in conventional systems, the application of voltage causes electrons — which carry a negative charge — to move toward the high, or positive end, of the voltage gradient.
But in this case, the electrons move the other way, from the plus end of the voltage gradient to the minus end... “In this case, opposites appear not to attract,” said Chung. The researchers are studying how this effect could be possible... A patent application has been filed on the invention. Previous patents filed by other researchers on negative resistance have been limited to very narrow ranges of the voltage gradient.
In contrast, the UB researchers have exhibited negative resistance that does not vary throughout the entire gamut of the voltage gradient.”7
Electrical engineers know that when electrons “move toward the high, or positive end, of the voltage gradient” power is produced. Have the University of Buffalo scientists discovered how to produce electricity directly from a zero point process?
The ranges of the electric and magnetic fields are strongly affected by a superconducting Bose condensate. An element of shock invites nuclear and gravitational participation. The shock produces vibration. The vibration lowers the elasticity of the space within the condensate. The reduced stiffness is expressed in several ways. The range of the natural forces tends towards the length of the superconductor. The strength of the forces varies inversely with their range. The constants of the motion tend toward the electromagnetic. The effect of the vibration is qualified in Chapters 10 and 11.
The development, or reduction to practice, of zero point technologies will be of great economic and social importance.
1. K. Mendelssohn, “The Quest for Absolute Zero”, McGraw-Hill, New York, 1966
2. Hora, Kelly, Patel, Prelas, Miley, and Tompkins; “Screening in cold fusion derived from D-D reactions”, Physics Letters A, 1993, 138-143
Dr. George Miley’s “Swimming Electron Theory” is based on the idea that electron clusters (a form of condensation) exist between the metallic surfaces of cold fusion electrodes.
3. Cryogenic phenomena are commonly associated with the spin pairing of electrons. The Chubb – Chubb theory points out the fact that electrons pair in the cold fusion process.
4. A. G. Lipson, et al., ” Generation of the Products of DD Nuclear Fusion in High-Temperature Superconductors YBa2Cu3O7-x Near the Superconducting Phase Transition,” Tech. Phys., 40 (no. 8), 839 (August 1995).
5. “Can Sound Drive Fusion in a Bubble” Robert Pool, Science vol 266, 16 Dec 1994
6. “A Possibility of Gravitational Force Shielding by Bulk YBa2Cu3O7-x Superconductor”, E. Podkletnov and R. Nieman, Physica C 203 (1992) pp 441-444.
“Tampere University Technology report” MSU-95 chem, January 1995
“Gravitoelectric-Electric Coupling via Superconductivity. ” Torr, Douglas G. and Li, Ning Foundations of physics letters. AUG 01 1993 Vol6 # 4, Page 371
7. Companies that are interested in technical information on the invention should contact the UB Office of Technology Transfer at 716-645-3811 or by e-mail at
“Apparent negative electrical resistance in carbon fiber composites”, Shoukai Wang, D.D.L. Chung ; The Journal “Composites”, September 1999
8. “Probing Question: What is a supersolid?”, May 13, 2005
# CHAPTER #5, GENESIS. A version of this chapter was published in “Infinite Energy” Vol 1, #5 & #6 1996.
// end of chapter 5
The principles upon which modern science is based can be traced back to the original notions of ancient philosophers. The greatest of these early philosophers was Aristotle (384-322 BC). Aristotle developed his ideas from within through a process of introspection. His ideas are based on the concepts of truth, authenticity, and perfection. The conclusions he came to form the basis of western culture and were held as the absolute truth for nearly 2,000 years. Aristotle founded a planetary system and placed the earth at the center of the universe. In the second century A.D. Aristotle’s system was revised by Ptolemy. In this system, the stars and planets are attached to nine transparent crystalline spheres, each of which rotates above the earth. The ninth sphere, the primum mobile, is the closest to heaven and is, therefore, the most perfect.
Aristotle’s universe is composed of four worldly elements: earth, fire, water, and air, plus a fifth element which is pure, authentic, and incorruptible. The stars and planets are composed of this fifth element. Due to its proximity to heaven, this fifth element possesses God-like properties and is not subject to the ordinary physical laws.
In the seventeenth century, Aristotle’s teachings were still considered to be a fundamental truth. In 1600, William Gilbert published his book “The Magnet”. Little was known of magnetism in Gilbert’s time except that it was a force that emanated from bits of lodestone. Aristotle’s influence upon Gilbert is apparent from Gilbert’s conclusion that magnetism is a result of the pure, authentic nature of lodestone. Gilbert also claimed that the earth’s magnetic field was a direct result of the pure authentic character of the deep earth. In some ways, according to the philosophy of Gilbert, the heavens, the deep earth, and lodestone were close to God.
In 1820, Hans Christian Oersted of the University of Copenhagen was demonstrating an electric circuit before his students. In Oersted’s time electrical current could only be produced by crude batteries. His battery consisted of 20 cups, each containing a dilute solution of sulfuric and nitric acid. In each cup he placed a copper and a zinc electrode. During one of the experiments he placed a compass near the apparatus. To the astonishment of his students, the compass deflected when the electric circuit was completed. Oersted had discovered that an electric current produces a magnetic field. In the nineteenth century, experiments and discoveries, like those of Oersted, began to overturn the long-held ideas of Aristotle. Gilbert’s idea that magnetism is due to a God like influence was also brought into doubt.
In 1824, Michael Faraday argued that if an electrical current affects a magnet then a magnet should affect an electrical current. In 1831, Faraday wound two coils on a ring of soft iron. He imposed a current on the coil #1 and he knew, from the discovery of Oersted, that the current in coil #1 would produce a magnetic field. He expected that this magnetic field would impose a continuous current in coil #2. The magnetic field did impose a current but not the continuous current that Faraday had expected. Faraday discovered that the imposed current on coil #2 appeared only when the strength of the current in coil #1 was varied.
The work of Oersted and Faraday in the first half of the nineteenth century was taken up by James Clerk Maxwell in the latter half of the same century. In 1865, Maxwell wrote a paper entitled “A Dynamical Theory of the Electromagnetic Field”. In the paper he developed the equations that describe the electromagnetic interaction. These equations, which are based on the concept of symmetry, quantify the symmetrical relationship that exists between the electric and magnetic fields.
Maxwell’s equations show that a changing magnetic field induces an electrical field and, conversely, that a changing electrical field (a current) induces a magnetic field. Maxwell’s equations are fundamental to the design of all electrical generators and electromagnets. As such, they form the foundation upon which the modern age of electrical power was built. Maxwell was the first to quantify a basic principle of nature. In particular, he showed that nature is constructed around underlying symmetries. The electric and magnetic fields, while different from each other, are manifestations of a single, more fundamental force. The ideas of Aristotle and Gilbert about the pure, the authentic, and the incorruptible, were replaced by the concept of symmetry. Since Maxwell’s discovery, the concept that nature is designed around a deep, underlying symmetry has proven to be true time and time again. Today most advanced studies in the field of theoretical physics are based upon the principle of symmetry.
In 1687, Isaac Newton published his book, “The Principia”, in which he spelled out the laws of gravitation and motion. In order to accurately describe gravity, Newton invented the mathematics of calculus. He used his invention of calculus to recount the laws of nature. His equations attribute the gravitational force to the presence of a field. This field is capable of exerting an attractive force. The acceleration produced by this force changes the momentum of a mass.
In 1915, Albert Einstein published his “General Theory of Relativity”. The General Theory of Relativity is also a theory of gravity. Einstein’s theory, like the theory of Newton, also demonstrates that gravitational effects are capable of exerting an attractive force. This force can change the momentum of a mass. Einstein’s theory, however, goes beyond Newton’s theory in that it shows that the converse is also true. Any force which results in a change in momentum will generate a gravitational field. Einstein’s theory, for the first time, exposed the symmetrical relationship that exists between force and gravity. This concept of a gravitational symmetry was not, as in the case of the electromagnetic symmetry, universally applied. In the twentieth century, it was applied in a limited fashion by physicists in the study of gravitational waves.
In 1989, this author wrote his first book “Elementary Antigravity”. In this book he revealed a modified model of matter. This model is based on the idea that unbalanced forces exist within matter and that these forces are the source of the gravitational field of matter. In this present work, the symmetrical relationship that exists between the forces is fully developed. The exposure of these relationships will lead to technical developments in the fields of antigravity and energy production. These developments will parallel the developments in electrical technology that occurred following Maxwell’s discovery of the symmetrical electromagnetic relationship. In review, a changing electric field (a current) induces a magnetic field and, conversely, a changing magnetic field induces an electric field. Likewise, a gravitational field induces a force and, conversely, a force induces a gravitational field. The relationship between force, gravity, and the gravitomagnetic field has been known for 100 years. This author was the first to place force in a model of matter. This author’s work is fundamental to the development of zero point technologies. This author’s work on the force/gravity symmetry was published in INFINITE ENERGY Vol. #4, Issue #22, November 1998.
To get an idea of the magnitude of the force required to produce a gravitational field, consider the gravitational field produced by one gram of matter. The amount of gravity produced by one gram of matter is indeed tiny. Now assume that this one gram of matter is converted into energy. A vigorous nuclear explosion will result. If this explosion is contained within a vessel, the outward force on the vessel would be tremendous. This is the amount of force necessary to produce the gravitational field of one gram of matter. This force is produced naturally by the mechanisms that contain the energy of mass within matter.
A third symmetrical relationship exists between the strong nuclear force and the nuclear spin-orbit interaction. A moving nucleon induces a nuclear-magnetic field. The nuclear-magnetic field is not electromagnetic in origin. It is much stronger than the electromagnetic spin-orbit interaction found within atoms. The nuclear-magnetic field tends to couple like nucleons pairwise into stable configurations. The nuclear spin-orbit interaction favors nuclei with equal and even numbers of protons and neutrons. It accounts for the fact that nuclei tend to contain equal numbers of protons and neutrons (Z = A/2). It also accounts for the fact that nuclei with even numbers of protons and neutrons tend to be stable.
The next portion of this chapter is devoted to developing the mathematical relationship that exists between the forces. The reader who is not interested in mathematical details may skip forward to the conclusion without missing any of the chapter’s essential concepts. Following Maxwell’s laws, the known electromagnetic relationship will be derived. Then, by following the same procedure, the unknown gravitational force relationship will be found. The interaction between the strong nuclear force and the nuclear spin-orbit interaction will also be explored.
The magnetic field produced by a changing electrical field (a current) is described by Maxwell's electromagnetic relationship. The formulation given in Equation #1 is known as Gauss' law. In words, Equation #1 states that the change in the number of electric flux lines passing through a closed surface is equivalent to the amount of charge that passes through the surface. The product of this charge and the electrical permittivity of free space is the current associated with the moving charge.
\[ I = \epsilon_0 \frac{d}{dt} \oint \mathbf{E} \cdot d\mathbf{s} \]
Equation #1 The current produced by an electron passing through a closed surface.
\(\epsilon_0\) = the electrical permittivity of free space ; \(I\) = the current in amps ; \(\mathbf{E}\) = the electrical potential in newtons/coulomb (bold script denotes a vector, having a magnitude and a direction).
The product of this current and the magnetic permeability of free space \(\mu_0\) yields Equation #2, the magnetic flux through any closed loop around the flow of current.
\[ F = \mu_0 \epsilon_0 \frac{d}{dt} \oint \mathbf{E} \cdot d\mathbf{s} \]
Equation #2 The magnetic flux surrounding an electrical current.
\(F\) = the magnetic flux in webers ; \(\mu_0\) = the magnetic permeability of free space
Substituting the charge \(q/\epsilon_0\) for the electrical potential yields Equation #3.
\[ F = \mu_0 \epsilon_0 \, \frac{d(q/\epsilon_0)}{dt} \]
Equation #3 The magnetic flux surrounding an electrical current.
Simplifying Equation #3 yields Equation #4. (In taking the derivative, \(\epsilon_0\) is a constant and factors out.)
\[ F = \mu_0 \frac{dq}{dt} \]
Equation #4 The magnetic flux surrounding a current carrying conductor.
Equation #4 states that the magnetic flux around a conductor equals the product of the current flow (in coulombs per second, \(dq/dt\)) and the permeability of free space.
\[ i = \frac{dq}{dt} = \frac{qv}{L} \]
Equation #5 The current flow through a closed surface.
Equation #5 relates the current flow \(i\) in coulombs per second to the product of the charge \(q\), the velocity \(v\), and the reciprocal of the distance \(L\) around which the charge flows. Substituting Equation #5 into Equation #4 yields Equation #6.
\[ F = \mu_0 i \]
Equation #6 The magnetic flux around a current carrying conductor.
Equation #6 gives the magnetic flux around a conductor carrying a constant current. The flux carries the momentum of the moving electrical charges. Its magnitude is proportional to the product of the current \(i\) carried in the conductor and the permeability of free space.
The derivative of Equation #6 is taken to introduce an acceleration into the system. This acceleration manifests itself as a change in the strength or direction of the current flow, and it generates a second electrical field. This second field contributes a force to the system that opposes the acceleration of the electrical charges. The field described by Equation #7 expresses itself as a voltage (joules/coulomb) across an inductor.
\[ E_2 = \mu_0 \frac{di}{dt} \;;\qquad E_2 = L \frac{di}{dt} \ \text{volts} \]
Equation #7 The voltage produced by accelerating charges.
A second analysis derives the relationship between gravity and a changing momentum, employing the same procedure that was used to derive the electromagnetic relationship. Disturbances in the gravitational field propagate at the speed of light. [1, 2, 3, 4, 5] During the propagation interval, induced fields conserve the momentum of the system. Disturbances in a gravitational system induce a gravitomagnetic field. Equation #8 is the gravitational equivalent of Equation #2. It states that the momentum of a moving mass is carried by a gravitomagnetic field whose strength is proportional to the number of gravitational flux lines that pass through an infinite surface.
\[ F_g = \mu_0 \epsilon_0 \frac{d}{dt} \oint \mathbf{E}_g \cdot d\mathbf{s} \]
Equation #8 The gravitomagnetic flux surrounding a moving mass.
\(\mathbf{E}_g\) = the vector gravitational potential in newtons/kg ; \(F_g\) = the gravitomagnetic flux
Substituting the mass \(m\) for the gravitational potential yields Equation #9, the gravitational equivalent of Equation #3.
\[ F_g = \mu_0 \epsilon_0 \, \frac{d(Gm)}{dt} \]
Equation #9 The gravitomagnetic flux surrounding a moving mass.
\(m\) = the mass in kg ; \(G\) = the gravitational constant
Maxwell discovered a relationship between light speed, permittivity, and permeability. This relationship is given by Equation #10.
\[ \mu_0 \epsilon_0 = \frac{1}{c^2} \]
Equation #10 Maxwell's relationship.
Substituting Maxwell's relationship into Equation #9 yields Equation #11, the gravitational equivalent of Equation #4.
\[ F_g = \frac{G}{c^2} \frac{dm}{dt} \]
Equation #11 The gravitomagnetic flux surrounding a moving mass.
Equation #12 states that the mass flow \(I_g\) in kilograms per second (\(dm/dt\)) is the product of the mass \(m\), the velocity \(v\), and the reciprocal of the length \(L\) of a uniform body of mass. Equation #12 is the gravitational equivalent of Equation #5; it is the gravitational mass flow.
\[ I_g = \frac{dm}{dt} = \frac{mv}{L} \]
Equation #12 The mass flow through a closed surface.
Substituting Equation #12 into Equation #11 yields Equation #13, the gravitational equivalent of Equation #6.
\[ F_g = \frac{G}{c^2} \frac{mv}{L} \]
Equation #13 The gravitomagnetic flux around a moving mass.
\(F_g\) = the gravitomagnetic field
Equation #13, the gravitomagnetic field, carries the momentum \(mv\) of a mass moving at constant velocity. The magnitude of this field is proportional to the mass flow in kilograms per second times the ratio of the gravitational constant \(G\) to the square of light speed \(c\).
Momentum “p” is substituted for the product “mv”. Taking the derivative of the result introduces acceleration into the system. This acceleration generates a second gravitational field.
This field is given by Equation #14. This second gravitational field contributes a force to the system which opposes the acceleration of the mass.
\[ E_{2g} = \frac{G}{c^2} \, \frac{dp/dt}{L} \]
Equation #14
The distance \(L\) is the length of the moving mass \(m\). In a gravitational system, this length is equivalent to the gravitational radius \(r\) of the mass. Substituting \(r\) for \(L\) yields Equation #15, which gives the intensity of the induced inertial force. The field described by Equation #15 expresses itself as an applied force (newtons/kilogram).
\[ \text{Induced inertial force} = \frac{G}{c^2 r} \, \frac{dp}{dt} \]
Equation #15 is the general formula of gravitational induction. This formula will be extensively applied in upcoming chapters.
\(G\) (the gravitational constant) = \(6.67 \times 10^{-11}\ \mathrm{N\,m^2/kg^2}\) ; \(c\) (light speed) = \(3 \times 10^{8}\) meters/second ; \(dp/dt\) = the applied force in newtons ; \(r\) = the gravitational radius
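For scale, it may help to evaluate the coupling \(G/c^2\) that appears in Equation #15. The short sketch below uses the constants just listed; the 1 N force and 1 m radius are assumed example values, not from the text.

```python
# Evaluate the coupling in Eq #15 to show how weak the induced field is.
G = 6.67e-11        # gravitational constant, N m^2 / kg^2
c = 3.0e8           # light speed, m/s

coupling = G / c**2
print(f"G/c^2 = {coupling:.2e} m/kg")     # ~7.4e-28: minuscule coupling

force = 1.0         # dp/dt, newtons (assumed example)
r = 1.0             # gravitational radius, meters (assumed example)
Eg = coupling / r * force                 # Eq #15
print(f"induced inertial field = {Eg:.2e} N/kg")
```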
A third analysis derives the relationship between the strong nuclear force and the nuclear spin-orbit interaction, employing the same procedure used to derive the electromagnetic and gravitational relationships. The principle of symmetry requires that a nuclear-magnetic field be produced by the movement of a nucleon. Equation #16 is the nuclear equivalent of Equation #2: it states that a change in the strong nuclear force induces a nuclear-magnetic field whose strength is proportional to the number of strong nuclear flux lines that pass through an infinite surface.
\[ F_n = \mu_0 \epsilon_0 \, \frac{d}{dt} \oint \mathbf{E}_n \cdot d\mathbf{s} \]
Equation #16 The nuclear-magnetic (spin-orbit) field.
The strong nuclear force is nonlinear, so the analysis cannot be fully developed. An estimate of \(R\) at the surface of a nucleon may be obtained empirically:
\[ R = 23\ \text{MeV} \]
The nuclear field described by the equation below expresses itself as a potential (MeV/nucleon) associated with the spin of a nucleon.
\[ \text{Nuclear-magnetic energy} = R \, \frac{d(\text{spin})}{dt} \]
Nature is constructed around underlying symmetries, an idea that has proven correct time and time again; the electroweak theory, for example, is based on the principle of symmetry. The first natural symmetry to be discovered was the relationship between the electric and magnetic fields. A second symmetrical relationship exists between force and gravity, and a third between the strong nuclear force and the nuclear spin-orbit interaction. The nature of these symmetries was explored: the mathematics used to describe the electromagnetic relationship was applied to the gravitational/force relationship, yielding Equation #15, the general formula of gravitational induction. The analysis shows that gravity produces a force and, conversely, that a force produces gravity. Each relationship has a similar formulation and involves the element of time. The relationships differ in that electromagnetism involves a change in the magnetic field while the gravitational/force relationship involves a change in momentum. The nuclear spin-orbit interaction was also explored; in light of what was learned from the study of electromagnetism and gravity, it was determined that the nuclear spin-orbit interaction involves a change in the strong nuclear force.
The formulation of each relationship is the same; however, the constants of the motion (\(L\), \(G/c^2\), and \(R\)) are radically different. Some very profound conclusions have been obtained through the application of the simple concept of symmetry. The remainder of this text will explore, expand, and develop these constructs into a synthesis that will become the foundation of many new futuristic technologies. A main point: the constants of the motion discussed in this chapter change in a vibrationally reinforced condensate.
1. S. Kopeikin (2001), "Testing the relativistic effect of the propagation of gravity by very long baseline interferometry", Astrophys. J. 556, L1-L5.
2. T. Van Flandern (2002),
3. H. Asada (2002), Astrophys. J. 574, L69-L70.
4. S. Kopeikin (2002),
5. T. Van Flandern (1998), "The speed of gravity - what the experiments say", Phys. Lett. A 250, 1-11.
// end of chapter 6
The study of the form, function, and composition of matter has been, and continues to be, one of the greatest intellectual challenges of all time. In ancient times the Greek Empedocles (495-435 B.C.) proposed that matter is composed of earth, air, fire, and water. In 430 B.C. this idea was rejected by the Greek Democritus of Abdera, who believed that the substances of creation are composed of atoms, the smallest bits into which a substance can be divided; any additional subdivision would change the essence of the substance. He called these bits of substance "atomos", from the Greek word meaning "indivisible". Democritus was, of course, correct in his supposition; however, at the time no evidence was available to confirm the idea, and the primitive technology of antiquity could neither confirm nor contest it. Various speculations of this sort continued to be offered and rejected over the next 2,000 years.
The scientific revolution began in the seventeenth century, and with it came the tools to test the theories of matter. By the eighteenth century these tools included methods of producing gases through chemical reactions and the means to weigh the resultant gases. From his studies of the gaseous by-products of chemical reactions, the French chemist Antoine Lavoisier (1743-1794) discovered that the weight of the products of a chemical reaction equals the weight of the original compounds. The principle of the conservation of mass was born. For his achievements, Lavoisier is known today as the father of modern chemistry.
Late in the eighteenth century another new tool began to be applied to the study of matter: electrical technology. The first electrical technique applied to matter, electrolysis, involves passing an electrical current through a conductive solution, which tends to decompose the solution into its elements. For example, if an electric current is passed through water, the water decomposes, producing the element hydrogen at the negative electrode and the element oxygen at the positive electrode. With the knowledge obtained from these new technologies, the English schoolteacher John Dalton (1766-1844) was able to lay down the principles of modern chemistry. Dalton's theory was based on the concepts that matter is made of atoms, that all atoms of the same element are identical, and that atoms combine in whole-number ratios to form compounds.
Electrical technology became increasingly sophisticated during the nineteenth century. Inventions such as the cathode ray tube (a television picture tube is a cathode ray tube) allowed atoms to be broken apart and studied. The first subatomic particle to be discovered was the electron: in 1897, J.J. Thomson demonstrated that the beams seen in cathode ray tubes are composed of electrons. In 1909, Robert Millikan measured the charge of the electron in his now-famous oil drop experiment. Two years later, Ernest Rutherford ascertained the properties of the atomic nucleus by observing the angles at which alpha particles bounce off of it. Niels Bohr combined these ideas and, in 1913, placed the newly discovered electron in discrete planetary orbits around the newly discovered nucleus. The planetary model of the atom was born, and with it the concept of the quantum nature of the atom was established.
As the temperature of matter is increased, it emits correspondingly shorter wavelengths of electromagnetic energy. For example, a heated metal poker first becomes warm and emits long-wavelength infrared heat energy. If the heating is continued, the poker eventually becomes red hot; the red color is due to the emission of shorter-wavelength red light. Heated hotter still, the poker becomes white hot, emitting even shorter wavelengths of light. An astute observer will notice the inverse relationship between the temperature of the emitter and the wavelength of the emission. This relationship extends across the entire electromagnetic spectrum: if the poker could be heated hot enough, it would emit ultraviolet light or X-rays.
The German physicist Max Karl Ludwig Planck studied the light emitted from matter and came to a profound conclusion. In 1900, Planck announced that light waves are given off in discrete particle-like packets of energy called quanta. Today Planck's quanta are known as photons. The energy in each photon of light varies inversely with the wavelength of the emitted light. Ultraviolet, for example, has a shorter wavelength than red light and, correspondingly, more energy per photon. The poker in our example, while only red hot, cannot emit ultraviolet light because its atoms do not possess enough energy to produce it. The sun, however, is hot enough to produce ultraviolet photons, which contain enough energy to break chemical bonds and sunburn the skin. The thermal radiation spectrum cannot be explained by any wave theory; it can, however, be accounted for by the emission of particles of light, or photons. Yet in 1803 Thomas Young discovered interference patterns in light, and interference patterns cannot be explained by any particle theory; they can, however, be accounted for by the interaction of waves. How can light be both a particle and a wave?
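The inverse temperature-wavelength relation and the energy-per-photon relation can be put to numbers. The sketch below is illustrative only; it uses Wien's displacement law (which the text does not name) together with \(E = hc/\lambda\), with standard constants and assumed example temperatures for the poker and the sun.

```python
# Peak emission wavelength and photon energy at that peak; values illustrative.
h = 6.626e-34        # Planck constant, J s
c = 3.0e8            # light speed, m/s
b_wien = 2.898e-3    # Wien's displacement constant, m K

for T in (800.0, 3000.0, 6000.0):        # warm, red-hot, sun-like (K), assumed
    lam_peak = b_wien / T                # Wien: shorter wavelength when hotter
    E_photon = h * c / lam_peak          # energy of one photon at the peak
    print(f"T={T:6.0f} K  peak={lam_peak*1e9:7.0f} nm  E={E_photon:.2e} J")
```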
In 1924, Prince Louis de Broglie proposed that matter possesses wave-like properties. [1] According to de Broglie's hypothesis, all moving matter should have an associated wavelength. The hypothesis was confirmed by an experiment conducted at Bell Labs by Clinton J. Davisson and Lester H. Germer, in which an electron beam was bounced off of a diffraction grating and the reflected beam produced a wave-like interference pattern on a phosphor screen. The mystery deepened: not only does light possess particle-like properties, but matter possesses wave-like properties. How can matter be both a particle and a wave?
Throw a stone in a lake and watch the waves propagate away from the point of impact. Listen to a distant sound that has traveled to you from its source. Shake a rope and watch the waves travel down the rope. Tune in a distant radio station; the radio waves have traveled outward from the station to you. Watch ocean waves as they travel into the shore. In short, waves propagate; it is their nature to do so, and that is what they invariably do. Maxwell's equations unequivocally demonstrate that fields propagate at light speed. Matter waves, however, remain "stuck" in the matter. Why do they not propagate? What "sticks" them? An answer to this question was presented by Erwin Schrödinger and Werner Heisenberg in the Copenhagen interpretation, which states that elementary particles are composed of particle-like bundles of waves. These bundles are known as wave packets, and they move at velocity V. The wave packets are localized (held in place) by the addition of an infinite number of component waves, each with a different wavelength or wave number. An infinite number of waves, each with a different wave number, is required to hold a wave packet fixed in space. This argument has two major flaws: it does not describe the path of the quantum transition, and an infinite number of real component waves cannot exist within a finite universe.
Max Born attempted to sidestep these problems by stating that the wave packets of matter are only mathematical functions of probability. Only real waves can exist in the real world; therefore an imaginary place of residence, called configuration space, was created for the probability waves. Configuration space contains only functions of kinetic and potential energy. Forces are ignored in configuration space.
"Forces of constraint are not an issue. Indeed, the standard Lagrangian formulation ignores them... In such systems, energies reign supreme, and it is no accident that the Hamiltonian and Lagrangian functions assume fundamental roles in a formulation of the theory of quantum mechanics." - Grant R. Fowles, University of Utah
Ordinary rules, including the rules of wave propagation, do not apply in configuration space. The propagation mystery was supposedly solved. This solution sounds like, and has much in common with, those of the ancient philosophers. It is dead wrong!
"Schrödinger never accepted this view, but registered his concern and disappointment that this transcendental, almost psychical interpretation had become universally accepted dogma." - Modern Physics, Serway, Moses, Moyer; 1997
Einstein also believed that something was amiss with the whole idea. His remark "God does not play dice" indicates that he placed little confidence in these waves of probability. For the most part the error made little difference; modern science advanced, and bigger things were discovered. It did, however, make at least one difference: it forestalled the development of gravitational and low-level nuclear technologies for an entire century.
Matter is composed of energy and fields of force. Matter can be mathematically modeled but a mathematical model does not make matter. Matter waves are real, they contain energy, are the essence of mass, and convey momentum. Disturbances in the force fields propagate at light speed.
"This result is rather surprising... since electrons are observed in practice to have velocities considerably less than the velocity of light it would seem that we have a contradiction with experiment." - Paul Dirac, whose equations suggested that the electron propagates at light speed. [11]
Matter does not disperse because it is held together by forces. These forces generate the gravitational field of matter, establish its inertial properties, and set its dynamic attributes. Understanding the nature of the restraining forces provides insight into the quantum transition. The remainder of this chapter will be spent qualifying these forces and the relationship that they share with matter. The ideas that follow are central to this author's work. Readers who have no interest in math may skip to the conclusion without missing the essential details of this chapter. Essentially, the math shows that forces within matter are responsible for many of the properties of matter.
This concept will be extended in Chapter 10. The various fields that compose matter have radically different ranges and strengths. The force that pins the various fields within matter is generated when the amplitude of a field exceeds the elastic limit of space.
A version of this section was published in “Infinite Energy” Vol 4, #22 1998
The matter wave function is composed of various fields. Photons are employed here to represent these fields, since photons exhibit the underlying relationship between the momentum and energy of a field (static or dynamic) in which disturbances propagate at luminal velocities. Consider photons trapped in a massless, perfectly reflecting box. The photon in a box is a simplistic representation of matter. Light has two transverse modes of vibration and carries momentum in the direction of its travel; all three modes would need to be employed in a three-dimensional model, but for the sake of simplicity this analysis considers only a single dimension. The photons in this model represent the matter wave function, and the box represents the potential well of matter. As the photons bounce off the walls of the box, momentum \(p\) is transferred to the walls. Each impact produces a force, and this force generates the gravitational mass associated with the photon in the box. The general formula of gravitational induction, as presented in the General Theory of Relativity [3, 4] (this equation was derived in Chapter 6), is given by Equation #2.
\[ E_g = \frac{G}{c^2 r} \, \frac{dp}{dt} \]
Equation #2 The gravitational field produced by a force.
\(E_g\) = the gravitational field in newtons/kg ; \(G\) = the gravitational constant ; \(r\) = the gravitational radius ; \(dp/dt\) = force
Each time the photon strikes a wall of the box it produces a gravitational field according to Equation #2. The field produced by an impact varies with the reciprocal of distance, \(1/r\); the gravitational field produced by matter varies as the reciprocal of distance squared, \(1/r^2\). This author has ascertained how the \(1/r^2\) gravitational field of matter is produced by a force. It will now be shown that the superposition of a positive field that varies at a \(1/r\) rate over a negative field that varies at a \(1/r\) rate produces the \(1/r^2\) gravitational field of matter. An exact mathematical analysis of the gravitational field produced by the photon in the box will now be undertaken.
\(L\) = the dimensions of the box ; \(p\) = momentum ; \(t\) = the time required for the photon to traverse the box, \(L/c\) ; \(r\) = the distance to point X
The far gravitational field at point X is the vector sum of the fields produced by the impacts on walls A and B.
This field is given below.
\[ E_g \text{ at } x = (1/r \text{ field from wall A}) - (1/r \text{ field from wall B}) \]
Equation #3 The superposition of two fields.
\[ E_g \text{ at } x = \frac{G}{c^2 (r+L)} \frac{dp}{dt} - \frac{G}{c^2 r} \frac{dp}{dt} \]
Equation #4 Simplifying:
\[ E_g \text{ at } x = -\frac{G}{c^2} \frac{dp}{dt} \left[ \frac{L}{r^2 + rL} \right] \]
Equation #5 Taking the limit to obtain the far field:
\[ E_g \text{ at } x = \lim_{r \gg L} \; -\frac{G}{c^2} \frac{dp}{dt} \left[ \frac{L}{r^2 + rL} \right] \]
The result, Equation #7, is the far gravitational field of matter. "Far," in this example, means beyond the wavelength of an elementary particle; in the case of a superconductor, far means longer than the length of the superconductor.
\[ E_g \text{ at } x = -\frac{G}{c^2} \frac{dp}{dt} \frac{L}{r^2} \qquad \text{Equation #7} \]
The momentum of an energy field that propagates at light speed is given by the equation below. [2]
\[ p = E/c \]
\(E\) = the energy of the photon ; \(c\) = light speed ; \(p\) = momentum (radiation pressure)
The amount of force \(dp/dt\) imparted to the walls of the box depends on the dimensions of the box \(L\). Equation #8 gives the force on the walls of the box.
\[ \frac{dp}{dt} = \frac{\Delta p}{\Delta t} = \frac{2E/c}{L/c} = \frac{2E}{L} \qquad \text{Equation #8} \]
Equation #8 was substituted into Equation #7 and a factor of 1/2 was added. The factor of 1/2 is required because the resultant field is produced by two impacts and the energy can only impact one wall at a time.
\[ E_g \text{ at } x = -\frac{1}{2} \, \frac{G}{c^2} \, \frac{2E}{L} \, \frac{L}{r^2} \qquad \text{Equation #9} \]
Equation #9 The far gravitational field produced by energy bouncing in a box.
Equation #10 is Einstein's relationship between matter and energy:
\[ M = E/c^2 \qquad \text{Equation #10} \]
Substituting mass for energy yields Equation #11, Newton's formula for gravity. [5]
\[ E_g \text{ at } x = -\frac{GM}{r^2} \qquad \text{Equation #11} \]
This analysis clearly shows that unbalanced forces within matter generate the gravitational field of matter. [6, 10] These forces result from the impact of energy which flows at luminal velocities.
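The chain of steps from Equation #3 through Equation #11 can be checked symbolically. Below is a minimal sympy sketch, assuming the equations as stated: it combines Equations #4 and #8, takes the limit \(r \gg L\), applies the factor of 1/2, and substitutes \(M = E/c^2\).

```python
# Symbolic check of the photon-in-a-box steps (Eqs #3-#11).
import sympy as sp

G, c, r, L, E, M = sp.symbols('G c r L E M', positive=True)

# Eqs #4 and #8 combined: superposed 1/r fields with dp/dt = 2E/L
Eg = (G/c**2) * (2*E/L) * (1/(r + L) - 1/r)

far = sp.limit(Eg, L, 0) / 2          # far field (r >> L) with the 1/2 factor
print(sp.simplify(far))               # -> -E*G/(c**2*r**2)
print(sp.simplify(far.subs(E, M*c**2)))   # -> -G*M/r**2  (Eq #11)
```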
Note: a version of this section was published in “The Journal of New Energy” Vol 5, September 2000
In 1924 Prince Louis de Broglie proposed that matter has a wavelength associated with it. [1] Schrödinger incorporated de Broglie's idea into his famous wave equation. The Davisson and Germer experiment demonstrated the wave nature of the electron, which was thereafter described as both a particle and a wave. The construct left many lingering questions. How can the electron be both a particle and a wave? Nick Herbert writes in his book "Quantum Reality" (p. 46):
"The manner in which an electron acquires and possesses its dynamic attributes is the subject of the quantum reality question. The fact of the matter is that nobody really these days knows how an electron, or any other quantum entity, actually possesses its dynamic attributes."
Louis de Broglie suggested that the electron may be a beat note. [7] The formation of such a beat note requires disturbances to propagate at light speed, yet matter propagates at velocity V; de Broglie could not demonstrate how the beat note formed. This author's model demonstrates that matter vibrates naturally at its Compton frequency. A standing Compton wave is pinned in place at the elastic limit of space (Chapter 10). A traveling wave component is associated with moving matter. The traveling wave component bounces off the discontinuity produced at the elastic limit of space, Doppler shifting as it reflects. The disturbances propagate at luminal velocities and combine to produce the dynamic de Broglie wavelength of matter: the de Broglie wave is the superposition of the original and the Doppler-shifted waves.
The harmonic vibration of a quantum particle is expressed by its Compton wavelength, given by Equation #1A.
\[ \lambda_c = \frac{h}{Mc} \qquad \text{Equation #1A} \]
Equation #2A gives the relationship between the frequency \(f\) and the wavelength \(\lambda\). Note that the phase velocity of the wave is \(c\).
\[ c = f \lambda \qquad \text{Equation #2A} \]
Substituting Equation #2A into Equation #1A yields Equation #3A, the Compton frequency of matter.
\[ f_c = \frac{Mc^2}{h} \qquad \text{Equation #3A} \]
A Doppler-shifted component of the original frequency is produced by the reflection at matter's surface. The classical Doppler shift is given by Equation #4A.
\[ f_2 = f_1 \left( 1 \pm \frac{v}{c} \right) \qquad \text{Equation #4A} \]
A beat note is formed by the mixing of the Doppler-shifted and original components. This beat note is the de Broglie wave of matter.
Equation #5A expresses a function \(F\) as the sum of two sine waves. A minimum in the beat-note envelope occurs when the component waves are opposed in phase; at time zero the angles differ by \(\pi\) radians, so time zero is a minimum of the envelope. A maximum in the beat envelope occurs when the component waves are aligned in phase. The phases were set equal in Equation #7A to determine the time at which the aligned-phase condition occurs.
The result, Equation #10A, is the de Broglie wavelength of matter. Reflections result from a containment force, and these reflections combine to produce the de Broglie wavelength of matter.
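The beat-note claim is easy to exhibit numerically. The sketch below superposes the Compton wave with its Doppler-shifted reflection (non-relativistic form of Equation #4A) and compares the envelope wavelength to \(h/(mv)\); the electron speed is an assumed example value.

```python
# Beat note between the Compton wave and its Doppler-shifted reflection.
import numpy as np

h, m, c = 6.626e-34, 9.109e-31, 3.0e8    # SI constants, electron mass
v = 1.0e6                                 # assumed electron speed, m/s

k_c = 2*np.pi*m*c/h                       # Compton wavenumber
k_d = k_c*(1 + v/c)                       # Doppler-shifted component

x = np.linspace(0, 3*h/(m*v), 30001)
beat = np.cos(k_c*x) + np.cos(k_d*x)      # superposition; plot to see envelope

print(f"h/(mv)           = {h/(m*v):.4e} m")               # de Broglie
print(f"2*pi/|k_d - k_c| = {2*np.pi/abs(k_d-k_c):.4e} m")  # identical
```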
An analysis was done that describes inertial mass in terms of a restraining force. This force restrains disturbances that propagate at luminal velocities. Consider energy trapped in a perfectly reflecting containment.
This energy in a containment model is a simplistic representation of matter. In this analysis no distinction will be made between baryonic, leptonic, and electromagnetic waves.
The wavelength of the energy represents the Compton wavelength of matter, and the containment represents the surface of matter. The field propagates at light speed; its momentum is equal to \(E/c\). The containment is at rest. The energy is ejected from wall A of the containment with momentum \(p_1\), travels to wall B, hits it, and immediately bounces off with momentum \(p_2\). The energy then travels back to wall A, immediately bounces off, and its momentum is again \(p_1\). This process repeats continuously. If the energy in the containment is evenly distributed, its momentum will be distributed evenly between the forward and backward traveling components. The total momentum of this system is given in Equation #1C.
\[ p_t = \frac{p_1}{2} - \frac{p_2}{2} \qquad \text{(Eq. #1C)} \]
The momentum of a flow of energy is given by Equation #2C.
\[ p = E/c \qquad \text{(Eq. #2C)} \]
\(E\) = energy ; \(c\) = light speed ; \(p\) = momentum
Substituting Eq. #2C into Eq. #1C yields Eq. #3C.
\[ p_t = \frac{E_1}{2c} - \frac{E_2}{2c} \qquad \text{(Eq. #3C)} \]
Given that the containment is at rest, the amount of energy in the containment remains fixed: the quantity of energy traveling in the forward direction equals the quantity traveling in the reverse direction. This is shown in Equation #4C.
\[ E_1 = E_2 \qquad \text{(Eq. #4C)} \]
Substituting Eq. #4C into Eq. #3C yields Eq. #5C.
\[ p_t = \frac{E}{2c} (1 - 1) \qquad \text{(Eq. #5C)} \]
Equation #5C is the total momentum of the system at rest. If an external force is applied to the system, its velocity will change, and the forward and reverse components of the energy will Doppler shift after bouncing off the moving containment walls. The momentum of an energy flow varies directly with its frequency. Given that the number of quanta of energy is conserved, the energy of the reflected quanta varies directly with their frequency. This is demonstrated by Equation #6C.
\[ E_2 = E_1 \left[ \frac{f_f}{f_i} \right] \qquad \text{(Eq. #6C)} \]
Substituting Eq. #6C into Eq. #5C yields Eq. #7C.
\[ p_t = \frac{E}{2c} \left[ \frac{f_{f1}}{f_{i1}} - \frac{f_{f2}}{f_{i2}} \right] \qquad \text{(Eq. #7C)} \]
Equation #7C is the momentum of the system after all of its energy bounces once off of the containment walls; it shows a net flow of energy in one direction. Equation #7C is the momentum of a moving system. The reader may desire to analyze the system after successive bounces of its energy; that analysis is quite involved and unnecessary. Momentum is always conserved: given that no external force is applied to the system after the first bounce of its energy, its momentum will remain constant.
The relativistic Doppler shift is given by Equation #8C.
\[ \frac{f_f}{f_i} = \frac{\sqrt{1 - v^2/c^2}}{1 \pm v/c} \qquad \text{(Eq. #8C)} \]
\(v\) = velocity with respect to the observer ; \(c\) = light speed ; \(f_f/f_i\) = frequency ratio ; the sign depends on the direction of motion
Substituting Equation #8C into Equation #7C yields Equation #9C. Substituting mass for energy, \(M = E/c^2\), the result, Equation #14C, is the relativistic momentum of moving matter. This first analysis graphically demonstrates that inertial mass is produced by a containment force at the surface of matter. A fundamental change in the frame of reference is produced by the force of containment. This containment force converts energy, which can only travel at light speed, into mass, which can travel at any speed less than light speed. [8]
Note: A version of this analysis has been published in this author's book Elementary Antigravity, Vantage Press, 1989, ISBN 0-533-08334-6.
According to existing theory, the matter wave emerges from the Fourier addition of component waves. This method requires an infinite number of component waves, and natural infinities do not exist within a finite universe. Moreover, the potential and kinetic components of a wave retain their phase during a Fourier localization; the aligned-phase condition is a property of a traveling wave. The Fourier process cannot pin a field or stop a traveling wave.
Texts in quantum physics commonly employ the Euler formula in their analysis. The late Richard Feynman said, "The Euler formula is the most remarkable formula in mathematics. This is our jewel." The Euler formula is given below:
\[ e^{i\theta} = \cos\theta + i \sin\theta \]
The Euler formula describes the simple harmonic motion of a standing wave. The cosine component represents the potential energy of a standing wave; the sine component represents its kinetic energy. The kinetic component is displaced by 90 degrees and has an \(i\) associated with it. The localization of a traveling wave through a Fourier addition of component waves is in error; to employ this method of localization and then describe the standing wave with the Euler formula is inconsistent. This author corrected this error through the introduction of restraining forces. The discontinuity produced at the elastic limit of space restrains the matter wave, and the potential and kinetic components of the restrained wave are displaced by 90 degrees. A mass bouncing on the end of a spring is a good example of this type of harmonic motion: at the end of its travel the mass has no motion (kinetic energy = zero) and the spring is drawn up tight (potential energy = maximum); one quarter of the way into the cycle the spring is relaxed and the mass is moving at its highest velocity (kinetic energy = maximum). A similar harmonic motion is exhibited by the force fields: the energy of a force field oscillates between its static and magnetic components.
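The trade-off described here is easy to exhibit numerically. A minimal numpy sketch with unit amplitudes (illustrative only): the cos-squared and sin-squared components exchange energy while their sum stays constant.

```python
# Potential/kinetic energy exchange in simple harmonic motion.
import numpy as np

theta = np.linspace(0, 2*np.pi, 9)
pe = np.cos(theta)**2        # potential energy ~ cos^2 (spring drawn tight)
ke = np.sin(theta)**2        # kinetic energy  ~ sin^2 (mass at max speed)
print(np.allclose(pe + ke, 1.0))   # True: total energy is conserved
```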
Mass energy (\(E_m\)) is a standing wave, represented on the \(-j\) axis of a complex plane. The phase of a standing wave is 90 degrees. All standing waves are localized by restraining forces.
A traveling wave has its kinetic and potential components aligned in phase. An ocean wave is a good example of this type of harmonic motion. The wave’s height ( potential energy ) progresses with the kinetic energy of the wave.
The energy \(E\) contained by a wave carrying momentum \(P\) is expressed below.
\[ E = Pc \]
The traveling wave expresses itself through its relativistic momentum \(P\).
\[ P = \frac{Mv}{\sqrt{1 - v^2/c^2}} \]
Substituting yields the amount of energy that is in motion, \(E_q\). Energy flows are represented on the X axis of a complex plane.
The vector sum of the standing (\(E_m\)) and traveling (\(E_q\)) components equals the relativistic energy (\(E_r\)) of moving matter. The relativistic energy is represented by the length of the hypotenuse on a complex plane.
The ratio of standing energy to relativistic energy, \(E_m / E_r\), reduces to \(\sqrt{1 - v^2/c^2}\). This function expresses the properties of special relativity. The arcsine of this ratio is the phase:
\[ \beta = \arcsin \sqrt{1 - v^2/c^2} \]
The phase \(\beta\) expresses the angular separation of the potential and kinetic energy of matter. The physical length of a standing wave is determined by the spatial displacement of its potential and kinetic energy. This displacement varies directly with the phase \(\beta\), and the phase \(\beta\) varies inversely with the group velocity of the wave. This effect produces the length contraction associated with special relativity.
Time is represented on the Z (out of the plane) axis of a complex diagram. The rotation of a vector around the X axis into the Z axis represents the change in potential energy with respect to time. The rotation of a vector around the Y axis into the Z axis represents a change in potential energy with respect to position. Relativistic energy is reflected on both axes: the loss in time by the relativistic component \(E_r\) is compensated for by a gain in position.
The phase \(\beta\) of a wave expresses the displacement of its potential and kinetic energy. When placed on a complex diagram, the phase directly determines the relativistic momentum, mass, time, and length. These effects reconcile special relativity and quantum physics.
The analysis reveals information not provided by special relativity. The ratio of traveling energy to relativistic energy, \(E_q / E_r\), reduces to \(v/c\). The simplicity of this ratio suggests that it represents a fundamental property of matter. In an electrical transmission line this ratio is known as the power factor, the ratio of the flowing energy to the total energy. The construct of special relativity may be derived a priori from the premise that the group velocity of the matter wave is V and its phase velocity is c. The difference between these two velocities is produced by reflections, and reflections result from restraining forces. The same principles apply to all waves in harmonic motion.
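The right-triangle picture can be checked with numbers. A minimal sketch, assuming an electron at \(v = 0.6c\) (an arbitrary example speed):

```python
# Em (standing), Eq (traveling), Er (relativistic) form a right triangle.
import math

c = 3.0e8
M = 9.109e-31                 # electron rest mass, kg (example)
v = 0.6*c                     # assumed speed

Em = M*c**2                                # standing (rest) energy
Er = M*c**2/math.sqrt(1 - (v/c)**2)        # relativistic energy
Eq = M*v*c/math.sqrt(1 - (v/c)**2)         # traveling component, P*c

print(math.isclose(Em**2 + Eq**2, Er**2))            # True: vector sum
print(math.isclose(Em/Er, math.sqrt(1 - (v/c)**2)))  # True
print(math.isclose(Eq/Er, v/c))                      # True: "power factor"
```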
This model requires a restrained matter wave function. What is the nature of this restraining force? Are forces beyond the four known forces required? This author will show that no additional forces are required: the restraining force is produced through the action of the known forces. The nature of this restraining force will be presented in Chapter 10. An analysis of this restraining force, in Chapter 12, reveals the path of the quantum transition.
This model produces a second result, one containing negative mass with a lagging phase angle, located along the \(+j\) axis. Is this second result that of antimatter? Capacitors are used to cancel reactance along a power line: on the complex plane, the capacitance at vector \(-j\) cancels the reactance at vector \(+j\), leaving only the X-axis traveling wave component. Is this what occurs when matter meets antimatter? This author believes that matter's leading power factor (kinetic energy leads potential energy) is an effect of an elastic anomaly associated with the restraint of the wave function. These ideas will be developed in Chapters 10 and 11.
Einstein's principle of equivalence states that gravitational and inertial mass are always in proportion. The photon has no rest mass and a fixed inertial mass; what is its gravitational mass? General relativity states that gravity warps space, and photons take the quickest path through this warped space. The path of a photon is therefore affected by gravity, and the effect has been measured: the light of a star was bent as it passed near the sun, altering the light's momentum. The principle of the conservation of momentum requires that the sun experience an equivalent change in momentum. The bending light must generate a gravitational field that pulls back on the sun. Bending light has a gravitational mass.
Photons from the extremes of the universe have traveled side by side for billions of years without agglomerating. The slightest agglomeration would result in a decrease in entropy, in violation of the laws of thermodynamics. Photons traveling in parallel paths exert no gravitational influence upon each other.
Matter gives up energy during the process of photon ejection. The principle of the conservation of energy requires that the negative gravitational potential and the positive energy of the universe remain in balance. The ejected photon must carry a gravitational mass that is equivalent to the gravitational mass lost by the particle.
These conditions are satisfied with a variable gravitational mass. The gravitational mass of the photon varies directly with the force it experiences. This force is expressed as a changing momentum (dp/dt).
Hubble's constant expresses the expansion of space in units of 1/time. Ordinarily, the effects resulting from the Hubble expansion are quite tiny; at great distances and at high velocities, however, significant effects do take place. As a photon travels through space at the high velocity of light, it red shifts. This red shift may be considered the result of an applied force, produced by the acceleration given in Equation #1D.
\[ \text{Acceleration} = Hc \qquad \text{(Eq. #1D)} \]
\(H\) = Hubble's constant, given in units of 1/sec ; \(c\) = light speed
To demonstrate the gravitational relationships of a photon, the principle of the conservation of momentum will be employed. According to this principle, exploding bodies conserve their center of gravitational mass. Mass M ejects a photon while over the pivot I; the gravitational center of mass must remain balanced over the pivot point I. Mass M is propelled to the left at velocity \(v_1\) and the photon travels to the right at velocity \(c\). The product of the velocity and the time is the displacement S.
Setting the products of the displacements S and the gravitational masses \(M_g\) equal yields Equation #2D.
\[ M_{g1} S_1 = M_{g2} S_2 \qquad \text{(Eq. #2D)} \]
The general formula of gravitational induction [3, 4] (this equation was derived in Chapter 6) is substituted for the gravitational mass of the photon on the right side of the equation below.
\[ \frac{GMS}{r^2} = \frac{G}{c^2}\,(\text{force})\,\frac{S}{r^2} \]
\[ \frac{GMS}{r^2} = \frac{G}{c^2}\,\frac{dp}{dt}\,\frac{S}{r^2} \]
\[ GM(v_1 t) = \frac{G}{c^2}\,\frac{dp}{dt}\,(ct)\,r \qquad \text{(Eq. #5D)} \]
Substituting \(dp/dt = Ma = MHc = (E/c^2)Hc = EH/c\) (Eq. #6D) gives:
\[ G(Mv_1)t = \frac{G}{c^2}\,\frac{E}{c}\,Hctr \]
Substituting the momentum \(p\) for \(Mv_1\) and for \(E/c\):
\[ Gp_1 t = \frac{G}{c^2}\,p_2\,Hctr \]
Setting the momenta equal, \(p_1 = p_2\), yields:
\[ c = Hr \qquad \text{(Eq. #11D)} \]
The result, Equation #11D, shows that the gravitational mass of a photon is generated by the force it experiences as it accelerates through Hubble's constant. The result is true only under the condition that the speed of light equals the product of Hubble's constant and the radius of the universe; this qualification is essentially consistent with the measured cosmological values. The radius and expansion rate are independent properties of this universe; the speed of light and the gravitational constant G are dependent upon these properties. It will be shown in upcoming chapters that the values of the natural constants depend upon the mass, radius, and expansion rate of the universe. [12] The gravitational radius of the photon is the radius of the universe. On the largest scales, the gravitational effect of photonic energy is equal to the gravitational effect of mass energy. The result demonstrates that the negative gravitational potential and the positive energy of the universe remain in balance.
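Equation #11D can be compared against present-day values. A minimal sketch, assuming a Hubble constant of about 70 km/s/Mpc (an assumption; the text does not fix a value):

```python
# Order-of-magnitude check of c = H*r.
c = 3.0e8                    # light speed, m/s
H = 70e3 / 3.086e22          # Hubble constant, 1/s (assumed 70 km/s/Mpc)

r = c / H                    # radius implied by Eq #11D
print(f"r = c/H = {r:.2e} m (~{r/9.461e15:.1e} light-years)")
```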
# Schrödinger’s Wave Equation Revisited
Schrödinger's wave equation is a basic tenet of low-energy physics. It embodies all of chemistry and most of physics. The equation is considered to be fundamental and not derivable from more basic principles. The equation will first be produced (not derived) using an accepted approach; several assumptions are fundamental to this approach, and the flaws within these assumptions will be exposed.
This author will then derive Schrödinger's wave equation using an alternate approach. It will be shown that Schrödinger's wave equation can be fundamentally derived from the premise that the phase velocity of the matter wave is luminal and that restraining forces confine the luminal disturbances. The new approach is fundamental and yields known results.
# The accepted approach
The wave equation describes a classical relationship between velocity, time, and position. The velocity of the wave packet is v. It is an error to assume that the natural velocity of a matter wave is velocity v: disturbances within force fields propagate at light speed c, and the matter wave \(\Psi\), like all fields, propagates at velocity c. Equation #3 below expresses the wave equation as a function of position and time.
The exponential form of the sine function, \(e^{j\omega t}\), is introduced. This function describes a sine wave; the j in the exponent states that the wave contains real and imaginary components. The potential energy of the wave is represented by the real component and the kinetic energy by the imaginary component. In a standing wave these components are 90 degrees out of phase. Standing waves are produced by reflections, but the required reflections are not incorporated within current models: current models include \(e^{j\omega t}\) ad hoc.
The de Broglie relationship is introduced; it is also incorporated ad hoc. The introduction of the de Broglie wave was questioned by Professors Einstein and Langevin. H. Ziegler pointed out in a 1909 discussion with Einstein, Planck, and Stark that relativity would be a natural result if all of the most basic components of mass moved at the constant speed of light. [13] This author's work is based on the premise that disturbances in the force fields propagate at light speed. This analysis has shown that the de Broglie wavelength is a beat note generated by a reflection of matter's Compton wave; disturbances in the Compton wave propagate at luminal velocity.
The result below is the time-independent Schrödinger equation. It states that the total energy of the system equals the sum of its kinetic and potential energy. Energy is a scalar quantity, and scalar quantities have no direction. This type of equation is known as a Hamiltonian, and the Hamiltonian ignores restraining forces. The unrestrained matter wave propagates at velocity v, and it is an error to assume that an unrestrained wave propagates without dispersion.
# A new approach
The phase velocity of the matter wave is c. The matter wave is pinned into the structure of matter by restraining forces. The resultant force has a magnitude and a direction; the solution is Newtonian. The superposition of the Compton wave and its Doppler-shifted reflection is the de Broglie wavelength of matter.
The electron's natural vibrational frequency was determined in Chapter 10 of this text. This frequency is known as the Compton frequency of the electron.
Refer to Equation A1. Substituting \(\nabla^2\) for the acceleration divided by light speed squared embodies the idea that disturbances in the matter wave propagate at luminal velocities.
The time-independent Schrödinger equation has been derived from a simple technique. It has been demonstrated that the matter wave contains the forces of nature; disturbances within these fields propagate at luminal velocities, and restraining forces prevent dispersion. This influence extends through the atomic energy levels.
The movement of ordinary matter does not produce a net magnetic field. The movement of charged matter does produce a net magnetic field. Charged matter is produced by the separation of positive and negative charges. The derivation used to develop Newton’s formula of gravity (Equation #3) shows that matter may harbor positive and negative near field gravitational components.
The wavefunctions of superconductors are collimated; the collimated wave functions act in unison like a single macroscopic elementary particle. The near-field gravitational components of a superconductor are macroscopic in size. The rotation of these local gravitational fields is responsible for the gravitational anomaly observed at Tampere University. [9]
The forces of nature are pinned into the structure of matter by forces. These forces generate the gravitational mass of matter, determine matter's relativistic properties, and determine matter's dynamic properties. The nature of the bundling force will be presented in Chapters 10 and 11; understanding it reveals the path of the quantum transition.
– 1. French aristocrat Louis de Broglie described the electron's wavelength in his Ph.D. thesis in 1924. De Broglie's hypothesis was verified by C. J. Davisson and L. H. Germer at Bell Labs.
– 2. Gilbert N. Lewis demonstrated the relationship between external radiation pressure and momentum. Gilbert N. Lewis, Philosophical Magazine, Nov. 1908.
– 3. A. Einstein, Ann. d. Physik 49, 1916.
– 4. Einstein's principle of equivalence was experimentally confirmed by R. v. Eötvös in the 1920s: R. v. Eötvös, D. Pekar, and Feteke, Ann. d. Phys., 1922. Roll, Krotkov, and Dicke followed up on the Eötvös experiment and confirmed the principle of equivalence to an accuracy of one part in 10^11 in the 1960s: R. G. Roll, R. Krotkov & R. H. Dicke, Ann. of Physics 26, 1964.
– 6. Jennison, R. C., "What is an Electron?", Wireless World, June 1979, p. 43. "Jennison became drawn to this model after having experimentally demonstrated the previously unestablished fact that a trapped electromagnetic standing wave has rest mass and inertia." Jennison & Drinkwater, Journal of Physics A, vol. 10, pp. 167-179, 1977; vol. 13, pp. 2247-2250, 1980; vol. 16, pp. 3635-3638, 1983.
– 7. B. Haisch & A. Rueda of The California Institute for Physics and Astrophysics have also developed the de Broglie wave as a beat note.
– 8. Znidarsic, F., "The Constants of the Motion", The Journal of New Energy, Vol. 5, No. 2, September 2000.
– 9. "A Possibility of Gravitational Force Shielding by Bulk YBa2Cu3O7-x", E. Podkletnov and R. Nieminen, Physica C, vol. 203 (1992), pp. 441-444.
– 10. Puthoff has shown that the gravitational field results from the cancellation of waves; this author's model is an extension of that idea. H. E. Puthoff, "Ground State Hydrogen as a Zero-Point-Fluctuation-Determined State", Physical Review D, vol. 35, number 3260, 1987. H. E. Puthoff, "Gravity as a Zero-Point Fluctuation Force", Physical Review A, vol. 39, no. 5, March 1989.
– 11. Ezzat G. Bakhoum, "Fundamental Disagreement of Wave Mechanics with Relativity", Physics Essays, Vol. 15, No. 1, 2002.
– 12. John D. Barrow and John K. Webb, "Inconstant Constants", Scientific American, June 2005.
– 13. Albert Einstein, "Development of our Conception of the Nature and Constitution of Radiation", Physikalische Zeitschrift 22, 1909.
36d94eea03af42a0 |
Quantum mechanics of a particle
1. Mar 4, 2005 #1
I have a couple things I don't understand:
1. Why is it that the more you conifne a particle, the higher its energy is?
2. Why is it that the more nodes there are in the wavefunction the higher the energy is?
3. What causes the energy of a particle to be quantized?
2. jcsd
3. Mar 4, 2005 #2
Science Advisor
Homework Helper
1. What do you mean by "conifne a particle"...? If you mean "confine a particle", I have to ask you what you mean by this...?
2. What are nodes of a wave function...? I've never heard that a function would have nodes...
3. Principles of QM... essentially the second.
4. Mar 4, 2005 #3
Are you referring to quark confinement? Then your answer is asymptotic freedom...
Are you referring to nodal planes of orbitals??? Then your answer is the magnetic quantum number... (at least to some extent)
Mother Nature
5. Mar 4, 2005 #4
It's not true that the more you confine something the more energy it has -- what is true is that the more you confine something, the more the variance in its momentum will increase.
As for the node thing; I think sarabellum means that in a "particle in a box", the number of nodes refers to the number of points where the wavefunction [itex]\psi(x)=0[/itex]. There is no reason as to why that is -- it's just how the solutions to Schrodinger's equation work out.
And for the third point, as marlon says, that's just how nature is. If energy wasn't quantised, one could ask "Why is energy continuous?" and so on.
6. Mar 4, 2005 #5
This is not entirely correct. Just look at quarks...
Besides, what exactly do you mean by this and how does it apply to QM?
7. Mar 4, 2005 #6
sarabellum probably means, if you make the well smaller, the energies become higher.
Click on this link http://www.quantum-physics.polytechnique.fr/en/pages/p0203.html, and change the width of the well, by dragging the left bottom corner of the well.
sarabellum probably means for example the harmonic oscillator eigenstates.
The higher the energy, the more zeros (sarabellum calls them nodes) the wavefunction has http://encyclopedia.laborlawtalk.com/Quantum_harmonic_oscillator [Broken]
I think that's just the result of QM calculations.
Hmm....as marlon said, it's nature.
Last edited by a moderator: May 1, 2017
8. Mar 4, 2005 #7
It's the celebrated Uncertainty Principle.
9. Mar 4, 2005 #8
I think I know what you mean, but truth is, I'm not sure exactly how to give you a good answer.
As some mentioned before, nodes are the points where the wavefunction equals 0 (like the nodes of a vibrating string).
The fact that the more nodes there are, the higher the energy, actually happens with any kind of wave. Again, think of a vibrating string: the stronger you make it vibrate (i.e. the more energy you give it), the bigger the number of nodes.
The explanation of why that is so goes pretty much like this: the higher the number of nodes per unit distance, the more oscillations there are, which means the bigger the wavenumber, and thus the higher the energy (remember that in the case of the wavefunction, the momentum of the particle is proportional to the wavenumber).
In fact, in more than one QM problem (I'm not sure if there's a general rule, maybe not), the number of nodes is closely related to the quantum number you use for the energy.
More or less, this comes as a result of the Schrödinger equation: energy is quantized whenever there are bound states (i.e. the potential confines the wavefunction to a certain place). When that happens, you have to force the wavefunction to be zero at certain points in space (at infinity, or at the walls of an infinite square well), and those boundary conditions can only be met by certain values of the wavenumber (and thus, the energy).
There are good discussions of this point in some standard quantum mechanics textbooks, like Eisberg & Resnick and Cohen-Tannoudji.
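A small numeric sketch, not part of the original thread, of the infinite square well the posters are describing; the constants and box widths are illustrative example values.

```python
# Infinite square well: E_n = n^2 h^2 / (8 m L^2). Halving L quadruples
# every level, and the n-th eigenstate has n-1 interior nodes.
h = 6.626e-34      # Planck constant, J s
m = 9.109e-31      # electron mass, kg

def E(n, L):
    """Energy of level n (n = 1, 2, ...) for a box of width L meters."""
    return n**2 * h**2 / (8 * m * L**2)

for L in (1e-9, 0.5e-9):
    print(f"L = {L:.1e} m:", ["%.2e J" % E(n, L) for n in (1, 2, 3)])
```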
10. Mar 5, 2005 #9
Thanks for your help everyone!
11. Mar 5, 2005 #10
The kinetic energy of a wave function comes from:
[tex] \frac{-\hbar^2}{2m} \frac{d^2 \Psi}{dx^2} [/tex]
so it's related to the second derivative of the wavefunction. If the wave function wiggles faster, then it has a larger second derivative and will have more KE.
If you're looking at the square well, when you make the well size smaller, the energies get larger. If you look at the ground state wavefunction, making the well smaller, with the restriction that the wavefunction has to be zero at the ends and has to stay normalized, forces it to go to zero faster as it approaches the edges of the well. This makes its derivatives larger, and hence the kinetic energy larger.
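To connect the two points above, here is a rough numeric check, again not from the thread: integrating psi times psi'' for the n = 1 and n = 2 box states shows the faster-wiggling state carrying more kinetic energy.

```python
# <KE> = -(hbar^2/2m) * integral of psi * psi'' over the box.
import numpy as np

hbar, m, L = 1.055e-34, 9.109e-31, 1e-9
x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]

for n in (1, 2):
    psi = np.sqrt(2/L) * np.sin(n*np.pi*x/L)        # normalized eigenstate
    d2psi = np.gradient(np.gradient(psi, x), x)     # numerical psi''
    ke = -(hbar**2/(2*m)) * np.sum(psi*d2psi) * dx
    exact = (n*np.pi*hbar)**2 / (2*m*L**2)
    print(f"n={n}: KE ~ {ke:.3e} J (exact {exact:.3e} J)")
```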
febe20141c39abc1 |
Computational Physics
New submissions
[ total of 16 entries: 1-16 ]
New submissions for Tue, 26 Sep 17
[1] arXiv:1709.08223 [pdf, other]
Title: Numerical solution of stochastic master equations using stochastic interacting wave functions
Subjects: Computational Physics (physics.comp-ph); Numerical Analysis (math.NA); Probability (math.PR); Quantum Physics (quant-ph)
We develop a new approach for solving stochastic master equations with initial mixed quantum state. Thus, we deal with the numerical simulation of, for instance, continuous weak measurements on quantum systems. We focus on finite dimensional quantum state spaces. First, we obtain that the solution of the jump-diffusion stochastic master equation is represented by a mixture of pure states satisfying a system of stochastic differential equations of Schr\"odinger type. Then, we design three exponential schemes for these coupled stochastic Schr\"odinger equations, which are driven by Brownian motions and jump processes. The good performance of the new numerical integrators is illustrated by simulating the continuous monitoring of two open quantum systems formed by a quantized electromagnetic field interacting with a two-level system, under the effect of the environment. Hence, we have constructed efficient numerical methods for the stochastic master equations based on the simulation of quantum trajectories that describe the random evolution of interacting wave functions.
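The "quantum trajectory" idea the abstract refers to can be illustrated with a generic scheme. The sketch below is not the paper's exponential integrator: it is a plain Euler-Maruyama step for a diffusive stochastic Schrödinger equation with a toy two-level Hamiltonian and Lindblad operator, renormalizing the state at each step (a simplified variant, for illustration only).

```python
# Generic Euler-Maruyama sketch of a diffusive stochastic Schrodinger
# equation: dpsi = (-iH - L^dag L / 2) psi dt + L psi dW, renormalized.
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1.0, 0.2], [0.2, -1.0]], dtype=complex)   # toy Hamiltonian
L = np.array([[0.0, 0.5], [0.0, 0.0]], dtype=complex)    # toy Lindblad op

psi = np.array([1.0, 0.0], dtype=complex)
dt, steps = 1e-3, 5000
drift = -1j*H - 0.5*(L.conj().T @ L)

for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt))        # Brownian increment
    psi = psi + drift @ psi * dt + L @ psi * dW
    psi /= np.linalg.norm(psi)               # keep the state normalized

print("final |psi|^2 components:", np.abs(psi)**2)
```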
Cross-lists for Tue, 26 Sep 17
[2] arXiv:1709.08007 (cross-list from physics.acc-ph) [pdf, ps, other]
Title: Comment on `Radiation from multi-GeV electrons and positrons in periodically bent silicon crystal'
Authors: Andriy Kostyuk
Comments: 8 pages, 1 figure
Subjects: Accelerator Physics (physics.acc-ph); Atomic Physics (physics.atom-ph); Computational Physics (physics.comp-ph)
Simulations of electron and positron channelling in a crystalline undulator with a small amplitude and a short period (A Kostyuk, Phys. Rev. Lett. 110 (2013) 115503) have been repeated by V G Bezchastnov, A V Korol and A V Solov'yov (J. Phys. B: At. Mol. Opt. Phys. 47 (2014) 195401) for exactly the same parameter set but using a different model of projectile scattering by crystal atoms implemented in another computer code. The authors of the latter paper claim that their approach, in contrast to the one of the former paper, allows them to observe short-period undulator oscillations in plots of simulated trajectories. In fact, both approaches can be shown to give the same amplitude of the undulator oscillations. In both cases, the undulator oscillations become visible on trajectory segments having small amplitude of channelling oscillations. The claim of Bezchastnov et al. that their model is "more accurate" is unfounded. In fact, there are indications of severe mistakes in their calculations.
[3] arXiv:1709.08059 (cross-list from physics.chem-ph) [pdf, other]
Title: Hybrid grid/basis set discretizations of the Schrödinger equation
Authors: Steven R. White
Comments: 16 pages, 9 figures
We present a new kind of basis function for discretizing the Schr\"odinger equation in electronic structure calculations, called a gausslet, which has wavelet-like features but is composed of a sum of Gaussians. Gausslets are placed on a grid and combine advantages of both grid and basis set approaches. They are orthogonal, infinitely smooth, symmetric, polynomially complete, and with a high degree of locality. Because they are formed from Gaussians, they are easily combined with traditional atom-centered Gaussian bases. We also introduce diagonal approximations which dramatically reduce the computational scaling of two-electron Coulomb terms in the Hamiltonian.
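As a rough illustration of building smooth, orthonormal, grid-centered functions out of Gaussians, the toy below places Gaussians on a uniform grid and Löwdin-orthogonalizes them. This is only in the spirit of the abstract and is not White's actual gausslet construction; the width and grid spacing are arbitrary assumptions.

```python
import numpy as np

# Toy construction: grid-centered Gaussians, symmetrically (Lowdin)
# orthogonalized into orthonormal combinations of Gaussians.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
centers = np.arange(-6, 6.5, 1.0)       # grid of Gaussian centers (assumed spacing 1)
width = 0.6                             # Gaussian width (assumed)

# Raw (non-orthogonal) Gaussian basis sampled on the fine grid, normalized.
phi = np.exp(-((x[None, :] - centers[:, None]) / width) ** 2 / 2.0)
phi /= np.sqrt(np.sum(phi**2, axis=1, keepdims=True) * dx)

# Overlap matrix and Lowdin transformation S^{-1/2}.
S = phi @ phi.T * dx
w, U = np.linalg.eigh(S)
S_inv_sqrt = U @ np.diag(w**-0.5) @ U.T
chi = S_inv_sqrt @ phi                  # orthonormal sums of Gaussians

# Verify orthonormality of the transformed functions.
print(np.allclose(chi @ chi.T * dx, np.eye(len(centers)), atol=1e-8))
```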
[4] arXiv:1709.08070 (cross-list from math.NA) [pdf, other]
Title: An implicit boundary integral method for computing electric potential of macromolecules in solvent
Comments: 28 pages
Subjects: Numerical Analysis (math.NA); Biological Physics (physics.bio-ph); Computational Physics (physics.comp-ph); Biomolecules (q-bio.BM)
A numerical method using implicit surface representations is proposed to solve the linearized Poisson-Boltzmann equations that arise in mathematical models for the electrostatics of molecules in solvent. The proposed method uses an implicit boundary integral formulation to derive a linear system defined on Cartesian nodes in a narrow band surrounding the closed surface that separates the molecule from the solvent. The required implicit surfaces are constructed from the given atomic description of the molecules by a sequence of standard level set algorithms. A fast multipole method is applied to accelerate the solution of the linear system. A few numerical studies involving standard test cases are presented and compared to other existing results.
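For orientation, the sketch below solves the same underlying equation, the linearized Poisson-Boltzmann (Debye-Hückel) equation, in a deliberately simple 1D finite-difference setting rather than with the paper's implicit boundary integral formulation on molecular surfaces. The inverse Debye length, domain size and boundary values are assumptions chosen for illustration.

```python
import numpy as np

# 1D linearized Poisson-Boltzmann: phi'' = kappa^2 * phi, fixed surface potential.
kappa = 1.0                    # inverse Debye length (assumed units)
L, n = 10.0, 401
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Dense tridiagonal system; interior rows discretize phi'' - kappa^2 phi = 0.
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = 1.0;  b[0] = 1.0     # phi(0) = 1 at the charged surface
A[-1, -1] = 1.0; b[-1] = 0.0   # phi(L) = 0 far away in the solvent
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
    A[i, i] = -2.0 / h**2 - kappa**2

phi = np.linalg.solve(A, b)
idx = np.argmin(np.abs(x - 2.0))
print("numeric phi(2) =", phi[idx], " vs analytic exp(-kappa*2) =", np.exp(-2.0))
```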
[5] arXiv:1709.08102 (cross-list from cs.ET) [pdf, ps, other]
Title: Oscillator-based Ising Machine
Subjects: Emerging Technologies (cs.ET); Computational Physics (physics.comp-ph)
Many combinatorial optimization problems can be mapped to finding the ground states of the corresponding Ising Hamiltonians. The physical systems that can solve optimization problems in this way, namely Ising machines, have been attracting more and more attention recently. Our work shows that Ising machines can be realized using almost any nonlinear self-sustaining oscillators with logic values encoded in their phases. Many types of such oscillators are readily available for large-scale integration, with potential for high-speed and low-power operation. In this paper, we describe the operation and mechanism of oscillator-based Ising machines. The feasibility of our scheme is demonstrated through several examples in simulation and hardware, among which a simulation study reports average solutions exceeding those from state-of-the-art Ising machines on a benchmark combinatorial optimization problem of size 2000.
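The toy simulation below conveys the basic idea in the abstract, without reproducing the paper's scheme: coupled phase oscillators with a slowly ramped second-harmonic locking term settle toward phases of 0 or π, and s_i = sign(cos φ_i) then approximates a low-energy Ising state. The test graph, coupling constants and integration parameters are all assumptions.

```python
import numpy as np

# Antiferromagnetic ring of 6 spins: the Ising ground states are the two
# alternating patterns with energy -6 (H = -sum_{i<j} J_ij s_i s_j, J = -1 on edges).
rng = np.random.default_rng(1)
n = 6
J = np.zeros((n, n))
for i in range(n):
    J[i, (i + 1) % n] = J[(i + 1) % n, i] = -1.0

K, Ks, dt, steps = 1.0, 2.0, 0.01, 4000
phi = rng.uniform(0, 2 * np.pi, n)
for step in range(steps):
    Ks_t = Ks * step / steps                          # slowly ramp the binarizing term
    coupling = (J * np.sin(phi[:, None] - phi[None, :])).sum(axis=1)
    phi += dt * (-K * coupling - Ks_t * np.sin(2 * phi))

s = np.sign(np.cos(phi))
energy = -0.5 * s @ J @ s
# For most random starts this relaxes to an alternating pattern (energy -6);
# occasionally it locks into a higher-energy winding state, the usual caveat
# for machines of this kind.
print("spins:", s.astype(int), " Ising energy:", energy)
```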
[6] arXiv:1709.08288 (cross-list from cond-mat.mtrl-sci) [pdf, other]
Title: Orbital-Free Density-Functional Theory Simulations of Displacement Cascade in Aluminum
Authors: Ruizhi Qiu
Comments: 5 pages, 5 figures
Here, we report orbital-free density-functional theory (OF DFT) molecular dynamics simulations of the displacement cascade in aluminum; the electronic effect is our main concern. The displacement threshold energies are calculated using OF DFT and classical molecular dynamics (MD), and the comparison reveals the role of the charge bridge. Compared to the MD simulation, the displacement spike from OF DFT has a lower peak and a shorter duration, which is attributed to the effect of electronic damping. The charge density profiles clearly display depleted zones and vacancy and interstitial clusters, and the energy exchange between ions and electrons is found to be dominated by the kinetic energies.
[7] arXiv:1709.08451 (cross-list from physics.flu-dyn) [pdf, other]
Title: The Mechanism behind Erosive Bursts in Porous Media
Comments: 7 pages, 8 figures
Journal-ref: Phys. Rev. Lett. 119, 124501, Published 18 September 2017
Erosion and deposition during flow through porous media can lead to large erosive bursts that manifest as jumps in permeability and pressure loss. Here we reveal that the cause of these bursts is the re-opening of clogged pores when the pressure difference between two opposite sides of the pore surpasses a certain threshold. We perform numerical simulations of flow through porous media and compare our predictions to experimental results, recovering with excellent agreement the shape and power-law distribution of pressure loss jumps, and the behavior of the permeability jumps as a function of particle concentration. Furthermore, we find that erosive bursts only occur for pressure gradient thresholds within the range of two critical values, independently of how the flow is driven. Our findings provide a better understanding of sudden sand production in oil wells and breakthrough in filtration.
[8] arXiv:1709.08452 (cross-list from cond-mat.stat-mech) [pdf, other]
Title: Grand Canonical Adaptive Resolution Simulation for Molecules with Electrons: A Theoretical Framework based on Physical Consistency
Authors: Luigi Delle Site
Comments: Computer Physics Communications (2017), in press
A theoretical scheme for the treatment of an open molecular system with electrons and nuclei is proposed. The idea is based on the Grand Canonical description of a quantum region embedded in a classical reservoir of molecules. Electronic properties of the quantum region are calculated at constant electronic chemical potential equal to that of the corresponding (large) bulk system treated at the full quantum level. Instead, the exchange of molecules between the quantum region and the classical environment occurs at the chemical potential of the macroscopic thermodynamic conditions. The Grand Canonical Adaptive Resolution Scheme is proposed for the treatment of the classical environment; such an approach can treat the exchange of molecules according to the first principles of statistical mechanics and thermodynamics. The overall scheme is built on the basis of physical consistency, with the corresponding definition of numerical criteria to control the approximations implied by the coupling. Given the wide range of expertise required, this work is intended to provide guiding principles for the construction of a well-founded computational protocol for actual multiscale simulations from the electronic to the mesoscopic scale.
Replacements for Tue, 26 Sep 17
[9] arXiv:1506.05094 (replaced) [pdf, ps, other]
Title: RUMD: A general purpose molecular dynamics package optimized to utilize GPU hardware down to a few thousand particles
Comments: 22 pages formatted for SciPost Physics
Subjects: Computational Physics (physics.comp-ph)
[10] arXiv:1704.03799 (replaced) [pdf]
Title: Alignment Theory of Parallel-beam CT Image Reconstruction for Elastic-type Objects using Virtual Focusing Method
Comments: 30 pages, 11 figures
Subjects: Computational Physics (physics.comp-ph)
[11] arXiv:1702.01469 (replaced) [pdf, other]
Title: Spin-Diffusions and Diffusive Molecular Dynamics
Comments: 26 Pages, corrected a typo from the previous version
[12] arXiv:1703.09622 (replaced) [pdf, other]
Title: Stepsize-adaptive integrators for dissipative solitons in cubic-quintic complex Ginzburg-Landau equations
Authors: X. Ding, S. H. Kang
Comments: 25 pages, 12 figures, 9 tables
Subjects: Computational Physics (physics.comp-ph)
[13] arXiv:1702.01664 (replaced) [pdf, other]
Title: Coalescence of immersed droplets on a substrate
Comments: 9 pages, 9 figures
Subjects: Fluid Dynamics (physics.flu-dyn); Soft Condensed Matter (cond-mat.soft); Computational Physics (physics.comp-ph)
[14] arXiv:1707.00848 (replaced) [pdf, ps, other]
Title: On the applicability of the Kerker preconditioning scheme to the self-consistent density functional theory calculations of inhomogeneous systems
[15] arXiv:1707.03407 (replaced) [pdf, other]
Title: Stable Unitary Integrators for the Numerical Implementation of Continuous Unitary Transformations
Comments: 13 pages, 4 figures, Comments welcome
Journal-ref: Phys. Rev. B. 96 (2017) 115129
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Numerical Analysis (cs.NA); Numerical Analysis (math.NA); Computational Physics (physics.comp-ph)
[16] arXiv:1709.03852 (replaced) [pdf, other]
Title: What Makes a Good Descriptor for Heterogeneous Ice Nucleation on OH-Patterned Surfaces
Comments: main text + SI
Journal-ref: Phys. Rev. B 96 (11), 115441 (2017)
[ total of 16 entries: 1-16 ] |
e176606887ccd394 |
Hyperspherical Approach to Quantal Three-body Theory
Title: Hyperspherical Approach to Quantal Three-body Theory
Publication Type: Thesis
Year of Publication: 2012
Authors: Wang, J
Hyperspherical coordinates provide a systematic way of describing three-body systems. Solving three-body Schrödinger equations in an adiabatic hyperspherical representation is the focus of this thesis. An essentially exact solution can be found numerically by including nonadiabatic couplings, using either a slow variable discretization or a traditional adiabatic method. Two different types of three-body systems are investigated: (1) rovibrational states of the triatomic hydrogen ion H3+ and (2) ultracold collisions of three identical bosons. |
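The adiabatic idea itself can be shown in a few lines: freeze the slow "hyperradius" R, diagonalize the fast-coordinate Hamiltonian on a grid to get adiabatic potential curves U_ν(R), and feed those (plus nonadiabatic couplings) into coupled radial equations. The sketch below uses a made-up model potential, not the actual three-body hyperangular Hamiltonian treated in the thesis.

```python
import numpy as np

ntheta = 200
theta = np.linspace(0.0, np.pi, ntheta)
dth = theta[1] - theta[0]

# Kinetic energy in the fast coordinate by second-order finite differences
# (Dirichlet boundaries implied).
T = (np.diag(np.full(ntheta, 2.0)) - np.diag(np.ones(ntheta - 1), 1)
     - np.diag(np.ones(ntheta - 1), -1)) / (2.0 * dth**2)

def adiabatic_curves(R, nchan=3):
    """Lowest few eigenvalues of the fixed-R (fast-coordinate) Hamiltonian."""
    V = np.diag(-np.cos(2 * theta) / R**2 + 0.5 * (theta - np.pi / 2) ** 2 / R)
    return np.linalg.eigvalsh(T + V)[:nchan]

for R in (1.0, 2.0, 4.0, 8.0):
    print(f"R = {R:4.1f}  U_nu(R) =", np.round(adiabatic_curves(R), 3))
```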
90ef229b82a9d274 | NSF and Stony Brook University: New Nanotechnology to produce sustainable, clean water for developing nations: Video
This technology would enable communities to produce their own water filters using biomass nanofibers, making clean water more accessible and affordable.
The research in this episode was supported by NSF award #1019370, Breakthrough Concepts on Nanofibrous Membranes with Directed Water Channels for Energy-Saving Water Purification.
Watch the Video: New Nanotechnology to Produce Sustainable, Clean Drinking Water for Developing Nations
NEWT (Nano Enabled Water Treatment) Nanoscale solutions to a very large problem
ERCs produce both transformational technology and innovative-minded engineering graduates.
** NEWT is a jointly designated collaboration between Rice University, ASU, UTEP and Yale University
Water, water is everywhere, but we need more drops to drink.
Water has long been a passion for Alvarez, who studies treatment and reuse, remediation strategies for contaminated aquifers and the water footprints of biofuels. His work also covers the environmental implications of using nanotechnology, and the transport — and eventual fate of — toxic chemicals in the environment. As NEWT director, he partners with researchers at Arizona State University (ASU), Yale University and the University of Texas at El Paso.
The consortium set as its first goal the development of modular water treatment systems that can deploy almost anywhere in the world. But Alvarez said the potential to make a significant impact is already expanding, with opportunities to address wastewater treatment at oil and gas drilling sites, nano-infused desalination in urban environments, and improved water treatment through more efficient filtration at existing plants.
Alvarez paused between classes recently to talk about the center’s plans.
Q. Where do you think NEWT’s greatest impact will be in 10 years?
A. It will be in drinking water, providing cleaner water to millions of people who now lack it. I think it’s going to be in developing small, portable units that will not only provide humanitarian water but also emergency response.
A. Briny ground water, for example, could be a source of drinking water in areas experiencing drought. Or in coastal areas. I think we will see more of that. We’ll see more harvesting of storm water, certainly, and for some uses, even greywater.
Those are the kinds of things our technologies will enable, but it’s not just about technology. It’s about the philosophy of changing to more sustainable, integratable water management, where we reuse more water, where we tap water that we thought was of too low quality but, as it turns out, is perfectly fine and safe and more economical for a sole intended use.
Q. In what directions are the initial projects headed?
A. I think the first thing we’re going to have out there is an adsorbent filter being developed by [NEWT deputy director] Paul Westerhoff at ASU. It’s a block of carbon with embedded nanoparticles. These particles adsorb — that is, they grab onto and hold — oxyanion contaminants like nitrate, arsenic and chromate, and effectively remove them from the water supply. [Oxyanions are negatively charged ions that contain oxygen.] It will be part of a drinking-water treatment unit.
Q. Would the technology apply to large water treatment plants?
A. I am sure there will be a lot that can be used by the municipal water treatment community. It’s a more difficult industry to penetrate because it’s very conservative. You have to convince them that a technology is going to save them a lot of money and that they don’t have to change too much of the infrastructure or the configuration of the plant.
We have some very good ideas of things that will fit them. If they’re already using membranes for filtration, for example, our membranes may offer better rejection of contaminants and perhaps less susceptibility to being fouled, so they will last longer without having to be replaced. They won’t clog up as easily. They will not use as much energy.
Q. Why did you pursue hosting this NSF center?
A. The lack of clean water is a major hindrance to human capacity. It goes beyond public health: It’s directly tied to the need for economic development.
That is certainly one important factor in my passion to provide water to many. It’s related to the concept of world affirmation, the idea that the world can be a better place and we can do something about it. Providing clean water is one way to do it.
Once it’s used, disposal of that water becomes a major challenge and a potentially serious source of pollution. So the solution to both scarcity and minimizing impact is to reuse this water. That’s one of the things we’re trying to do: develop systems that are small and easily deployed that can enable industrial wastewater reuse in remote areas.
Q. What can you do with nanoparticles that you couldn’t have done before?
When you exploit these extraordinary size-dependent properties, it allows you to introduce multifunctionality at both the reactor and materials level. This combination of multifunctionality — for example, membranes that have self-cleaning and self-healing properties — with the nanotechnology-enabled ability to selectively remove pollutants allows you to have smaller reactors. These can treat even unconventional sources of water, difficult sources, that currently would require huge reactors and very large and complex treatment trains that are impossible to take to remote locations.
Making them smaller, multifunctional and modular brings you tremendous versatility to handle a wide variety of challenges in water purification. Nanotechnology allows us to do that. It’s essential to our vision of decentralized water treatment systems.
A. Absolutely. This has to be a multidisciplinary collaborative effort to build this innovation ecosystem. We need people who know how to make materials and people who know how to characterize them, how to immobilize them, how to manipulate them — how to assess their reactivity and bioavailability and mobility, and eventually scale them up.
We want people who are good at designing and building reactors all the way to systems to think about the whole lifecycle, the techno-economic implications of these materials, to make sure they’re feasible and improve on current practices.
They have to do it in a way that’s sustainable and avoids unintended, undesirable consequences as well.
Pedro Alvarez
Menachem Elimelech
Naomi Halas
Qilin Li
Paul Westerhoff
Related Institutions/Organizations
William Marsh Rice University
Arizona State University
University of Texas-El Paso
Yale University
Related Programs
Engineering Research Centers
UC Berkeley: Quantum Dot Solar Cell Creates 30-Fold Concentration: Low-Cost Solar Cells that use HE Section of Solar Spectrum
Luminescent solar concentrators featuring quantum dots and photonic mirrors
Source: By Lynn Yarris, Berkeley Lab
Quantum Dots: National Science Foundation – Researching the Many Uses
It’s easier to dissolve a sugar cube in a glass of water by crushing the cube first, because the numerous tiny particles cover more surface area in the water than the cube itself. In a way, the same principle applies to the potential value of materials composed of nanoparticles.
Because nanoparticles are so small, millions of times smaller than the width of a human hair, they have “tremendous surface area,” raising the possibility of using them to design materials with more efficient solar-to-electricity and solar-to-chemical energy pathways, says Ari Chakraborty, an assistant professor of chemistry at Syracuse University.
“They are very promising materials,” he says. “You can optimize the amount of energy you produce from a nanoparticle-based solar cell.”
Ari Chakraborty is an assistant professor of chemistry at Syracuse University. Credit: Ari Chakraborty, Syracuse University
Chakraborty, an expert in physical and theoretical chemistry, quantum mechanics and nanomaterials, is seeking to understand how these nanoparticles interact with light after changing their shape and size, which means, for example, they ultimately could provide enhanced photovoltaic and light-harvesting properties. Changing their shape and size is possible “without changing their chemical composition,” he says. “The same chemical compound in different sizes and shapes will interact differently with light.”
Specifically, the National Science Foundation (NSF)-funded scientist is focusing on quantum dots, which are semiconductor crystals on a nanometer scale. Quantum dots are so tiny that the electrons within them exist only in states with specific energies. As such, quantum dots behave similarly to atoms, and, like atoms, can achieve higher levels of energy when light stimulates them.
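A rough sense of this size dependence comes from the simplest particle-in-a-sphere effective-mass estimate, shown below; it omits the Coulomb correction of the full Brus formula, and the effective masses and bulk gap are commonly quoted approximate values rather than numbers from the article.

```python
import numpy as np

hbar = 1.054571817e-34
m0   = 9.1093837015e-31
eV   = 1.602176634e-19

E_gap_bulk = 1.74                          # eV, approximate bulk CdSe gap
m_e_eff, m_h_eff = 0.13 * m0, 0.45 * m0    # approximate CdSe effective masses

def dot_gap_eV(radius_nm):
    """Estimated optical gap of a spherical dot in the strong-confinement limit."""
    R = radius_nm * 1e-9
    confinement = (hbar**2 * np.pi**2 / (2 * R**2)) * (1/m_e_eff + 1/m_h_eff) / eV
    return E_gap_bulk + confinement

for r in (1.5, 2.0, 3.0, 5.0):
    print(f"radius {r:.1f} nm  ->  estimated gap {dot_gap_eV(r):.2f} eV")
# Smaller dots have larger gaps, i.e. their absorption and emission shift to the blue.
```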
Chakraborty works in theoretical and computational chemistry, meaning “we work with computers and computers only,” he says. “The goal of computational chemistry is to use fundamental laws of physics to understand how matter interacts with each other, and, in my research, with light. We want to predict chemical processes before they actually happen in the lab, which tells us which direction to pursue.”
These atoms and molecules follow natural laws of motion, “and we know what they are,” he says. “Unfortunately, they are too complicated to be solved by hand or calculator when applied to chemical systems, which is why we use a computer.”
The “electronically excited” states of the nanoparticles influence their optical properties, he says.
“We investigate these excited states by solving the Schrödinger equation for the nanoparticles,” he says, referring to a partial differential equation that describes how the quantum state of some physical system changes with time. “The Schrödinger equation provides the quantum mechanical description of all the electrons in the nanoparticle.
“However, accurate solution of the Schrödinger equation is challenging because of the large number of electrons in the system,” he adds. “For example, a 20 nanometer CdSe quantum dot contains over 6 million electrons. Currently, the primary focus of my research group is to develop new quantum chemical methods to address these challenges. The newly developed methods are implemented in open-source computational software, which will be distributed to the general public free of charge.”
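A quick back-of-the-envelope check of that electron count, using the bulk CdSe density and treating 20 nm as the dot diameter (both assumptions for illustration), lands in the same range:

```python
import numpy as np

density = 5.81                  # g/cm^3, bulk CdSe (approximate)
molar_mass = 112.41 + 78.97     # g/mol per CdSe formula unit
avogadro = 6.022e23
electrons_per_unit = 48 + 34    # Cd (Z=48) + Se (Z=34)

radius_cm = 10e-7               # 10 nm radius = 20 nm diameter
volume = 4/3 * np.pi * radius_cm**3
units = density * volume / molar_mass * avogadro
print(f"{units:.3g} CdSe units -> {units * electrons_per_unit / 1e6:.1f} million electrons")
# Roughly 6-7 million electrons, consistent with the statement above.
```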
Solar voltaics “requires a substance that captures light, uses it, and transfers that energy into electrical energy,” he says. With solar cell materials made of nanoparticles, “you can use different shapes and sizes, and capture more energy,” he adds. “Also, you can have a large surface area for a small amount of materials, so you don’t need a lot of them.”
Nanoparticles also could be useful in converting solar energy to chemical energy, he says. “How do you store the energy when the sun is not out?” he says. “For example, leaves on a tree take energy and store it as glucose, then later use the glucose for food. One potential application is to develop artificial leaves for artificial photosynthesis. There is a huge area of ongoing research to make compounds that can store energy.”
Medical imaging presents another useful potential application, he says.
“For example, nanoparticles have been coated with binding agents that bind to cancerous cells,” he says. “Under certain chemical and physical conditions, the nanoparticles can be tuned to emit light, which allows us to take pictures of the nanoparticles. You could pinpoint the areas where there are cancerous cells in the body. The regions where the cancerous cells are located show up as bright spots in the photograph.”
As part of the grant’s educational component, Chakraborty is hosting several students from a local high school–East Syracuse Minoa High School–in his lab. He also has organized two workshops for high school teachers on how to use computational tools in their classrooms “to make chemistry more interesting and intuitive to high school students,” he says.
“The really good part about it is that the kids can really work with the molecules because they can see them on the screen and manipulate them in 3-D space,” he adds. “They can explore their structure using computers. They can measure distances, angles, and energies associated with the molecules, which is not possible to do with a physical model. They can stretch it, and see it come back to its original structure. It’s a real hands-on experience that the kids can have while learning chemistry.”
Source: By Marlene Cimons, National Science Foundation
flexible electronics
Source: By Emil Venere, Purdue University
One of nanotechnology’s greatest promises is interacting with the biological world the way our own cells do, but current biosensors must be tailor-made to detect the presence of one type of protein, the identity of which must be known in advance.
University of Pennsylvania engineers have now devised a new kind of graphene-based biosensor that works in three ways at once. Because proteins trigger three different types of signals, the sensor can triangulate this information to produce more sensitive and accurate results. By taking advantage of the unique integration of multiple physical sensing modes on the same chip, this sensor device can extend the protein-concentration sensing range by a thousand-fold.
This extended range could be particularly useful in early diagnosis of certain cancers, where the blood biomarker concentration varies by orders of magnitude from patient to patient. The ability to make multiple detections of the same biomarker on the same chip also has the potential to reduce false positives and negatives in medical diagnostic tests.
Eventually, such a technique could be used in an all-purpose biosensor, which could identify a wide range of proteins through their mass, as well as their optical and electrical properties.
A biosensor that did not have to be fine-tuned to detect only specific proteins would have a host of biomedical applications in diagnostic devices.
The study, published in the journal NanoLetters, was conducted by Ertugrul Cubukcu, assistant professor in the departments of Materials Science and Engineering and Electrical and Systems Engineering in Penn’s School of Engineering and Applied Science, and members of his lab, Alexander Y. Zhu, Fei Yi, Jason C. Reed and Hai Zhu.
“In a typical single mode biosensor you have two proteins that interact strongly. You attach protein A to your sensor and, when protein B binds to it, the sensor transduces that binding into some sort of electrical signal,” Cubukcu said. “But it’s kind of a dumb sensor in that it can only tell you if that kind of binding has occurred.
“But let’s say you have proteins A, B, C and D, all with different physical properties, like charge and mass. If you had a sensor that was sensitive to several of those properties, you could tell the difference between those binding events without starting with corresponding proteins for all of them.”
The more sensing modes operating at once, the better a sensor is at distinguishing between similar proteins. Proteins A and B might have the same mass but different charges, while proteins B and C have the same charges but different optical properties.
A multimodal sensor, pulling in data from multiple categories, could narrow the identity of a protein by comparing those values to a large database. Such an ability could potentially enable it to be applied to samples where the protein’s contents are unknown, a major upgrade on current technology which generally involves custom-building sensors to detect the presence of pre-defined sets of proteins.
The team’s sensors consist of a base of silicon nitride, coated with a layer of graphene, a single-atom-thick lattice of carbon atoms. Being carbon based means that graphene is an attractive bonding surface for proteins, which means that the device doesn’t need to be “functionalized” with proteins that are apt to interact with the ones the sensor aims to detect.
Graphene’s extreme thinness and unique electrical properties also allow for the mechanical, electrical and optical modes to operate simultaneously without interfering with one another.
“In the mechanical mode, the graphene is like the skin of a drum,” said Alexander Zhu, the first author of the study, who was then an undergraduate working in Cubukcu’s lab. “As proteins bind, the total mass changes and the resonance of the drum changes as a function of the total mass.
“In the electrical mode, we can look at how electrons travel across the graphene. The conductance is a function of the total available carriers inside, so, if you have something binding to the graphene, that changes the number of carriers and therefore the conductance properties.
“Finally, in the optical mode, we have a source of visible light and shine it on the sensor and measure the reflection. When nothing is bound, it’s seeing just air, but, as soon as proteins bind, we can measure the change in the refractive index.”
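The "triangulation" idea can be sketched very simply: combine the three readouts (mechanical resonance shift tracking mass, conductance shift tracking charge, optical shift tracking refractive index) and match them against a reference table. The protein names and numbers below are entirely hypothetical placeholders, not data from the study.

```python
import numpy as np

reference = {
    #            d_freq(kHz)  d_conductance(uS)  d_refr_index(1e-3)
    "protein A": (-4.0,          +0.8,               1.2),
    "protein B": (-4.1,          -0.5,               0.9),
    "protein C": (-7.5,          -0.6,               2.0),
}

def identify(measurement, ref):
    """Nearest-neighbor match in per-mode-normalized feature space."""
    names = list(ref)
    table = np.array([ref[n] for n in names], dtype=float)
    scale = table.std(axis=0) + 1e-12       # normalize each sensing mode
    d = np.linalg.norm((table - measurement) / scale, axis=1)
    return names[int(np.argmin(d))]

print("best match:", identify(np.array([-4.0, -0.45, 0.95]), reference))
# Mass alone could not separate A from B here; the extra modes break the tie.
```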
In their study, the researchers tested their sensor with known samples of proteins in order to demonstrate that all three modes can work simultaneously.
“We’ve shown that one sample provides all three shifts,” Yi said, “in the mass, electrical and optical readouts.”
Further work from Cubukcu’s group will investigate the feasibility of using this multimodal sensor to identify proteins from unknown samples.
The research was supported by the National Science Foundation under grants IIP-1312202 and ECCS-1408139.
Self-assembling materials
‘Cells that talk to each other’
A brighter design emerges for low-cost, “greener” LED light bulbs
A new way to make white and colorful LEDs is more Earth-friendly than existing methods. Image: American Chemical Society
The authors acknowledge funding from the National Science Foundation.
Source: American Chemical Society |
bbcb11781f29ec17 | The Book of Universes by John D. Barrow (2011)
This book is twice as long and half as good as Barrow’s earlier primer, The Origin of the Universe.
In that short book Barrow focused on the key ideas of modern cosmology – introducing them to us in ascending order of complexity, and as simply as possible. He managed to make mind-boggling ideas and demanding physics very accessible.
This book – although it presumably has the merit of being more up to date (published in 2011 as against 1994) – is an expansion of the earlier one, an attempt to be much more comprehensive, but which, in the process, tends to make the whole subject more confusing.
The basic premise of both books is that, since Einstein’s theory of relativity was developed in the 1910s, cosmologists and astronomers and astrophysicists have:
1. shown that the mathematical formulae in which Einstein’s theories are described need not be restricted to the universe as it has traditionally been conceived; in fact they can apply just as effectively to a wide variety of theoretical universes – and the professionals have, for the past hundred years, developed a bewildering array of possible universes to test Einstein’s insights to the limit
2. made a series of discoveries about our actual universe, the most important of which is that a) it is expanding b) it probably originated in a big bang about 14 billion years ago, and c) in the first few milliseconds after the bang it probably underwent a period of super-accelerated expansion known as the ‘inflation’ which may, or may not, have introduced all kinds of irregularities into ‘our’ universe, and may even have created a multitude of other universes, of which ours is just one
If you combine a hundred years of theorising with a hundred years of observations, you come up with thousands of theories and models.
In The Origin of the Universe Barrow stuck to the core story, explaining just as much of each theory as is necessary to help the reader – if not understand – then at least grasp their significance. I can write the paragraphs above because of the clarity with which The Origin of the Universe explained it.
In The Book of Universes, on the other hand, Barrow’s aim is much more comprehensive and digressive. He is setting out to list and describe every single model and theory of the universe which has been created in the past century.
He introduces the description of each model with a thumbnail sketch of its inventor. This ought to help, but it doesn’t, because the inventors generally turn out to be polymaths who also made major contributions to all kinds of other areas of science. Being told a list of Paul Dirac’s other major contributions to 20th century science is not a good way to prepare your mind to then try and understand his one intervention in universe-modelling (which turned out, in any case, to be impractical and led nowhere).
Another drawback of the ‘comprehensive’ approach is that a lot of these models have been rejected or barely saw the light of day before being disproved or – more complicatedly – were initially disproved but contained aspects or insights which turned out to be useful forty years later, and were subsequently recycled into revised models. It gets a bit challenging to try and hold all this in your mind.
In The Origin of the Universe Barrow sticks to what you could call the canonical line of models, each of which represented the central line of speculation, even if some ended up being disproved (like Hoyle and Gold and Bondi’s model of the steady state universe). Given that all of this material is pretty mind-bending, and some of it can only be described in advanced mathematical formulae, less is definitely more. I found The Book of Universes simply had too many universes, explained too quickly, and lost amid a lot of biographical bumpf summarising people’s careers or who knew who or contributed to who’s theory. Too much information.
One last drawback of the comprehensive approach is that quite important points – which are given space to breathe and sink in in The Origin of the Universe – are lost in the flood of facts in The Book of Universes.
I’m particularly thinking of Einstein’s notion of the cosmological constant which was not strictly necessary to his formulations of relativity, but which Einstein invented and put into them solely in order to counteract the force of gravity and ensure his equations reflected the commonly held view that the universe was in a permanent steady state.
This was a mistake and Einstein is often quoted as admitting it was the biggest mistake of his career. In 1965 scientists discovered the cosmic background radiation which proved that the universe began in an inconceivably intense explosion, that the universe was therefore expanding and that the explosive, outward-propelling force of this bang was enough to counteract the contracting force of the gravity of all the matter in the universe without any need for a hypothetical cosmological constant.
I understand this (if I do) because in The Origin of the Universe it is given prominence and carefully explained. By contrast, in The Book of Universes it was almost lost in the flood of information and it was only because I’d read the earlier book that I grasped its importance.
The Book of Universes
Barrow gives a brisk recap of cosmology from the Sumerians and Egyptians, through the ancient Greeks’ establishment of the system named after Ptolemy in which the earth is the centre of the solar system, on through the revisions of Copernicus and Galileo which placed the sun firmly at the centre of the solar system, on to the three laws of Isaac Newton which showed how the forces which govern the solar system (and more distant bodies) operate.
There is then a passage on the models of the universe generated by the growing understanding of heat and energy acquired by Victorian physicists, which led to one of the most powerful models of the universe, the ‘heat death’ model popularised by Lord Kelvin in the 1850s, in which, in the far future, the universe evolves to a state of complete homogeneity, where no region is hotter than any other and therefore there is no thermodynamic activity, no life, just a low buzzing noise everywhere.
But this all happens in the first 50 pages and is just preliminary throat-clearing before Barrow gets to the weird and wonderful worlds envisioned by modern cosmology i.e. from Einstein onwards.
In some of these models the universe expands indefinitely, in others it will reach a peak expansion before contracting back towards a Big Crunch. Some models envision a static universe, in others it rotates like a top, while other models are totally chaotic without any rules or order.
Some universes are smooth and regular, others characterised by clumps and lumps. Some are shaken by cosmic tides, some oscillate. Some allow time travel into the past, while others threaten to allow an infinite number of things to happen in a finite period. Some end with another big bang, some don’t end at all. And in only a few of them do the conditions arise for intelligent life to evolve.
The Book of Universes then goes on, in 12 chapters, to discuss – by my count – getting on for a hundred types or models of hypothetical universes, as conceived and worked out by mathematicians, physicists, astrophysicists and cosmologists from Einstein’s time right up to the date of publication, 2011.
A list of names
Barrow namechecks and briefly explains the models of the universe developed by the following (I am undertaking this exercise partly to remind myself of everyone mentioned, partly to indicate to you the overwhelming number of names and ideas the reader is bombarded with):
• Aristotle
• Ptolemy
• Copernicus
• Giovanni Riccioli
• Tycho Brahe
• Isaac Newton
• Thomas Wright (1711-86)
• Immanuel Kant (1724-1804)
• Pierre Laplace (1749-1827) devised what became the standard Victorian model of the universe
• Alfred Russel Wallace (1823-1913) discussed the physical conditions of a universe necessary for life to evolve in it
• Lord Kelvin (1824-1907) material falls into the central region of the universe and coalesce with other stars to maintain power output over immense periods
• Rudolf Clausius (1822-88) coined the word ‘entropy’ in 1865 to describe the inevitable progress from ordered to disordered states
• William Jevons (1835-82) believed the second law of thermodynamics implies that universe must have had a beginning
• Pierre Duhem (1861-1916) Catholic physicist accepted the notion of entropy but denied that it implied the universe ever had a beginning
• Samuel Tolver Preston (1844-1917) English engineer and physicist, suggested the universe is so vast that different ‘patches’ might experience different rates of entropy
• Ludwig Boltzmann and Ernst Zermelo suggested the universe is infinite and is already in a state of thermal equilibrium, but just with random fluctuations away from uniformity, and our galaxy is one of those fluctuations
• Albert Einstein (1879-1955) his discoveries were based on insights, not maths: thus he saw the problem with Newtonian physics is that it privileges an objective outside observer of all the events in the universe; one of Einstein’s insights was to abolish the idea of a privileged point of view and emphasise that everyone is involved in the universe’s dynamic interactions; thus gravity does not pass through a clear, fixed thing called space; gravity bends space.
The American physicist John Wheeler once encapsulated Einstein’s theory in two sentences:
Matter tells space how to curve. Space tells matter how to move. (quoted on page 52)
• Marcel Grossmann provided the mathematical underpinning for Einstein’s insights
• Willem de Sitter (1872-1934) inventor of, among other things, the de Sitter effect which represents the effect of the curvature of spacetime, as predicted by general relativity, on a vector carried along with an orbiting body – de Sitter’s universe gets bigger and bigger for ever but never had a zero point; but then de Sitter’s model contains no matter
• Vesto Slipher (1875-1969) astronomer who discovered the red shifting of distant galaxies in 1912, the first ever empirical evidence for the expansion of the universe
• Alexander Friedmann (1888-1925) Russian mathematician who produced purely mathematical solutions to Einstein’s equation, devising models where the universe started out of nothing and expanded a) fast enough to escape the gravity exerted by its own contents and so will expand forever or b) will eventually succumb to the gravity of its own contents, stop expanding and contract back towards a big crunch. He also speculated that this process (expansion and contraction) could happen an infinite number of times, creating a cyclic series of bangs, expansions and contractions, then another bang etc
• Arthur Eddington (1882-1944) most distinguished astrophysicist of the 1920s
• George Lemaître (1894-1966) first to combine an expanding universe interpretation of Einstein’s equations with the latest data about redshifting, and show that the universe of Einstein’s equations would be very sensitive to small changes – his model is close to Eddington’s so that it is often called the Eddington-Lemaître universe: it is expanding, curved and finite but doesn’t have a beginning
• Edwin Hubble (1889-1953) provided solid evidence of the redshifting (moving away) of distant galaxies, a main plank in the whole theory of a big bang, inventor of Hubble’s Law:
• Objects observed in deep space – extragalactic space, 10 megaparsecs (Mpc) or more – are found to have a redshift, interpreted as a relative velocity away from Earth
• This Doppler shift-measured velocity of various galaxies receding from the Earth is approximately proportional to their distance from the Earth for galaxies up to a few hundred megaparsecs away
• Richard Tolman (1881-1948) took Friedmann’s idea of an oscillating universe and showed that the increased entropy of each universe would accumulate, meaning that each successive ‘bounce’ would get bigger; he also investigated what ‘lumpy’ universes would look like where matter is not evenly spaced but clumped: some parts of the universe might reach a maximum and start contracting while others wouldn’t; some parts might have had a big bang origin, others might not have
• Arthur Milne (1896-1950) showed that the tension between the outward exploding force posited by Einstein’s cosmological constant and the gravitational contraction could actually be described using just Newtonian mathematics: ‘Milne’s universe is the simplest possible universe with the assumption that the universe is uniform in space and isotropic’, a ‘rational’ and consistent geometry of space – Milne labelled the assumption of Einsteinian physics that the universe is the same in all places the Cosmological Principle
• Edmund Fournier d’Albe (1868-1933) posited that the universe has a hierarchical structure from atoms to the solar system and beyond
• Carl Charlier (1862-1934) introduced a mathematical description of a never-ending hierarchy of clusters
• Karl Schwarzschild (1873-1916) suggested that the geometry of the universe is not flat as Euclid had taught, but might be curved as in the non-Euclidean geometries developed by mathematicians Riemann, Gauss, Bolyai and Lobachevski in the early 19th century
• Franz Selety (1893-1933) devised a model for an infinitely large hierarchical universe which contained an infinite mass of clustered stars filling the whole of space, yet with a zero average density and no special centre
• Edward Kasner (1878-1955) a mathematician interested solely in finding mathematical solutions to Einstein’s equations, Kasner came up with a new idea, that the universe might expand at different rates in different directions, in some parts it might shrink, changing shape to look like a vast pancake
• Paul Dirac (1902-84) developed a Large Number Hypothesis that the really large numbers which are taken as constants in Einstein’s and other astrophysics equations are linked at a deep undiscovered level, among other things abandoning the idea that gravity is a constant: soon disproved
• Pascual Jordan (1902-80) suggested a slight variation of Einstein’s theory which accounted for a varying constant of gravitation as though it were a new source of energy and gravitation
• Robert Dicke (1916-97) developed an alternative theory of gravitation
• Nathan Rosen (1909-1995) young assistant to Einstein in America with whom he authored a paper in 1936 describing a universe which expands but has the symmetry of a cylinder, a theory which predicted the universe would be washed over by gravitational waves
• Ernst Straus (1922-83) another young assistant to Einstein with whom he developed a new model, an expanding universe like those of Friedmann and Lemaître but which had spherical holes removed like the bubbles in an Aero, each hole with a mass at its centre equal to the matter which had been excavated to create the hole
• Eugene Lifshitz (1915-85) in 1946 showed that very small differences in the uniformity of matter in the early universe would tend to increase, an explanation of how the clumpy universe we live in evolved from an almost but not quite uniform distribution of matter – as we have come to understand that something like this did happen, Lifshitz’s calculations have come to be seen as a landmark
• Kurt Gödel (1906-1978) posited a rotating universe which didn’t expand and, in theory, permitted time travel!
• Hermann Bondi, Thomas Gold and Fred Hoyle collaborated on the steady state theory of a universe which is growing but remains essentially the same, fed by the creation of new matter out of nothing
• George Gamow (1904-68)
• Ralph Alpher and Robert Herman in 1948 showed that the ratio of the matter density of the universe to the cube of the temperature of any heat radiation present from its hot beginning is constant if the expansion is uniform and isotropic – they calculated the current radiation temperature should be 5 degrees Kelvin – ‘one of the most momentous predictions ever made in science’
• Abraham Taub (1911-99) made a study of all the universes that are the same everywhere in space but can expand at different rates in different directions
• Charles Misner (b.1932) suggested ‘chaotic cosmology’ i.e. that no matter how chaotic the starting conditions, Einstein’s equations prove that any universe will inevitably become homogeneous and isotropic – disproved by the smoothness of the background radiation. Misner then suggested the Mixmaster universe, the most complicated interpretation of the Einstein equations in which the universe expands at different rates in different directions and the gravitational waves generated by one direction interfere with all the others, with infinite complexity
• Hannes Alfvén devised a matter-antimatter cosmology
• Alan Guth (b.1947) in 1981 proposed a theory of ‘inflation’, that milliseconds after the big bang the universe underwent a swift process of hyper-expansion: inflation answers at a stroke a number of technical problems prompted by conventional big bang theory; but had the unforeseen implication that, though our region is smooth, parts of the universe beyond our light horizon might have grown from other areas of inflated singularity and have completely different qualities
• Andrei Linde (b.1948) extrapolated that the inflationary regions might create sub-regions in which further inflation might take place, so that a potentially infinite series of new universes spawn new universes in an ‘endlessly bifurcating multiverse’. We happen to be living in one of these bubbles which has lasted long enough for the heavy elements and therefore life to develop; who knows what’s happening in the other bubbles?
• Ted Harrison (1919-2007) British cosmologist speculated that super-intelligent life forms might be able to develop and control baby universes, guiding the process of inflation so as to promote the constants required for just the right speed of growth to allow stars, planets and life forms to evolve. Maybe they’ve done it already. Maybe we are the result of their experiments.
• Nick Bostrom (b.1973) Swedish philosopher: if universes can be created and developed like this then they will proliferate until the odds are that we are living in a ‘created’ universe and, maybe, are ourselves simulations in a kind of multiverse computer simulation
Although the arrival of Einstein and his theory of relativity marks a decisive break with the tradition of Newtonian physics, and comes at page 47 of this 300-page book, it seemed to me the really decisive break comes on page 198 with the publication Alan Guth’s theory of inflation.
Up till the Guth breakthrough, astrophysicists and astronomers appear to have focused their energy on the universe we inhabit. There were theoretical digressions into fantasies about other worlds and alternative universes but they appear to have been personal foibles and everyone agreed they were diversions from the main story.
However, the idea of inflation, while it solved half a dozen problems caused by the idea of a big bang, seems to have spawned a literally fantastic series of theories and speculations.
Throughout the twentieth century, cosmologists grew used to studying the different types of universe that emerged from Einstein’s equations, but they expected that some special principle, or starting state, would pick out one that best described the actual universe. Now, unexpectedly, we find that there might be room for many, perhaps all, of these possible universes somewhere in the multiverse. (p.254)
This is a really massive shift and it is marked by a shift in the tone and approach of Barrow’s book. Up till this point it had jogged along at a brisk rate namechecking a steady stream of mathematicians, physicists and explaining how their successive models of the universe followed on from or varied from each other.
Now this procedure comes to a grinding halt while Barrow enters a realm of speculation. He discusses the notion that the universe we live in might be a fake, evolved from a long sequence of fakes, created and moulded by super-intelligences for their own purposes.
Each of us might be mannequins acting out experiments, observed by these super-intelligences. In which case what value would human life have? What would be the definition of free will?
Maybe the discrepancies we observe in some of the laws of the universe have been planted there as clues by higher intelligences? Or maybe, over vast periods of time, and countless iterations of new universes, the laws they first created for this universe where living intelligences could evolve have slipped, revealing the fact that the whole thing is a facade.
These super-intelligences would, of course, have computers and technology far in advance of ours etc. I felt like I had wandered into a prose version of The Matrix and, indeed, Barrow apologises for straying into areas normally associated with science fiction (p.241).
Imagine living in a universe where nothing is original. Everything is a fake. No ideas are ever new. There is no novelty, no originality. Nothing is ever done for the first time and nothing will ever be done for the last time… (p.244)
And so on. During this 15-page-long fantasy the handy sequence of physicists comes to an end as he introduces us to contemporary philosophers and ethicists who are paid to think about the problem of being a simulated being inside a simulated reality.
Take Robin Hanson (b.1959), a research associate at the Future of Humanity Institute of Oxford University who, apparently, advises us all that we ought to behave so as to prolong our existence in the simulation or, hopefully, ensure we get recreated in future iterations of the simulation.
Are these people mad? I felt like I’d been transported into an episode of The Outer Limits or was back with my schoolfriend Paul, lying in a summer field getting stoned and wondering whether dandelions were a form of alien life that were just biding their time till they could take over the world. Why not, man?
I suppose Barrow has to include this material, and explain the nature of the anthropic principle (p.250), and go on to a digression about the search for extra-terrestrial life (p.248), and discuss the ‘replication paradox’ (in an infinite universe there will be infinite copies of you and me in which we perform an infinite number of variations on our lives: what would happen if you came face to face with one of your ‘copies’? p.246) – because these are, in their way, theories – if very fantastical theories – about the nature of the universe, and his stated aim is to be completely comprehensive.
The anthropic principle: observations of the universe must be compatible with the conscious and intelligent life that observes it. The universe is the way it is because it has to be the way it is in order for life forms like us to evolve enough to understand it.
Still, it was a relief when he returned from vague and diffuse philosophical speculation to the more solid territory of specific physical theories for the last forty or so pages of the book. But it was very noticeable that, as he came up to date, the theories were less and less attached to individuals: modern research is carried out by large groups. And he increasingly is describing the swirl of ideas in which cosmologists work, which often don’t have or need specific names attached. And this change is denoted, in the texture of the prose, by an increase in the passive voice, the voice in which science papers are written: ‘it was observed that…’, ‘it was expected that…’, and so on.
• Edward Tryon (b.1940) American particle physicist speculated that the entire universe might be a virtual fluctuation from the quantum vacuum, governed by the Heisenberg Uncertainty Principle that limits our simultaneous knowledge of the position and momentum, or the time of occurrence and energy, of anything in Nature.
• George Ellis (b.1939) created a catalogue of ‘topologies’ or shapes which the universe might have
• Dmitri Sokolov and Victor Shvartsman in 1974 worked out what the practical results would be for astronomers if we lived in a strange shaped universe, for example a vast doughnut shape
• Yakov Zeldovich and Andrei Starobinsky in 1984 further explored the likelihood of various types of ‘wraparound’ universes, predicting the fluctuations in the cosmic background radiation which might confirm such a shape
• 1967: the Wheeler-DeWitt equation – a first attempt to combine Einstein’s equations of general relativity with the Schrödinger equation that describes how the quantum wave function changes with space and time
• the ‘no boundary’ proposal – in 1982 Stephen Hawking and James Hartle used ‘an elegant formulation of quantum mechanics introduced by Richard Feynman’ to calculate the probability that the universe would be found to be in a particular state. What is interesting is that in this theory time is not important; time is a quality that emerges only when the universe is big enough for quantum effects to become negligible; the universe doesn’t technically have a beginning because the nearer you approach it, time disappears, becoming part of four-dimensional space. This ‘no boundary’ state is the centrepiece of Hawking’s bestselling book A Brief History of Time (1988). According to Barrow, the Hartle-Hawking model was eventually shown to lead to a universe that was infinitely large and empty i.e. not our one.
• In 1986 Barrow proposed a universe with a past but no beginning because all the paths through time and space would be very large closed loops
• In 1997 Richard Gott and Li-Xin Li took the eternal inflationary universe postulated above and speculated that some of the branches loop back on themselves, giving birth to themselves
The self-creating universe of J. Richard Gott III and Li-Xin Li
• In 2001 Justin Khoury, Burt Ovrut, Paul Steinhardt and Neil Turok proposed a variation of the cyclic universe which incorporated string theory and which they called the ‘ekpyrotic’ universe, ekpyrotic denoting the fiery flame into which each universe plunges only to be born again in a big bang. The new idea they introduced is that two three-dimensional universes may approach each other by moving through the additional dimensions posited by string theory. When they collide they set off another big bang. These 3-D universes are called ‘braneworlds’, short for membrane, because they will be very thin
• If a universe existing in a ‘bubble’ in another dimension ‘close’ to ours had ever impacted on our universe, some calculations indicate it would leave marks in the cosmic background radiation, a stripey effect.
• In 1998 Andy Albrecht, João Magueijo and Barrow explored what might have happened if the speed of light, the most famous of cosmological constants, had in fact decreased in the first few milliseconds after the bang. There is now an entire suite of theories known as ‘Varying Speed of Light’ cosmologies.
• Modern ‘String Theory’ only functions if it assumes quite a few more dimensions than the three we are used to. In fact some string theories require there to be more than one dimension of time. If there are really ten or 11 dimensions then, possibly, the ‘constants’ all physicists have taken for granted are only partial aspects of constants which exist in higher dimensions. Possibly, they might change, effectively undermining all of physics.
• The Lambda-CDM model is a cosmological model in which the universe contains three major components: 1. a cosmological constant denoted by Lambda (Greek Λ) and associated with dark energy; 2. the postulated cold dark matter (abbreviated CDM); 3. ordinary matter. It is frequently referred to as the standard model of Big Bang cosmology because it is the simplest model that provides a reasonably good account of the following properties of the cosmos:
• the existence and structure of the cosmic microwave background
• the large-scale structure in the distribution of galaxies
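The Lambda-CDM model just described is also the easiest of the models in this book to compute with: the Friedmann equation gives the expansion rate H(z) from the matter and dark-energy fractions, and integrating 1/[(1+z)H(z)] gives the age of the universe. The sketch below uses round parameter values close to commonly quoted ones, chosen purely for illustration.

```python
import numpy as np
from scipy.integrate import quad

H0 = 67.7                       # km/s/Mpc (illustrative round value)
Omega_m, Omega_L = 0.31, 0.69   # matter (incl. CDM) and cosmological-constant fractions

km_per_Mpc = 3.0857e19
H0_per_Gyr = H0 / km_per_Mpc * 3.156e16      # convert H0 to 1/Gyr

def H(z):
    """Expansion rate of a flat Lambda-CDM universe at redshift z (1/Gyr)."""
    return H0_per_Gyr * np.sqrt(Omega_m * (1 + z) ** 3 + Omega_L)

age_Gyr, _ = quad(lambda z: 1.0 / ((1 + z) * H(z)), 0.0, np.inf)
print(f"age of a flat Lambda-CDM universe: {age_Gyr:.1f} Gyr")   # about 13.8 Gyr
```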
He ends with a summary of our existing knowledge, and indicates the deep puzzles which remain, not least the true nature of the ‘dark matter’ which is required to make sense of the expanding universe model. And he ends the whole book with a pithy soundbite. Speaking about the ongoing acceptance of models which posit a ‘multiverse’, in which all manner of other universes may be in existence, but beyond the horizon of where can see, he says:
Copernicus taught us that our planet was not at the centre of the universe. Now we may have to accept that even our universe is not at the centre of the Universe.
0b997c3dcf4ef852 | The Symplectization of Science
Symplectic Geometry Lies at the Very
Foundations of Physics and Mathematics
Mark J. Gotay
Department of Mathematics
University of Hawai‘i
2565 The Mall
Honolulu, HI 96822 USA
James A. Isenberg
Institute of Theoretical Science and Department of Mathematics
University of Oregon
Eugene, OR 97403-5203 USA
February 18, 1992
We would like to thank Jerry Marsden and Alan Weinstein for their comments on previous drafts. This work was partially supported by NSF grants
DMS-8805699 and PHY-9012301. The first author would like to express his
appreciation to the Ford Foundation for fellowship support while this work was being done.
Published in: Gazette des Mathématiciens 54, 59-79 (1992).
Physics is geometry. This dictum is one of the guiding principles of modern physics. It largely originated with Albert Einstein, whose most important
contribution–via his General Theory of Relativity–was to view the phenomenon
of gravity as a reflection of the curvature of the geometry of spacetime. Einstein’s vision is remarkable in its simplicity, has great conceptual power and is
physically compelling. As well, it leads to a theory of gravity which is very accurate in its agreement with experiment and observation. A further triumph of the
geometric point of view has been the development, over the past four decades,
of the “gauge” or Yang-Mills field theories of fundamental physical processes.
Now, not only is gravity a manifestation of geometry, so are electromagnetism
and the nuclear forces. Work actively continues towards the ultimate “grand
unification,” the marriage of all basic physical interactions with each other and with geometry.
The geometry of general relativity and gauge field theories, known as Riemannian geometry after the great nineteenth century German mathematician
Georg Friedrich Bernhard Riemann, is a curved generalization of the familiar
(and ancient) geometry of Euclid. But there is another, less familiar and less
intuitive, type of geometry which is even more deeply rooted in physics: symplectic geometry. This is the mathematics that underlies mechanics and hence
is at the very foundation of classical physics. The behavior of systems and phenomena as diverse as spinning tops, magnetism, the propagation of water waves
and even the gravitational field itself, can to a large extent be both described
and understood in terms of this geometry.
Thus physics is indeed geometry–symplectic geometry. This is true not just
at the formal, theoretical level, but at the practical, engineering level as well.
Symplectic geometry is turning out to be an indispensable tool for comprehending the large scale behavior of complex physical systems like the superconducting
supercollider, or the Galileo probe on its way to Jupiter. Understanding which
may be lost in the study of the complicated differential equations governing the
motion of such a body can be recovered by means of symplectic geometry. Such
understanding is often crucial: it could have saved the late 1950s satellite Explorer I, which tumbled out of control when spun about a dynamically unstable axis.
Besides this key role in physics and engineering, symplectic geometry is found
increasingly to play an important role in mathematics: not only are symplectic
ideas prevalent throughout, there are indications that much of mathematics
will eventually be “symplectized.” We may well be witnessing the advent of
a symplectic revolution in fundamental science. Already, there is impressive
evidence that symplectic geometry will come to be regarded as one of the most
important and productive offshoots of, and links between, mathematics and
physics in this century.
Box 1: Some History
Part of the reason symplectic geometry is relatively little known and was
so long in developing is that it is a rather abstruse–and almost weird–sort of
geometry. To aid in its description, we shall compare and contrast it throughout
with the more familiar Euclidean geometry (and its curved Riemannian generalization). The essence of both can be gleaned by studying the simplest possible
case: the geometry of the plane R2 .
As its Greek root γεωμετρία (“measure of land”) implies, geometry has its
origins in the science of surveying. Thus it focuses on the measurement of the
lengths of lines, and on measuring the angles between lines. All this information
is encoded mathematically into the basic notion of Euclidean geometry: the
metric g. This is an object which associates a number to every pair of vectors
\(v = (v_1, v_2)\) and \(w = (w_1, w_2)\) in the plane according to the formula
\[ g(v, w) = v_1 w_1 + v_2 w_2. \]
It is both symmetric (i.e., \(g(v, w) = g(w, v)\)) and nondegenerate (i.e., \(g(v, w) = 0\) for all vectors \(w\) if and only if \(v = 0\)). Using this device, one defines the length of a vector \(v\) to be \(\|v\| = \sqrt{g(v, v)}\); this gives the Pythagorean result
\[ \|v\|^2 = (v_1)^2 + (v_2)^2. \]
Similarly, the angle \(\theta\) between two vectors \(v\) and \(w\) is determined via
\[ \cos\theta = \frac{g(v, w)}{\|v\|\,\|w\|}. \]
In particular, v is at right angles to w if and only if g(v, w) = 0. The
symmetry of g guarantees that the angle between v and w is the same as that
between w and v, while nondegeneracy ensures that no nonzero vector can be
perpendicular to every other vector. Thus various kinds of familiar geometric
information can be recovered from the metric g.
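As a concrete illustration of how the metric encodes lengths and angles, here is a minimal Python sketch; the particular vectors are arbitrary illustrative choices, not taken from the article.

```python
import math

def g(v, w):
    """Euclidean metric on R^2: g(v, w) = v1*w1 + v2*w2."""
    return v[0] * w[0] + v[1] * w[1]

def length(v):
    """Length of v, ||v|| = sqrt(g(v, v))."""
    return math.sqrt(g(v, v))

def angle(v, w):
    """Angle between v and w, from cos(theta) = g(v, w) / (||v|| ||w||)."""
    return math.acos(g(v, w) / (length(v) * length(w)))

v, w = (3.0, 4.0), (4.0, -3.0)
print(length(v))                  # 5.0, the Pythagorean result
print(math.degrees(angle(v, w)))  # 90.0, since g(v, w) = 0
```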
In the same vein, the standard symplectic form Ω on R2 is also an object
which associates to every two vectors v and w a number; in this case the formula
\[ \Omega(v, w) = v_1 w_2 - v_2 w_1. \]
Note both the ordering and the all-important minus sign–because of these, Ω is antisymmetric: Ω(w, v) = −Ω(v, w). This means that, unlike the Euclidean metric g, Ω does not give rise to a notion of length or angle. Indeed, the “symplectic length” \(\|v\| = \sqrt{\Omega(v, v)}\) of a vector v always vanishes and, even worse, every vector is now perpendicular to itself!
But the symplectic form does do something sensible, and that is to make
precise the concept of “oriented area.” Consider a parallelogram whose sides
are formed by two vectors v and w; then the area of this figure is given exactly
by Ω(v, w). The sign of Ω(v, w) is determined by comparing the orientation
of the pair [v,w] with the standard orientation of the plane–the sign is positive
if they agree, negative otherwise. Taking these vectors in the opposite order will
then flip the orientation and reverse the sign; hence the antisymmetry. Like the
metric g, the symplectic form Ω is nondegenerate, which in this context means
that only the collapsed parallelogram (when v and w are parallel) has zero area.
So symplectic geometry is a purely “areal” type of geometry.1 We shall see
later how this seemingly obscure concentration on oriented area ties in with
classical mechanics.
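A sketch in the same style shows the “areal” character of Ω: it returns the signed area of the parallelogram spanned by two vectors, it changes sign when the vectors are swapped, and every vector has zero “symplectic length.” The vectors below are again arbitrary illustrative choices.

```python
def omega(v, w):
    """Standard symplectic form on R^2: the oriented area v1*w2 - v2*w1."""
    return v[0] * w[1] - v[1] * w[0]

v, w = (2.0, 0.0), (1.0, 3.0)
print(omega(v, w))   # 6.0: area of the parallelogram spanned by v and w
print(omega(w, v))   # -6.0: reversing the order flips the orientation
print(omega(v, v))   # 0.0: every vector is "symplectically perpendicular" to itself
```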
The symplectic and Euclidean geometries of the plane R2 are already interesting and nontrivial. But physics and mathematics–as well as common
experience–don’t all happen just on the flat plane, so it is necessary to generalize. There are two ways to make geometry more useful (and fascinating!):
by adding dimensions and by allowing for more complicated (“warped”) spaces.
Both generalizations are essential: indeed the physical universe–spacetime–in
which we now reside is a curved 4-dimensional space. Each of these generalizations will now be discussed in turn, for both Euclidean and symplectic geometries.
For Euclidean geometry the generalization to higher dimensions is straightforward. There are lines, with angles between them, in everyday 3-dimensional
space R3 , as well as in the more abstract spaces Rn of any number of dimensions n. Hence, in any of these spaces one can construct a metric which assigns
lengths to vectors and angles to pairs of vectors in any directions.
The extension to higher dimensions is more complicated for symplectic geometry, since area is essentially a two-dimensional construct. In fact, one cannot
consistently define “oriented area” for odd-dimensional spaces, such as R3 , without introducing unwanted phenomena (“degeneracies”). On the other hand, it
is possible to build a symplectic form on the four-dimensional space R4 as follows. View R4 as the “sum” R2 ⊕ R2 of two planes; then the oriented area
of a parallelogram in this space is the sum of the oriented areas of its shadows
on these planes. This approach also works for R6 , R8 , etc.; the upshot is that
only even-dimensional spaces can carry symplectic structures. (We shall see a
physical reason for this later.)
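Here is a hedged sketch of the construction just described: view R4 as R2 ⊕ R2 and define the oriented area of a parallelogram as the sum of the oriented areas of its shadows on the two planes. The splitting of the coordinates into two consecutive pairs is one illustrative convention.

```python
def omega2(v, w):
    """Oriented area in a single plane."""
    return v[0] * w[1] - v[1] * w[0]

def omega4(v, w):
    """Symplectic form on R^4 viewed as the sum of two planes:
    add the oriented areas of the shadows on each plane."""
    return omega2(v[:2], w[:2]) + omega2(v[2:], w[2:])

v = (1.0, 0.0, 0.0, 2.0)
w = (0.0, 1.0, 3.0, 0.0)
print(omega4(v, w))   # 1.0 + (-6.0) = -5.0
print(omega4(w, v))   # +5.0, antisymmetry survives in four dimensions
```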
Let us now consider the other avenue of generalization necessary in geometry.
The story is a familiar one in the Euclidean case: During the nineteenth century,
geometers such as Bolyai, Gauss and Lobachevski wondered what geometry
might be like if “parallel lines” didn’t stay parallel, but rather converged or
diverged. Triangles, they found, could then have angles summing to more or to
less than 180◦ , there could be six-sided “squares” (with all angles being right
angles), and all the familiar mensuration formulas of Euclidean geometry were
replaced by new and exotic ones.
Box 2: Geometry on Manifolds
Especially interesting are two features of these ’non-Euclidean’ or “Riemannian” geometries: The first is that the geometry can vary considerably from
point to point in the space. So triangles at one location might have angles
which sum to 200◦ , while another triangle elsewhere might have angles summing to 165◦ . All this information can still be obtained from a metric g, but
this metric must be allowed to be different at each point. It is now a metric function. The second feature is that the spaces which underlie these non-Euclidean
geometries are often structurally quite different from the spaces Rn . The space
could for example “curl up” on itself and close in a variety of ways. These
“warped” or “twisted” spaces–called manifolds–are generalizations of surfaces
such as the sphere. They arise naturally in a variety of contexts, both physical
and mathematical, and can be quite complicated. Some examples are pictured
in Boxes 2 and 5; further discussion of manifolds and their non-Euclidean geometries can be found in the Scientific American article “The Mathematics of
Three-dimensional Manifolds,” by W.P. Thurston and J.R. Weeks (July, 1984).
Roughly speaking, then, Riemannian geometry is Euclidean geometry extended to curved spaces of arbitrary dimension. Élie Cartan captured the
essence of Riemannian geometry when he observed: “A Riemannian manifold
is really made up of an infinity of small pieces of Euclidean spaces.” On each of
these infinitesimally small pieces (represented intuitively as flat “tangent spaces”
to the manifold) the metric g takes a fixed value. Thus one may compute the
length of a curve by decomposing it into a chain of “tangent vectors,” computing the length of each of these using the induced Euclidean metric on each
tangent space (as illustrated previously in the two-dimensional case), and then summing the results.
Similar remarks apply to symplectic geometry. One generalizes the symplectic plane R2 to an even-dimensional manifold and allows the geometry to
vary from point to point. On each tangent space the symplectic form Ω defines
a notion of oriented area. Thus, for instance, one may compute the oriented
area of a surface residing in a symplectic manifold by breaking the surface up
into infinitesimal parallelograms and summing the oriented areas of these. As
with Riemannian geometry, the “usual” rules for computing oriented areas will
typically not remain valid in this more general setting. (See Box 2.)
At this juncture a crucial difference between symplectic and Riemannian
geometries appears. One may think of constructing a Riemannian geometry
piece-by-piece by “gluing together” the Euclidean geometries on each tangent
space. This gluing can be done in an arbitrary way, so long as it is done
smoothly. One may try to build a symplectic geometry in a similar fashion.
But there is a constraint on the values of the symplectic form Ω so obtained
at neighboring points: these must be arranged so that the oriented surface
area of every compact three-dimensional region is zero (“Jacobi’s identity”).2
This condition is responsible for many of the most interesting–and occasionally
frustrating–facets of symplectic geometry.
There is no such gluing condition in Riemannian geometry, with the consequence that every manifold (regardless of dimension) can be endowed with a
Riemannian metric. But Jacobi’s identity cannot always be satisfied, so there
are (even-dimensional) manifolds which do not carry symplectic forms (the six-dimensional sphere, for example).3 Thus symplectic geometry is really rather
special. In fact, it is not known exactly which manifolds are symplectic, and
researchers have just begun to make progress in characterizing and classifying
those that are. Along these lines we mention here an important recent result:
Mikhail Gromov’s discovery of an “exotic” symplectic structure on R4 . This is
yet a different symplectic geometry than the “standard” one described earlier.
Although the existence of this exotic geometry was established theoretically several years ago, it was only in 1989 that an explicit expression for the symplectic
form was found. Using computer graphics (Box 3), one can now get some insight
into how this exotic structure behaves vis-à-vis the standard one. These and
similar topics are the province of symplectic topology, currently one of the most
active areas in the field. See the recent article The Symplectic Camel by Ian
Stewart (Nature, September 1987) as well as the companion article by Claude
Viterbo in this issue for more on this subject.
Box 3: Symplectic Geometries on R4
The most important ramification of the Jacobi identity is that symplectic
geometries are “flat.” This means that all symplectic manifolds of the same
(even) dimension are locally indistinguishable; one cannot tell one from another
if one looks at them with a magnifying glass. Only globally, when one observes
the entire space, do differences between various symplectic manifolds begin to
emerge. [It is partly for this reason that it is so difficult to get a handle on
the exotic symplectic structure on R4 – local measurements cannot differentiate it from the standard symplectic structure. In this regard, we point out
that the twisting in the picture in Box 3 is a large scale phenomenon–it cannot
be detected locally.] This best highlights how symplectic and Riemannian geometries differ, since the latter do not have this property; indeed, Riemannian
geometries are typically curved. No portion of, say, a sphere–no matter how
small–can be mapped onto the plane without distorting shapes (that is, lengths
and angles). But it is possible to map part of the globe onto the plane in such
a way that relative sizes (that is, areas) remain unaltered, a fact with which
cartographers are well acquainted. We emphasize, however, that flatness does
not rule out a complicated large scale structure: a circular cylinder is flat in
both the symplectic and Riemannian senses, but is not planar.
Box 4: Cartography
Symplectic geometry is also quite “flexible,” at least in comparison with
Riemannian geometry. To appreciate this, we dwell for a moment on the notion
of symmetry. Consider again the round sphere. If we rigidly rotate it about
any axis through its center, its Riemannian geometry remains the same. Thus
rigid rotations are Riemannian symmetries, that is, transformations which leave
lengths and angles invariant. But if instead we rotate the sphere differentially,
with one point lagging behind a neighboring point, then angles and lengths
are distorted and the geometry changes. We see that for a sphere there are a
limited number of symmetries; an ellipsoid has fewer yet, and most Riemannian manifolds have no symmetries at all. By way of contrast, every symplectic
manifold has an enormous number of symmetries (i.e., transformations which
preserve oriented area). In fact, the set of all symmetries of a symplectic manifold is always infinite-dimensional!4 Thus one can deform symplectic manifolds
to a much greater extent than one can their Riemannian counterparts; this is
because the former are all flat and hence much less rigid than the latter.
These observations provide some insight into the character of symplectic
geometry. But symplectic geometry is not only of interest in itself; over the past
two decades, there has been a burst of applications of symplectic techniques to
other areas in mathematics. Possibly the most significant of these has been to
the theory of group representations, culminating in what is known as “geometric
quantization theory.” (We shall encounter this theory later in physics as well.)
Various branches of analysis, number theory and lately even knot theory have
also profited from symplectic ideas. One particularly interesting development
relates to catastrophe theory, in which context symplectic geometry has been
used to clear up several mysteries in laser optics. The book Catastrophe Theory
by Vladimir I. Arnol’d (Springer-Verlag, 1986) contains a readable account of
these results.
Potentially even more momentous, however, is the philosophy of “symplectization” advocated recently by Arnol’d. He cites mounting evidence which
indicates that many “ordinary” mathematical ideas and constructions not only
have analogues within the domain of symplectic geometry, but are in fact ultimately grounded there. (Certainly the corresponding assertion is true in classical physics.) This suggests that it may be possible to completely recast a great
deal of mathematics in symplectic terms. Arnol’d has characterized this process
of symplectization as “[one] of the small number of operations of the highest
level, which act[s] ... on all of mathematics at once.” It could conceivably lead to
a revolution in mathematics comparable to the invention of complex numbers!
In Stewart’s words:
Mathematicians must have felt this way when they discovered that
complex numbers were more than just one extra gimmick: virtually
every idea of mathematics, from the geometry of curves to the analysis of partial differential equations, was ripe for complexification.
Mathematics exploded overnight.
Symplectic geometry may well light up the mathematical sky once again.
But this is for the future; let us now delve into its role in the realm of physics.
Symplectic geometry was “invented” by Lagrange in 1808 during his seminal studies of celestial mechanics. It first appeared as an analytical technique
whereby the equations of planetary motion could be written in a greatly simplified form. (See Alan Weinstein, Lectures on Symplectic Manifolds (American
Mathematical Society, 1977), for nice mathematical discussions of both Lagrange’s work and symplectic geometry.) These techniques were substantially
amplified and expanded by William Rowan Hamilton, who showed that Lagrange’s discoveries could be applied to mechanics as a whole. The resulting
collection of ideas and calculational procedures is known as Hamiltonian mechanics. This theory was further expanded and refined by Jacobi, Liouville and
Poisson, among others, and now forms the structural basis for essentially all of
classical physics.
The paradigm for Hamiltonian mechanics is particle dynamics–the study
of the motion of an object subject to various forces, like an electron moving
through the electromagnetic fields inside a cathode ray tube. To determine the
trajectory which such an object will follow, it is not enough to know the object’s
initial position; one must also know its initial velocity or, equivalently, its initial
momentum. Only then is one able to “predict the future,” that is, divine the
object’s location and movements at all future times.
This leads one to study particle dynamics on a space–called phase space–
which consists both of all possible positions or “configurations” q and all possible
velocities or “momenta” p of the particle. For example, the phase space of a
particle moving in everyday three-dimensional space R3 is R6 whose points are
labeled by six quantities (x, y, z, px , py , pz ) giving the three components of position and momentum. Similarly, the planar pendulum has a phase space which
is a circular cylinder parametrized by (θ, pθ ), where θ is the angular position of
the bob and pθ is its angular momentum. Other bodies, like relativistic particles
with spin or coupled rigid bodies, have more complicated phase spaces.
Box 5: The Phase Flow for the Planar Pendulum
The underlying idea is that once one knows a particle’s initial state (q, p) in
phase space and also the forces that act on it, then one has enough information
to chart the particle’s motion in time. All this can be neatly visualized (Box
5). To specify the (net) force is to assign a particular arrow (or “flow vector”)
to every point in phase space. Then a particle starting out at some given state
moves along a unique trajectory (“flow line”) as indicated by the arrows. The
collection of all possible trajectories (the “flow”) fills in the phase space, with
no two trajectories crossing. If one were to work, say, in just the space of all
possible configurations (q’s) rather than phase space (q’s and p’s), one would
not have such an elegant and orderly description of particle motion. Thus phase
space is the appropriate arena for dynamics.
The picture of particle motion we have been sketching here is both figuratively and literally analogous to the flow of a fluid. In fact dynamics, in a precise
mathematical sense, is merely a fluid flow in phase space. This “phase flow,”
however, has a very special property: it is area-preserving, that is the areas of
two-dimensional sheets of fluid remain unchanged as they flow along. Since a
fluid flow on a manifold may be viewed as a time-dependent transformation of
that manifold, we conclude that dynamics consists of a time-dependent transformation of phase space which preserves areas. And where there are areas,
there must be symplectic forms!
These observations constitute the fundamental links between mechanics and
geometry, viz., the phase spaces of particle dynamics are symplectic manifolds,
and dynamics corresponds to a time-dependent symplectic transformation.
These facts have had far-reaching consequences for physics. Indeed, it is the
presence of the symplectic structure on phase space which is largely responsible, in one way or another, for the tremendous success Hamiltonian mechanics
has had in describing the physical world.
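To make the area-preserving phase flow concrete, here is a small Python sketch for the planar pendulum; the unit mass, length and gravitational constant are illustrative assumptions, not values from the article. It converts the Hamiltonian H(θ, p) = p²/2 − cos θ into flow vectors, advances the state with a symplectic (semi-implicit) Euler step, and checks numerically that one such step has Jacobian determinant 1, i.e. that it preserves area in phase space, whereas an ordinary explicit Euler step does not.

```python
import math

def flow_vector(theta, p):
    """Hamilton's equations for H = p**2/2 - cos(theta):
    dtheta/dt = dH/dp = p,   dp/dt = -dH/dtheta = -sin(theta)."""
    return p, -math.sin(theta)

def symplectic_euler(theta, p, dt):
    """Semi-implicit Euler: update p first, then theta with the new p."""
    p_new = p - dt * math.sin(theta)
    return theta + dt * p_new, p_new

def explicit_euler(theta, p, dt):
    theta_dot, p_dot = flow_vector(theta, p)
    return theta + dt * theta_dot, p + dt * p_dot

def jacobian_det(step, theta, p, dt, eps=1e-6):
    """Numerical Jacobian determinant of one step: 1 means area-preserving."""
    t_plus, p_plus = step(theta + eps, p, dt)
    t_minus, p_minus = step(theta - eps, p, dt)
    t_up, p_up = step(theta, p + eps, dt)
    t_down, p_down = step(theta, p - eps, dt)
    dth_dth, dp_dth = (t_plus - t_minus) / (2 * eps), (p_plus - p_minus) / (2 * eps)
    dth_dp, dp_dp = (t_up - t_down) / (2 * eps), (p_up - p_down) / (2 * eps)
    return dth_dth * dp_dp - dth_dp * dp_dth

theta, p, dt = 1.0, 0.5, 0.1
print(jacobian_det(symplectic_euler, theta, p, dt))  # ~1.0: area preserved
print(jacobian_det(explicit_euler, theta, p, dt))    # ~1.005: area not preserved
```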
What precisely does the symplectic form do physically? Two interrelated
things, primarily. First, it sorts out how the generalized positions q and momenta p fit together. For a complicated system such as the Galileo Jupiter
probe, with literally hundreds of configuration variables q and momentum variables p, it is crucial that the right momentum be tied to the right position if
the system’s motion is to make sense. Now recall that, mathematically, the
symplectic form locally serves to split a 2n-dimensional space into a collection of n transverse planes. So in a sense it collects the 2n independent directions (q1 , · · · , qn , p1 , · · · , pn ) of phase space into pairs: (q1 , p1 ; · · · ; qn , pn ).5
This explains, from a physical standpoint, why symplectic manifolds are evendimensional: it is because each position is paired with a corresponding momentum. [As we will see, the intertwining of the p’s and q’s also has striking
consequences in quantum mechanics.] Second, the symplectic form tells one
exactly how to convert the forces which act on the system into an assignment of
flow vectors on phase space, thereby enabling one to compute the phase flow and
hence all allowable motions of the system. Succinctly, it converts dynamic data–
about forces–into kinematic information–about the system’s motion. For more
on the “why” and “how” of symplectic geometry in physics, we refer the reader
to the books Foundations of Mechanics, by Ralph Abraham and Jerrold E.
Marsden (Benjamin-Cummings, second ed., 1978), and Symplectic Techniques
in Physics, by Victor Guillemin and Shlomo Sternberg (Cambridge University
Press, second ed., 1989).
Besides their primary duties, symplectic structures can be exploited for numerous other purposes in mechanics. They provide an efficient way to relate
symmetries (translational, rotational, “internal” or “gauge,” etc.) of physical
systems to quantities (energy-momentum, angular momentum, electric charge,
etc.) which remain constant as the systems evolve in time. Such “conservation
laws” are very helpful in the general analysis of how systems behave, especially
in nonlinear dynamics, where it is often impossible to obtain exact quantitative
results. Symplectic methods are also crucial for studying questions of stability
(e.g., could small oscillations of the Galileo probe’s antenna grow uncontrollably in time?), and have greatly enhanced our ability to model and accurately
predict the dynamical behavior of complex mechanical systems. One beautiful
application of symplectic geometry (involving many of the above ideas) is to the
age-old question of why a falling cat always lands on its feet. It turns out that
the cat achieves this by behaving as if it were a particle moving in a certain
Yang-Mills field !
Box 6: The Yang-Mills Cat
Thus the symplectic form is an essential ingredient both theoretically and
practically in classical mechanics. And recently, in a formulation due to S.
Sternberg, it has assumed an even more transcendent role. In Sternberg’s approach the forces themselves have been subsumed into the symplectic structure,
which is now all there is! We have come full circle: Symplectic geometry is
the mathematics of mechanics, and Hamiltonian mechanics is nothing but symplectic geometry on phase space. While symplectic geometry is most visible
in the context of mechanics, its use extends far beyond that small branch of
physics. Indeed, most classical systems, however complex, can be studied by
means of Hamiltonian techniques. These include models of galaxy formation,
electric circuits, and collective models of the nucleus. And if we generalize to
infinite-dimensional phase spaces, then classical fields can be studied as well. In
this way we have learned much about gravitational and other fields, about elasticity theory, about plasmas, and even about corrosion. Several close relatives
of symplectic geometry are likewise important. One is contact geometry, which
is an extension of symplectic geometry to odd-dimensional manifolds. It plays a
role in optics which is similar to that of symplectic geometry in mechanics. [It
is for this reason that mechanics and optics have had such a close and parallel
development; they are mathematical first cousins.]
In our exposition thus far, we have been concerned exclusively with classical
physics. One of the great lessons of the twentieth century, however, is that the
fundamental description of most (if not all) physical systems must be quantum
mechanical . The differences between the classical and quantum approaches to
physics are pronounced as well as profound, but it would take us too far afield
to discuss them in detail. We will therefore be content with making a few
basic observations. (A nice account of some of the more interesting aspects of
quantum theory can be found in Richard P. Feynman’s book QED (Princeton
University Press, 1985).)
The crux of the matter is that whereas classical physics is completely deterministic, quantum mechanics is inherently probabilistic. One consequence
is that while classically an observer may (in principle) simultaneously measure
physical quantities to any desired accuracy, an observer in a quantum mechanical world cannot: there are unavoidable limitations on the precision with which
certain pairs of quantities may be simultaneously measured. This is the content
of Heisenberg’s famous “uncertainty principle.” For instance, a measurement
which seeks to pinpoint simultaneously the position q and momentum p of an
electron must be uncertain in both, with the inaccuracies ∆q and ∆p satisfying
the inequality
∆q ∆p ≥ h/4π,
where Planck’s constant h ≈ 6.626 × 10⁻³⁴ joule-seconds. Since this constant
is very small, for most macroscopic systems such uncertainties are negligible.
For microscopic systems, they are not.
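A quick numerical illustration of why these uncertainties matter only at microscopic scales; the position uncertainties below are arbitrary illustrative values.

```python
import math

h = 6.626e-34  # Planck's constant, joule-seconds

def min_momentum_uncertainty(delta_q):
    """Smallest Delta p compatible with Delta q * Delta p >= h / (4*pi)."""
    return h / (4 * math.pi * delta_q)

# An electron localized to about an atomic diameter (~1e-10 m):
print(min_momentum_uncertainty(1e-10))   # ~5.3e-25 kg m/s, enormous for an electron
# A macroscopic object localized to a micrometer (1e-6 m):
print(min_momentum_uncertainty(1e-6))    # ~5.3e-29 kg m/s, utterly negligible
```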
Does symplectic geometry play a role in quantum physics comparable to
that which it plays in the classical theory? It would not seem so. For one
thing quantum mechanical states are not represented by points of the classical
phase space. Rather, the quantum state space for a physical system is an
infinite-dimensional space called Hilbert space. This Hilbert space moreover
is related to the classical configuration space (viz., the space of all positions q)
of the system and not to its phase space (viz., the space of all momenta
p as well as positions q). Thus the basic symplectic object–phase space–of
classical physics has “disappeared” in the quantum theory. However, like the
Cheshire cat’s smile, remnants of symplectic geometry are still to be found in
the quantum theory. One such remnant is apparent in the above formula for the
uncertainty principle: the symplectic pairing of a position with its corresponding
momentum. In fact, there is no uncertainty principle for quantities which are not
symplectically intertwined. Other symplectic relics are the “Bohr-Sommerfeld-Maslov quantization rules,” which explain why certain physical parameters such
as electric charge and elementary particle spin can only take on a discrete set
of values.
But symplectic geometry is intimately involved in the transition from the
classical description of a system to its quantum version, that is, quantization.
To appreciate the significance of this, let us consider how the classical and quantum descriptions of a given physical system are related. In principle, as noted
above, every physical system is quantum mechanical in nature. However, for
most (sufficiently macroscopic) systems, the quantum description has a unique
“classical limit”–a classical description which in a certain sense accurately approximates the quantum one. In practice, on the other hand, one almost always
has a better ab initio understanding of the classical limit of a system than of
its full-blown quantum formulation. [This is for two reasons: our everyday
common-sense environment is classical, so that we are used to thinking in classical terms, and because the classical approximation is much simpler than the
quantum description.] Thus physicists are more often confronted with the problem of constructing a quantum formulation of a system from a knowledge of its
classical limit than they are with recovering the classical description from the
quantum. In other words, it is occasionally necessary to “quantize” a classical system.
Unfortunately, this is not a straightforward proposition. One difficulty is
that while the classical limit of a given quantum system is unique (if it exists),
there are always many different quantum systems which have the same classical
limit. Compounding matters is a “no-go” theorem, which asserts that it is
impossible to find a quantization scheme which can be consistently applied to
every classical system.
In spite of these obstacles, efforts to systematically quantize limited assortments of systems persist. One of the most successful such schemes is known as
geometric quantization theory. Based on the work of Bertram Kostant at M.I.T.
and Jean-Marie Souriau in Marseille, geometric quantization is a beautiful application of some of the most sophisticated ideas in symplectic geometry.
The idea behind geometric quantization is to use the symplectic geometry of
the classical phase space to construct the quantum Hilbert space. The key step
in this procedure is to polarize the phase space; that is, to invariantly separate
the positions q from the momenta p. This distinction is used to cut the phase
space down to the configuration space which, as we indicated earlier, is closely
related to the quantum Hilbert space. Once the phase space has been polarized,
it is possible to generate a quantum theory of the system.
While geometric quantization is a powerful tool, it can be both difficult and
subtle. For instance, there usually is some freedom in the choice of polarization,
mirroring the fact that many quantum systems have identical classical limits.
Thus the geometric quantization procedure may produce several inequivalent
quantum theories, and it is usually necessary to resort to experiment to select
the physically correct one. At the opposite extreme, there exist symplectic
manifolds which cannot be polarized. It is not known if there are any genuine
physical systems whose phase spaces have this property; in any event, it is
not clear what to make of these “purely classical” symplectic geometries. One
area in physics where geometric quantization may play a useful role is general
relativity. The classical physics of the gravitational field is fairly well understood
in terms of Einstein’s theory. Yet, of all the fundamental interactions in nature,
gravity alone does not have a consistent quantum description. This is a major
puzzle of theoretical physics, and is why “quantum gravity” is an area of active
research. One reason the gravitational field so strongly resists quantization
is that its phase space is both infinite-dimensional and highly nonlinear. To
gain some preliminary insight into quantum gravity, a favorite stratagem is to
“freeze out” all but a finite number of these dimensions by demanding that
the gravitational field be the same everywhere in space (while still allowing it to
change in time). One thereby builds (relatively) tractable models of the Universe
and its gravitational fields known as homogeneous cosmologies. These form
handy “laboratories” for studying the quantization of gravity, and geometric
quantization has proven most useful in these “experiments.”
One of the experiments the authors have run in this laboratory concerns
the fascinating problem of gravitational singularities. These are unimaginably
dense, gravitationally violent phases through which the Universe, according to
general relativity, must pass at some point in its evolution. One such singularity–
the “Big Bang”–has almost certainly occurred, at the moment of creation. Will
there be another, final singularity–the “Big Crunch”? Observational data is
inconclusive at this stage, but if gravitational attraction is strong enough to
overcome the current expansion of the Universe, then it appears that the Universe will collapse, unrelentingly, to a fiery doom.
Box 7: The Fate of the Universe
This prediction is founded on classical general relativity. However, as gravity squeezes the Universe to submicroscopic dimensions, quantum effects should
predominate, and there has been much speculation that such effects could slow
or even halt the collapse. The authors have used geometric quantization to
investigate this possibility within the framework of homogeneous cosmologies.
Although the issue is far from settled, indications (unluckily!) are that quantum effects cannot prevent the final, catastrophic collapse of the Universe to a singularity.
In all of this discussion we have left unanswered one important question:
what is the origin of the unusual name “symplectic”? It is derived from the
Greek συμπλεκτικός, which is the antecedent of the Latin “complex.” Its
mathematical usage is due to Hermann Weyl who, in an effort to avoid a certain semantic confusion, renamed the then obscure “line complex group” the
“symplectic group.” But whatever its etymology, the adjective “symplectic”
means “plaited together” or “woven.” This is wonderfully apt, for it is this
intertwining–already evident in the above expression for the form–that most
characterizes, and is in fact the essence of, both symplectic geometry and Hamiltonian mechanics. And it is the intricate plaiting together of mathematics and
physics which gives symplectic geometry its power and its promise.
1. From the above discussion it may seem that the symplectic form is essentially the vector (or cross) product. This is a coincidence on R2 ; in higher
dimensions there is no relation between the two.
2. Technically, the Jacobi identity amounts to the 2-form Ω being closed:
dΩ = 0. In this regard we observe that the exterior differential of forms is
analogous to the Lie bracket of vector fields.
3. S 6 cannot be symplectic for cohomological reasons. But manifolds may
fail to be symplectic on other grounds. For instance, on S 4 it is not even
possible to define a nondegenerate antisymmetric bilinear form (closed or
not), so the obstruction in this case is more “algebraic.”
4. The set of symmetries of a geometrical structure on a manifold forms a
(Lie) group under composition. It is a well known fact that the isometry
group of a Riemannian metric is always finite-dimensional. But in symplectic geometry the analogous object–the symplectomorphism group–is
huge. (In fact, the Lie algebra of this group is isomorphic to the set of all
closed 1-forms on the manifold in question.)
5. This is particularly evident in the local expression for the symplectic form,
which is
\[ \Omega = \sum_{i} dq_i \wedge dp_i, \]
where ∧ is the wedge product of forms.
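A minimal sketch of how this local expression pairs each q with its p: evaluate the sum of the dq ∧ dp terms on two tangent vectors of R^{2n} whose coordinates are ordered as (q_1, ..., q_n, p_1, ..., p_n). The coordinate ordering is an illustrative convention, not one fixed by the article.

```python
def omega_2n(v, w):
    """Evaluate sum_i dq_i ^ dp_i on tangent vectors v, w in R^{2n},
    with coordinates ordered as (q_1, ..., q_n, p_1, ..., p_n)."""
    n = len(v) // 2
    return sum(v[i] * w[n + i] - v[n + i] * w[i] for i in range(n))

# n = 2 example: v points purely in the q_1 direction, w purely in p_1.
v = (1.0, 0.0, 0.0, 0.0)
w = (0.0, 0.0, 1.0, 0.0)
print(omega_2n(v, w))                      # 1.0: q_1 and p_1 are symplectically paired
print(omega_2n(v, (0.0, 1.0, 0.0, 0.0)))   # 0.0: q_1 and q_2 are not
```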
Box 1: Some History
Menaechmus advised Alexander that “There is no royal road to Geometry.” But if there is no royal road to symplectic geometry, the historical path is
surely replete with regal personages. For the roots of symplectic geometry can
be traced back to the work of many of the most outstanding names in mathematics and physics of the nineteenth and early twentieth centuries: Lagrange,
Poisson, Hamilton, Jacobi, Liouville, Hertz, Noether and Poincaré. Indeed, they
made lasting contributions to the science of mechanics, from whence symplectic geometry springs and with which it is inextricably linked. As is often the
case the mathematics owes its existence to physics, and this is especially true of
symplectic geometry; physics, in turn, has been enriched by the techniques and
insights afforded by the maturation of the mathematics it engendered.
Although symplectic geometry was really “there” from the beginning, the
geometric nature of mechanics was obscured by an early emphasis on the analytical and computational aspects of the theory. This mindset was so overwhelming
that Lagrange, in the preface to his monumental Mécanique Analytique [1788],
could boast:
The reader will find no figures in this work. The methods which I set
forth do not require either constructions or geometrical or mechanical
reasonings: but only algebraic operations, subject to a regular and
uniform rule of procedure.
It was not until more than a hundred years later, well after the groundbreaking mathematical work of Riemann in 1854, that one finds Darboux in
1889 and Hertz in 1899 employing geometrical ideas in mechanics, treating the
motion of any system, no matter how complicated, as that of a “particle” moving
in a certain higher-dimensional curved space. The death knell of the analytical
era was finally rung by the great French mathematician Henri Poincaré in 1889,
when he realized that purely quantitative techniques could not suffice to solve
various problems in celestial mechanics, notably that of stability in the “n-body problem” (for instance, the motion of the planets subject to their mutual
gravitational attractions).
This famous failure ushered in the “qualitative period” in mechanics, and
geometry at long last got its due. The first theorem in symplectic geometry
as such was Poincaré’s “last geometric theorem” of 1912 which, among other
things, predicts the existence of periodic orbits in the n-body problem. Still,
the geometrization of mechanics proceeded slowly, and by and large the symplectic aspects remained shadowy. Even as late as the 1940s, one finds physicists
utilizing symplectic ideas rather cautiously and superficially. (A contemporary
account of mechanics is given by Cornelius Lanczos in The Variational Principles
of Mechanics (University of Toronto Press, 1949).)
Meanwhile, mathematicians were rediscovering symplectic geometry from
entirely different angles, with the researches of Sophus Lie, Poincaré and Élie
Cartan paving the way. But symplectic geometry, as a distinct mathematical
discipline, did not really appear until the 1940s with the (little-known) work
of Hwa-Chung Lee in China. Within ten years, one finds French mathematicians such as Charles Ehresmann, André Lichnerowicz and Georges Reeb setting
the stage for future developments and later applications to mechanics. By the
mid 1960s, symplectic geometry was cast into the modern mathematical idiom,
and the symplectic compact between geometry and mechanics was irreversibly
sealed. A period of explosive growth, fueled primarily by emergent schools of
American and Russian symplectic geometers, followed and continues unabated
to this day.
Box 2: Geometry on Manifolds
The familiar mensuration formulas of Euclidean and symplectic geometry
will change when the flat plane R2 is replaced by a more general “warped”
space, such as a sphere or a saddle. On both the (positively curved) sphere and
the (negatively curved) saddle we draw a disk of radius r as illustrated below.
Provided r is relatively small, the circumference C of the disk is given by
\[ C = 2\pi\left(r - \tfrac{1}{6}\kappa r^3 + \cdots\right), \]
where κ is a constant which is +1 for the sphere, 0 for the plane and −1 for the
saddle. Only when κ = 0 do we recover the “standard” formula C = 2πr,
showing that the Riemannian geometries of the sphere and saddle are genuinely non-Euclidean.
Similarly, the symplectic geometries of the sphere and saddle are distinct
from that of the plane. For if we compute the area A of the disk, we find
\[ A = \pi\left(r^2 - \tfrac{1}{12}\kappa r^4 + \cdots\right), \]
which deviates from the familiar result A = πr² when κ ≠ 0.
Note that on the sphere, the circumference and area of the disk are smaller
than those of a disk of radius r in the plane. This can be most easily seen by
cutting the disk out of the sphere and flattening it onto the plane; it would split
apart as shown below left. If one does the same for the disk on the saddle, it
would fold over on itself as shown below right, which explains why the values of
C and A are greater on the saddle than the plane.
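A hedged numerical check of these expansions, truncated after the terms shown; the radius r = 0.3 is an arbitrary small value chosen for illustration.

```python
import math

def circumference(r, kappa):
    """C = 2*pi*(r - kappa*r**3/6 + ...), truncated after the r**3 term."""
    return 2 * math.pi * (r - kappa * r**3 / 6)

def area(r, kappa):
    """A = pi*(r**2 - kappa*r**4/12 + ...), truncated after the r**4 term."""
    return math.pi * (r**2 - kappa * r**4 / 12)

r = 0.3
for kappa, name in [(1, "sphere"), (0, "plane"), (-1, "saddle")]:
    print(name, round(circumference(r, kappa), 4), round(area(r, kappa), 4))
# The sphere's values come out smaller than the plane's and the saddle's larger,
# matching the cut-and-flatten picture described above.
```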
Box 3: Symplectic Geometries on R4
To get a feeling for the nature of the exotic symplectic geometry on R4 ,
it is useful to contrast it with the standard one. Since we cannot visualize
4-dimensional objects, we restrict attention to the unit 3-dimensional sphere
centered at the origin of R4 . When restricted to this sphere, both the exotic
and standard symplectic forms define symplectic planes at each point. Some
of these planes are plotted in the photographs here. Notice how twisted the
pattern of planes for the exotic symplectic structure is (above) as compared
to the standard one (below). (Computer graphics courtesy of Larry Bates and
Charles Herr, Universities of Calgary and Alberta.)
Box 4: Cartography
A map attempts to represent the surface of a sphere on a plane. Every
map distorts shapes (Riemannian geometry) to some extent, even the currently
accepted “best” map, the Robinson projection of 1963 (left). On the other
hand, one can draw maps with no size distortion (symplectic geometry). One
such map is the Mollweide elliptical equal-area projection of 1805 (right). This
phenomenon reflects the “flatness” of symplectic manifolds.
Box 5: The Phase Flow for the Planar Pendulum
A planar pendulum is a mass at the end of a light rod which is free to swing
in a given plane. All possible positions of the bob are parametrized by the angle
θ, with −180◦ ≤ θ ≤ 180◦ . Its corresponding angular momentum pθ can take
on any value. The phase space for the planar pendulum is therefore a circular
cylinder, with θ running counterclockwise around the cylinder and pθ running
along its axis. Since 180◦ and −180◦ represent the same position (“up”), these
two configurations must be identified; hence the circle.
The forces acting on the bob are gravity and the tension in the rod. The pattern of force vectors and resulting dynamical trajectories in the phase space are
drawn above. For clarity, we have “unwrapped” the phase space into a rectangular strip; the two vertical lines corresponding to θ = ±180◦ must be identified
with one another. The point in the center represents a stable equilibrium, with
the pendulum motionless and hanging straight down. The ellipses surrounding
this point correspond to rocking motions of the pendulum. These increase in
amplitude until the bob “just” reaches the top. There we have an unstable
equilibrium (located at θ = ±180◦ , pθ = 0), with the pendulum stationary
and pointing straight up. The remaining wavy trajectories represent motions
wherein the bob swings entirely around the hinge going either clockwise (top)
or counterclockwise (bottom).
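The three kinds of trajectory described above can be told apart by the bob's conserved energy. Here is a small sketch using unit mass, length and gravity (illustrative assumptions), so that the separatrix between rocking motions and full revolutions sits at the energy of the motionless inverted pendulum.

```python
import math

def energy(theta, p_theta):
    """Total energy of a unit pendulum: kinetic p**2/2 plus potential -cos(theta)."""
    return 0.5 * p_theta**2 - math.cos(theta)

E_SEPARATRIX = energy(math.pi, 0.0)  # = 1.0, bob motionless at the top

def kind_of_motion(theta, p_theta):
    e = energy(theta, p_theta)
    if e < E_SEPARATRIX:
        return "rocking (closed, ellipse-like orbit)"
    if e > E_SEPARATRIX:
        return "full revolutions (wavy trajectory)"
    return "on the separatrix (asymptotically approaches the top)"

print(kind_of_motion(0.5, 0.0))   # small swing: rocking
print(kind_of_motion(0.0, 2.5))   # fast kick at the bottom: full revolutions
```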
Box 6: The Yang-Mills Cat
The falling cat can be modeled mechanically by two cylinders connected
with a ball-and-socket joint. In control theory terms, the problem is to maneuver this system, by rotating the cylinders both about their axes and relative
to one another, to achieve a desired orientation in an optimal way. Symplectic
geometry provides a succinct description of this problem and greatly facilitates
its solution. In effect, the cat lands on its feet by solving the Yang-Mills equations! Contrary to an old wives’ tale, the cat’s tail is irrelevant; Manx cats land
on their feet too. (This last assertion is rumored to have been experimentally verified.)
[Figure for Box 7: axis label “rayon de l’univers” (radius of the Universe).]
Box 7: The Fate of the Universe
[Figure labels: “Big Bang”, “Big Crunch”, “Quantum Bounce”.]
Einstein’s classical theory of gravity predicts that the Universe has evolved
from an initial singularity–the Big Bang. But what will become of the Universe is not yet known; the standard scenarios (for homogeneous and isotropic
models) are sketched above left. The Universe is currently expanding and its
fate depends on how quickly gravitational attraction slows this expansion. This
in turn depends crucially on the average mass density ρ of the Universe. The
critical value of ρ is ρc ≈ 10⁻³⁰ gm/cm³. Models with ρ = ρc will just escape
gravitational collapse (the middle curve). In models with ρ > ρc (such as the
third curve), gravitational attraction overwhelms the expansion and the model
collapses to a final singularity–the Big Crunch. Observationally ρ ≈ 10⁻³¹ gm/cm³, but
the issue is far from settled.
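In these homogeneous, isotropic models the classical fate of the Universe reduces to a one-line comparison; here is a minimal sketch using the round numbers quoted above, with no claim about the actual observational situation.

```python
RHO_CRITICAL = 1e-30   # gm/cm^3, approximate critical density quoted above
RHO_OBSERVED = 1e-31   # gm/cm^3, the (very uncertain) observed value quoted above

def classical_fate(rho, rho_c=RHO_CRITICAL):
    """Fate of the model universe in the purely classical scenarios sketched above."""
    if rho > rho_c:
        return "recollapse to a Big Crunch"
    if rho == rho_c:
        return "just barely escapes collapse"
    return "expands forever"

print(classical_fate(RHO_OBSERVED))   # "expands forever" on these round numbers,
                                      # though the observations are far from settled
```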
Cosmologists have wondered whether quantum effects, which are expected
to be of primary importance when the Universe has been crushed to subatomic
size (the shaded region in the figure on the right), might prevent the ultimate
collapse. Conceivably the Universe could “bounce” into a new expansion phase. |
f20b1da82219fe09 | Inverse Problems & Imaging
November 2014, Volume 8, Issue 4
Special issue on complex geometrical optics (CGO) solutions
Sarah Hamilton, Kim Knudsen, Samuli Siltanen and Gunther Uhlmann
2014, 8(4): i-ii doi: 10.3934/ipi.2014.8.4i
Complex Geometrical Optics (CGO) solutions have, for almost three decades, played a large role in the rigorous analysis of nonlinear inverse problems. They have the added bonus of also being useful in practical reconstruction algorithms. The main benefit of CGO solutions is to provide solutions in the form of almost-exponential functions that can be used in a variety of ways, for example for defining tailor-made nonlinear Fourier transforms to study the unique solvability of a nonlinear inverse problem.
Stability of the Calderón problem in admissible geometries
Pedro Caro and Mikko Salo
2014, 8(4): 939-957 doi: 10.3934/ipi.2014.8.939
In this paper we prove log log type stability estimates for inverse boundary value problems on admissible Riemannian manifolds of dimension $n \geq 3$. The stability estimates correspond to the uniqueness results in [13]. These inverse problems arise naturally when studying the anisotropic Calderón problem.
Partial data for the Neumann-Dirichlet magnetic Schrödinger inverse problem
Francis J. Chung
2014, 8(4): 959-989 doi: 10.3934/ipi.2014.8.959
We show that an electric potential and magnetic field can be uniquely determined by partial boundary measurements of the Neumann-to-Dirichlet map of the associated magnetic Schrödinger operator. This improves upon the results in [4] by including the determination of a magnetic field. The main technical advance is an improvement on the Carleman estimate in [4]. This allows the construction of complex geometrical optics solutions with greater regularity, which are needed to deal with the first order term in the operator. This improved regularity of CGO solutions may have applications in the study of inverse problems in systems of equations with partial boundary data.
Numerical nonlinear complex geometrical optics algorithm for the 3D Calderón problem
Fabrice Delbary and Kim Knudsen
2014, 8(4): 991-1012 doi: 10.3934/ipi.2014.8.991
The Calderón problem is the mathematical formulation of the inverse problem in Electrical Impedance Tomography and asks for the uniqueness and reconstruction of an electrical conductivity distribution in a bounded domain from the knowledge of the Dirichlet-to-Neumann map associated to the generalized Laplace equation. The 3D problem was solved in theory in late 1980s using complex geometrical optics solutions and a scattering transform. Several approximations to the reconstruction method have been suggested and implemented numerically in the literature, but here, for the first time, a complete computer implementation of the full nonlinear algorithm is given. First a boundary integral equation is solved by a Nyström method for the traces of the complex geometrical optics solutions, second the scattering transform is computed and inverted using fast Fourier transform, and finally a boundary value problem is solved for the conductivity distribution. To test the performance of the algorithm highly accurate data is required, and to this end a boundary element method is developed and implemented for the forward problem. The numerical reconstruction algorithm is tested on simulated data and compared to the simpler approximations. In addition, convergence of the numerical solution towards the exact solution of the boundary integral equation is proved.
A real-time D-bar algorithm for 2-D electrical impedance tomography data
Melody Dodd and Jennifer L. Mueller
2014, 8(4): 1013-1031 doi: 10.3934/ipi.2014.8.1013
The aim of this paper is to show the feasibility of the D-bar method for real-time 2-D EIT reconstructions. A fast implementation of the D-bar method for reconstructing conductivity changes on a 2-D chest-shaped domain is described. Cross-sectional difference images from the chest of a healthy human subject are presented, demonstrating what can be achieved in real time. The images constitute the first D-bar images from EIT data on a human subject collected on a pairwise current injection system.
Reconstruction of complex-valued tensors in the Maxwell system from knowledge of internal magnetic fields
Chenxi Guo and Guillaume Bal
2014, 8(4): 1033-1051 doi: 10.3934/ipi.2014.8.1033
This paper concerns the reconstruction of a complex-valued anisotropic tensor $\gamma = \sigma + \iota\omega\varepsilon$ from knowledge of several internal magnetic fields $H$, where $H$ satisfies the anisotropic Maxwell system on a bounded domain with prescribed boundary conditions. We show that $\gamma$ can be uniquely reconstructed with a loss of two derivatives from errors in the acquisition of $H$. A minimum number of $6$ such functionals is sufficient to obtain a local reconstruction of $\gamma$ in dimension three provided that the electric field satisfies appropriate boundary conditions. When $\gamma$ is close to a scalar tensor, such boundary conditions are shown to exist using the notion of complex geometric optics (CGO) solutions. For arbitrary symmetric tensors $\gamma$, a Runge approximation property is used instead to obtain partial results. This problem finds applications in the medical imaging modalities Current Density Imaging and Magnetic Resonance Electrical Impedance Tomography.
A data-driven edge-preserving D-bar method for electrical impedance tomography
Sarah Jane Hamilton, Andreas Hauptmann and Samuli Siltanen
2014, 8(4): 1053-1072 doi: 10.3934/ipi.2014.8.1053
In Electrical Impedance Tomography (EIT), the internal conductivity of a body is recovered via current and voltage measurements taken at its surface. The reconstruction task is a highly ill-posed nonlinear inverse problem, which is very sensitive to noise, and requires the use of regularized solution methods, of which D-bar is the only proven method. The resulting EIT images have low spatial resolution due to smoothing caused by low-pass filtered regularization. In many applications, such as medical imaging, it is known a priori that the target contains sharp features such as organ boundaries, as well as approximate ranges for realistic conductivity values. In this paper, we use this information in a new edge-preserving EIT algorithm, based on the original D-bar method coupled with a deblurring flow stopped at a minimal data discrepancy. The method makes heavy use of a novel data fidelity term based on the so-called CGO sinogram. This nonlinear data step provides superior robustness over traditional EIT data formats such as current-to-voltage matrices or Dirichlet-to-Neumann operators, for commonly used current patterns.
An inverse problem for a three-dimensional heat equation in thermal imaging and the enclosure method
Masaru Ikehata and Mishio Kawashita
2014, 8(4): 1073-1116 doi: 10.3934/ipi.2014.8.1073
This paper studies a prototype of inverse initial boundary value problems whose governing equation is the heat equation in three dimensions. An unknown discontinuity embedded in a three-dimensional heat conductive body is considered. A single set of the temperature and heat flux on the lateral boundary for a fixed observation time is given as an observation datum. It is shown that this datum yields the minimum length of broken paths that start at a given point outside the body, go to a point on the boundary of the unknown discontinuity and return to a point on the boundary of the body under some conditions on the input heat flux, the unknown discontinuity and the body. This is new information obtained by using enclosure method.
Calderón problem for Maxwell's equations in cylindrical domain
Oleg Yu. Imanuvilov and Masahiro Yamamoto
2014, 8(4): 1117-1137 doi: 10.3934/ipi.2014.8.1117
We prove some uniqueness results in determination of the conductivity, the permeability and the permittivity of Maxwell's equations in a cylindrical domain $\Omega \times (0,L)$ from partial boundary map. More specifically, for an arbitrarily given subboundary $\Gamma_0 \subset \partial\Omega$, we prove that the coefficients of Maxwell's equations can be uniquely determined in the subdomain $(\Omega \setminus [\text{the convex hull of } \Gamma_0]) \times (0,L)$ by the boundary map only for inputs vanishing on $\Gamma_0 \times (0,L)$.
Increasing stability for determining the potential in the Schrödinger equation with attenuation from the Dirichlet-to-Neumann map
Victor Isakov and Jenn-Nan Wang
2014, 8(4): 1139-1150 doi: 10.3934/ipi.2014.8.1139
We derive some bounds which can be viewed as evidence of increasing stability in the problem of recovering the potential coefficient in the Schrödinger equation from the Dirichlet-to-Neumann map in the presence of attenuation, as the energy level/frequency grows. These bounds hold under certain a priori regularity constraints on the unknown coefficient. The proofs use complex and bounded complex geometrical optics solutions.
The nonlinear Fourier transform for two-dimensional subcritical potentials
Michael Music
2014, 8(4): 1151-1167 doi: 10.3934/ipi.2014.8.1151
The inverse scattering method for the Novikov-Veselov equation is studied for a larger class of Schrödinger potentials than could be handled previously. Previous work concerns so-called conductivity type potentials, which have a bounded positive solution at zero energy and are a nowhere dense set of potentials. We relax the conductivity type assumption to include logarithmically growing positive solutions at zero energy. These potentials are stable under perturbations. Assuming only that the potential is subcritical and has two weak derivatives in a weighted Sobolev space, we prove that the associated scattering transform can be inverted, and the original potential is recovered from the scattering data.
An inverse problem for the magnetic Schrödinger operator on a half space with partial data
Valter Pohjola
2014, 8(4): 1169-1189 doi: 10.3934/ipi.2014.8.1169
In this paper we prove uniqueness for an inverse boundary value problem for the magnetic Schrödinger equation in a half space, with partial data. We prove that the curl of the magnetic potential $A$, when $A\in W_{comp}^{1,\infty}(\overline{\mathbb{R}_{-}^3},\mathbb{R}^3)$, and the electric potential $q \in L_{comp}^{\infty}(\overline{\mathbb{R}_{-}^3},\mathbb{C})$ are uniquely determined by the knowledge of the Dirichlet-to-Neumann map on parts of the boundary of the half space.
26ec28e5df8924d3 | David Albert
A Quantum Threat to Special Relativity
By David Z. Albert and Rivka Galchen
Scientific American, March 2009
Edited by Andy Ross
Our intuition is that things can only directly affect other things that are right next to them. We term this intuition locality.
Before quantum mechanics, we believed that a complete description of the physical world could be expressed as the sum of the stories of its smallest and most elementary physical constituents. Quantum mechanics violates this belief.
Real, measurable, physical features of collections of particles can exceed or elude or have nothing to do with the sum of the features of the individual particles. Particles related in this fashion are quantum mechanically entangled with one another. Entanglement may connect particles irrespective of where they are and what they are. Entanglement appears to entail nonlocality. And nonlocality threatens special relativity.
In 1935, Albert Einstein and his colleagues Boris Podolsky and Nathan Rosen presented what is now known as the EPR argument. Suppose that we measure the position of a particle that is entangled with a second particle so that neither individually has a precise position. When we learn the outcome of the measurement, we change our description of the first particle. Entanglement allows us to alter our description of the second particle, instantaneously, no matter how far away it may be or what may lie between the two particles.
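A minimal illustration in modern notation (a standard textbook example, not taken from the EPR paper itself): for two particles prepared in the entangled state
\[ |\Psi\rangle = \tfrac{1}{\sqrt{2}}\left(|a\rangle_1 |b\rangle_2 + |b\rangle_1 |a\rangle_2\right), \]
where \(|a\rangle\) and \(|b\rangle\) are far-apart position states, neither particle individually has a definite position; yet finding particle 1 at \(a\) immediately changes our description of particle 2 to "located at \(b\)", no matter how distant it is.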
Einstein, Podolsky and Rosen took it for granted that the apparent nonlocality of quantum mechanics must be some kind of anomaly or infelicity. They argued that if locality prevails in the world and if the experimental predictions of quantum mechanics are correct, then quantum mechanics must leave aspects of the world out of its account.
In 1964, John S. Bell reasoned that if any local algorithm existed that made the same predictions for the outcomes of experiments as the quantum mechanical algorithm does, then the EPR argument would justify dismissing the nonlocalities in quantum mechanics as mere artifacts of the formalism. Conversely, if no algorithm could avoid nonlocalities, then they must be genuine physical phenomena. Bell analyzed a specific entanglement scenario and concluded that no such local algorithm was mathematically possible. The world is nonlocal.
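The quantitative content of Bell's conclusion is usually quoted in the later CHSH form (a standard variant of the 1964 argument, stated here for concreteness): for measurement settings \(a, a'\) on one particle and \(b, b'\) on the other, any local algorithm constrains the correlations to satisfy
\[ |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2, \]
whereas quantum mechanics predicts, and experiment confirms, values up to \(2\sqrt{2}\approx 2.83\) for suitably chosen settings on an entangled pair.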
Bell had shown that locality was incompatible not merely with the abstract theoretical apparatus of quantum mechanics but with certain of its empirical predictions as well. Since then, experimenters have left no doubt that those predictions are indeed correct. The bad news is not for quantum mechanics but for the principle of locality — and for special relativity, which appears to rely on a presumption of locality.
Special relativity is bound up with the impossibility of transmitting messages faster than the speed of light. If special relativity is true, no material carrier of a message can be accelerated from rest to speeds greater than that of light. A message transmitted faster than light would, according to some clocks, be a message that arrived before it was sent, potentially unleashing all the paradoxes of time travel.
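A short worked example of the clock claim (standard special relativity, added for concreteness): let a message be sent from \(x=0\) at \(t=0\) and received at \(x=L\) at \(t=L/2c\), i.e. at twice the speed of light. An observer moving at speed \(v\) along the same direction assigns the reception the time
\[ t'_{\mathrm{recv}} = \gamma\!\left(\frac{L}{2c} - \frac{vL}{c^{2}}\right) = \frac{\gamma L}{c}\left(\frac{1}{2} - \frac{v}{c}\right), \]
which is negative whenever \(v > c/2\): for that observer the message arrives before it is sent.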
In 1932, John von Neumann proved that the nonlocality of quantum mechanics can never be used to transmit messages instantaneously. The proof seemed to affirm that quantum-mechanical nonlocality and special relativity can coexist.
In 1994, Tim Maudlin published a rigorous discussion of quantum nonlocality and relativity. By then, a number of specific proposals existed to account for apparent nonlocality. These proposals included the Bohmian mechanics of David Bohm and the GRW model of GianCarlo Ghirardi, Alberto Rimini and Tullio Weber.
Maudlin pointed out that the special theory of relativity is a claim about the geometric structure of space and time. The impossibility of transmitting mass or energy or information or causal influences faster than light does not show that quantum mechanical nonlocality and special relativity can coexist. Indeed, special relativity is compatible with a variety of hypothetical mechanisms for faster-than-light transmission of mass and energy and information and causal influence.
However, the nonlocal interaction between particles in quantum mechanics depends only on whether the particles are entangled with each other. This seems to call for absolute simultaneity, which would pose a threat to special relativity.
In 2006, Roderich Tumulka showed how all the empirical predictions of quantum mechanics for entangled pairs of particles can be reproduced by a modification of the GRW theory. The modification is nonlocal, and yet it is compatible with the spacetime geometry of special relativity.
Tumulka's theory introduces a new variety of nonlocality into the laws of nature — nonlocality in time. To use his theory to determine the probabilities of what happens next, one must plug in not only the world's current complete physical state but also certain facts about the past. In this way, nonlocality can coexist with special relativity.
So it turns out that the combination of quantum mechanics and special relativity contradicts a primordial intuition. We believe that everything there is to say about the world can in principle be put into the form of a narrative sequence of propositions about spatial configurations of the world at specific times. But entanglement and special relativity together imply that the physical history of the world is far too rich for that.
Special relativity mixes up space and time to transform entanglements between systems that are spatially separated into entanglements between their states at different times. Entanglement and nonlocality are implied by the wave function that Erwin Schrödinger introduced to define quantum states.
Quantum mechanical wave functions are represented mathematically in a vast configuration space. If the quantum mechanical waves are real physical objects, then perhaps the history of the world unfolds not in the 3D space of our everyday experience or in the 4D spacetime of special relativity but rather in the infinite-dimensional configuration space. Our 3D world and the idea of locality would need to be understood as emergent.
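As a concrete illustration (standard quantum mechanics, not specific to the article): already for two particles the wave function is a single function \(\psi(\mathbf{x}_1,\mathbf{x}_2,t)\) on six-dimensional configuration space, and the entangled states are exactly those that cannot be factorized,
\[ \psi(\mathbf{x}_1,\mathbf{x}_2,t) \neq \psi_1(\mathbf{x}_1,t)\,\psi_2(\mathbf{x}_2,t), \]
so no pair of separate three-dimensional waves carries the same information.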
If temporal nonlocality is a problem, the status of special relativity is open to question.
A Relativistic Version of the Ghirardi-Rimini-Weber Model
Roderich Tumulka, 2006
Carrying out a research program outlined by John S. Bell in 1987, we arrive at a relativistic version of the Ghirardi-Rimini-Weber (GRW) model of spontaneous wavefunction collapse. The GRW model was proposed as a solution of the measurement problem of quantum mechanics and involves a stochastic and nonlinear modification of the Schrödinger equation. It deviates very little from the Schrödinger equation for microscopic systems but efficiently suppresses, for macroscopic systems, superpositions of macroscopically different states.
As suggested by Bell, we take the primitive ontology, or local beables, of our model to be a discrete set of space-time points, at which the collapses are centered. This set is random with distribution determined by the initial wavefunction. Our model is nonlocal and violates Bell's inequality though it does not make use of a preferred slicing of space-time or any other sort of synchronization of spacelike separated points. Like the GRW model, it reproduces the quantum probabilities in all cases presently testable, though it entails deviations from the quantum formalism that are in principle testable. Our model works in Minkowski space-time as well as in (well-behaved) curved background space-times.
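For orientation, the original non-relativistic GRW prescription (quoted here from the standard literature rather than from Tumulka's paper) lets each particle \(i\) be "hit" at Poisson-distributed random times, with rate \(\lambda \approx 10^{-16}\,\mathrm{s}^{-1}\) per particle, by a Gaussian localization of width \(a \approx 10^{-7}\,\mathrm{m}\) centred at a random point \(x\):
\[ \psi \;\longrightarrow\; \frac{L_i(x)\,\psi}{\lVert L_i(x)\,\psi\rVert}, \qquad L_i(x) = (\pi a^{2})^{-3/4}\, \exp\!\left(-\frac{(\hat{\mathbf{q}}_i - x)^{2}}{2a^{2}}\right). \]
A single particle is then essentially never hit, while a macroscopic superposition involving roughly \(10^{23}\) particles is suppressed within about \(10^{-7}\) seconds, which is the sense in which the model deviates very little microscopically yet collapses macroscopic superpositions.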
AR I know David Albert. I read his two books years ago and watched him lecture at T2K and T2K2. His support for David Bohm's version of quantum mechanics (where particles are like little spaceships guided by pilot waves defined by the Schrödinger wavefunction) always struck me as unfortunate — which ruined his first book for me (quite apart from its studied avoidance of complex numbers where we all agree they help).
Now I see that Roderich Tumulka also supports Bohmian mechanics (BM) (as shown by his slides for the Perimeter Institute meeting on time in quantum mechanics held in September 2008), so I guess we should not dismiss BM yet. My problem with it, for the record, is the deeply mysterious nature of the instantaneous guidance provided by information in the pilot waves. Bohm's "implicate order" seems as bad as Bohr's mysticism to me.
Some years ago I was enamored of the GRW approach to solving Schrödinger's cat problem, despite the fact that it seemed somewhat "unromantic" (in John Bell's sense). Still, I was disturbed by its unrelativistic aspect. Now Tumulka has rescued GRW from that problem and made it a serious (and potentially testable) candidate theory.
As for nonlocality, I fear we're stuck with it. Down at the quantum level everything is entangled with everything else in a nasty knot that we can only approach statistically — this seems likely to be a fundamental limitation on human knowledge. Temporal nonlocality adds no new problem of principle. I think Albert may be exaggerating the threat here.
Temporal locality is an issue I have reflected upon for twenty years, following a fine essay by Michael Dummett on the problem it seems to raise of retroactive causation. This problem was both brilliantly visualized and neatly resolved in Robert Zemeckis's Back to the Future movies, in effect by taking the Everett-Deutsch approach of invoking multiple parallel worlds (see my Mindworlds slides).
f5ed4c03fccfd89d | Saturday, 31 December 2011
Quantum-informational medicine from Belgrade
I got a letter from Raković with greetings for the New Year. It contained an interesting link. Abstracts in English.
Belgrade, 23-25 September 2011
EDITORS' NOTE, pp. 12-14, D. Raković, M. Mićović, S. Arandjelović
Contemporary medicine has put its emphasis on "allopathically dosed", non-economic, highly pharmaceutically oriented medical technologies. By contrast, in recent years more attention has been paid to bioadequate, "homeopathically dosed", economic, bioresonant quantum-informational medical technologies, which employ field energies of the magnitudes that appear in the normal functioning of the human organism [1-4]. Along these lines, contemporary investigations of psychosomatic diseases imply the need for holistic methods, oriented towards healing the person as a whole rather than the disease as a symptom of a disorder of the whole, suggesting their macroscopic quantum origin [3,4].
At the focus of these quantum-holistic methods are the body's acupuncture system and consciousness, which are taken to have the quantum-informational structure of a quantum-holographic Hopfield-like associative neural network, within the Feynman propagator version of the Schrödinger equation [5], with very significant quantum-holographic psychosomatic implications [3]. In this context, it should be noted that the Resonant Recognition Model (RRM) of biomolecular recognition implies that on the biomolecular level information processing goes on in the inverse space of Fourier spectra of the primary sequences of biomolecules [6], similarly to the (quantum) holographic idea that cognitive information processing goes on in the inverse space of Fourier spectra of the perceptive stimuli [7], thus supporting the idea of quantum-holographic fractal coupling of various hierarchical levels in biological species.
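For readers meeting the term here, the classical Hopfield associative network being invoked is a concrete, well-defined object. The following minimal Python sketch (a standard textbook Hopfield memory, not an implementation of the quantum-holographic model described above; all names are illustrative) shows the attractor behaviour the text appeals to: stored patterns become fixed points into which corrupted inputs relax.

import numpy as np

def train_hopfield(patterns):
    # Hebbian rule: each stored +/-1 pattern becomes an attractor of the network.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / len(patterns)

def recall(W, state, steps=20):
    # Asynchronous sign updates; the state slides into the nearest stored attractor.
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))            # three random 64-unit patterns
W = train_hopfield(patterns)
noisy = patterns[0].copy()
noisy[rng.choice(64, size=10, replace=False)] *= -1     # corrupt 10 of the 64 units
recovered = recall(W, noisy)
print("units recovered correctly:", int((recovered == patterns[0]).sum()), "of 64")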
In the context of quantum-informational bioresonant therapies [3,4,8,9], their goal would be a resonant re-emission of the corresponding band of the electromagnetic (EM) spectrum of microwave/ultralow-frequency-modulated radiation of the treated, psychosomatically disordered (palpatorily painful) state (one of hundreds of possible disordered states), so that its initial memory attractor is resonantly excited and becomes shallower and wider at the expense of the deepening of the (energetically dominant) attractor of the healthy acupuncture (palpatorily painless) state; this is then quantum-holographically projected onto the lower EM quantum-holographic cellular level, thus changing the expression of genes [3]. In this context, homeopathy might also be categorized among quantum-informational bioresonant therapies, since a (non)dissolved homeopathic initial substance [10-12] with characteristic EM memory-attractor states (like any other substance, as demonstrated by the muscle test of Applied Kinesiology [13]) can interact with the macroscopic quantum-sensory EM level of the acupuncture system/consciousness and imprint therein its program of homeopathic correction.
On the other hand, in the context of quantum-informational meridian (psycho)therapies [3,14], the simultaneous visualization of the treated (psychologically traumatic) problem and tapping/touching of selected acupuncture points imposes new boundary conditions in the energy-state space of the acupuncture system/consciousness, and the memory attractor of the initial psychosomatic disorder becomes shallower and wider, with greater overlap and subsequent associative integration into the memory attractor of the normal (energetically dominant) ego-state. In this context, techniques of energy healing of the acupuncture system/consciousness [15], positively visualizing meditation healing [16], and various psychotherapeutic techniques for the recognition/integration of psychological conflicts and for personality growth [17] might also be categorized as quantum-informational therapies, via introspective emotional/traumatic excitation with newly imposed (psychologically healing) boundary conditions in the energy-state space of the acupuncture system/consciousness.
In the same context it is important to carry out half-yearly diagnostics and balancing of the acupuncture system, whose imbalance presumably originates from the patient's restituted mental loads from his non-reprogrammed mental transpersonal environment of the quantum-holographic collective consciousness [3] (as supported by Tibetan pulse diagnostics, which enables precise diagnosis of psychosomatic disorders not only of the patient himself but also of his family members and enemies [18]).
This implies that the memory attractors of the quantum-holographic network of collective consciousness might be treated as collective psychosomatic disorders, representing generalized field-related quantum-holistic records (including interpersonal, ultimately reprogrammable, nonlocal loads, addressed via hesychastic prayer or circular psychotherapies from all relevant meta-positions included in the problem [3]), which might form the basis of a quantum-informational medicine of collective consciousness.
So, within this quantum-holographic context, it might be said that three front lines of integrative psychosomatic medicine exist [3]: (1) spirituality and circular (psycho)therapies from all relevant meta-positions, with the possibility of permanently erasing mutual memory attractors on the level of collective consciousness; (2) Eastern (quantum-)holistic medicine and non-circular (psycho)therapies, whose efforts temporarily erase memory attractors on the level of the acupuncture system/individual consciousness and prevent or alleviate their somatization, itself a consequence of indolence on the first level; (3) Western symptomatic medicine, whose activities on the somatic level, via immunology, pharmacology, biomedical diagnostics and surgery, hinder or soothe the somatized consequences of carelessness on the first two levels. It should be stressed that necessary activities on the second and third levels, combined with neglect of the first level, have the consequence of further transferring memory attractors to the level of individual and collective consciousness in this and later generations, thus accumulating quantum-holographic loads which afterwards cause not only illnesses but also interpersonal fights, wars, and other troubles.
The Symposium "Quantum-Informational Medicine QIM 2011: Acupuncture-Based & Consciousness-Based Holistic Approaches & Techniques" provides a fundamental quantum-informational framework for better understanding the nature of psychosomatic diseases as well as the limitations of healing methods, which might help in developing strategies for psychosomatic integrative medicine in the 21st century.
The Editors are indebted to their institutions and numerous sponsors, as well as to the members of the QIM 2011 Program Committee, Organizing Committee, and Secretariat, for logistic support of QIM 2011. We also kindly acknowledge the QIM 2011 Symposium plenary and oral speakers and the QIM 2011 Workshop teachers for their reports on pioneering research and enthusiastic work in the emerging field of acupuncture-based & consciousness-based holistic medicine, and all invited participants of the QIM 2011 Round Table Knowledge Federation Dialog Belgrade 2011 for their critical consideration of partial and holistically oriented approaches to physics and engineering, medicine and biology, psychology and transpersonal phenomena, art and philosophy, society and religion.
1. http://www.issseem.org; V. N. Volchenko, Inevitability, reality and possibility of reaching subtle world, Consciousness and Physical Reality, Nos. 1-2 (1996), in Russian.
2. V. Stambolović (ed), Alternative Approaches to Health Improvement, ALCD, Belgrade, 2003, in Serbian.
3. www.dejanrakovicfund.org; D. Raković, Holistic quantum-holographic framework for psychosomatics, Med Data Rev, 3(2) (2011) 211-214, Invited paper; D. Raković, Integrative Biophysics, Quantum Medicine, and Quantum-Holographic Informatics: Psychosomatic-Cognitive Implications, IASC & IEPSP, Belgrade, 2009; D. Raković, A. Škokljev, D. Djordjević, Introduction to Quantum-Informational Medicine, With Basics of Quantum-Holographic Psychosomatics, Acupuncturology and Reflexotherapy, ECPD, Belgrade, 2009, in Serbian; D. Raković, Fundamentals of Biophysics, 3rd ed, IASC & IEFPG, Belgrade, 2008, in Serbian.
4. Group of authors, Anti-Stress Holistic Handbook, With Fundamentals of Acupuncture, Microwave Resonance Therapy, Relaxation Massage, Airoionotherapy, Autogenic Training, and Consciousness, IASC, Belgrade, 1999, in Serbian; Z. Jovanović-Ignjatić, Quantum-Holographic Medicine: Via Acupuncture and Microwave-resonance (Self)regulatory Mechanisms, Quanttes, Belgrade, 2010, in Serbian.
5. M. Peruš, Neuro-quantum parallelism in mind-brain and computers, Informatica 20 (1996) 173-183.
6. I. Cosic, The Resonant Recognition Model of Macromolecular Bioactivity: Theory and Applications, Birkhauser Verlag, Basel, 1997; G. Keković, D. Raković, B. Tošić, D. Davidović, I. Cosic, Quantum-mechanical foundations of Resonance Recognition Model, Acta Phys. Polon. A 17(5) (2010) 756-759.
7. K. Pribram, Languages of the Brain: Experimental Paradoxes and Principles in Neuropsychology, Brandon, New York, 1971; K. Pribram, Brain and Perception: Holonomy and Structure in Figural Processing, Lawrence Erlbaum, Hillsdale, 1991.
8. S. P. Sit'ko, L. N. Mkrtchian, Introduction to Quantum Medicine, Pattern, Kiev, 1994; N. D. Devyatkov, O. Betskii (eds), Biological Aspects of Low Intensity Millimetre Waves, Seven Plus, Moscow, 1994; Yu. P. Potehina, Yu. A. Tkachenko, A.M. Kozhemyakin, Report on Clinical Evaluation for Apparatus EHF-IR Therapies Portable with Changeable Oscillators CEM TECH, CEM Corp, Nizhniy Novgorod, 2008.
9. Contemporary critical review of Western and Eastern technologies in energy-quantum-informational medicine can be found on the website: www.energy-medicine.info
10. B. Bellavite, A. Signorini, The Emerging Science of Homeopathy: Complexity, Biodynamics and Nanopharmacology, North Atlantic Books, Berkeley, 2002; A. Krstić, Homeopathy and Health. Handbook on Self-Aid and Mutual Aid in Healing People, Mol, Belgrade, 2000, in Serbian; B. Todorović, Scientific Bases of Homeopathy: Bioinformatics and Nanopharmacology, Prometej, Novi Sad, 2005, in Serbian.
11. R. Voll, Twenty years of electroacupuncture diagnosis in Germany. A progress report, Am. J Acup. 3(1) (1975) 7-17; R. Voll, Topographishe Lage der Messpunkte der Elektroakupunktur, Medizinich Literaturishe Verlagsgesellschaft MBH, Uelzen, 1976, in German.
12. A. V. Samohin, Yu. V. Gotovskiy, Electropuncture Diagnostics and Therapy after Voll, 5th ed, IMEDIS, Moscow, 2007, in Russian; M. Yu. Gotovskiy, Yu. F. Petrov, L. V. Chernecova, Bioresonant Therapy, IMEDIS, Moscow, 2008, in Russian.
13. W. Fishman, M. Grinims, Muscle Response Test, Richard Marek, New York, 1979.
14. R. J. Callahan, J. Callahan, Thought Field Therapy and Trauma: Treatment and Theory, Indian Wells, 1996; Ž. Mihajlović Slavinski, PEAT and Neutralization of Primeval Polarities, Belgrade, 2001.
15. W. Lee Rand, Reiki The Healing Touch, Vision, Southfield, 1998; E. Pearl, The Reconnection: Heal Others, Heal Yourself, Hay House, Carlsbad, 2001; V. Stibal, Theta Healing: Go Up and Seek God, Go Up and Work With God, THInK, Idaho Falls, 2006.
16. D. Chopra, Quantum Healing: Exploring the Frontiers of Mind/Body Medicine, Bantam, New York, 1989; M. Talbot, The Holographic Universe, Harper Collins, New York, 1991.
17. http://en.wikipedia.org/wiki/Psychotherapy; S. Milenković, Values of Contemporary Psychotherapy, Narodna knjiga – Alfa, Belgrade, 1997, in Serbian; V. Jerotić, Individuation and (or) Deification, Ars Libri, Belgrade & National and University Library, Priština, 1998, in Serbian; C. Tart (ed), Transpersonal Psychologies, 2nd ed, Harper, San Francisco, 1992.
18. http://en.wikipedia.org/wiki/Traditional_Tibetan_medicine; S. Petrović, Tibetan Medicine, Narodna knjiga – Alfa, Belgrade, 2000, in Serbian.
Some interesting headings:
A. Ya. Grabovschiner (Russia), J-L.Garillon (France)
B. P. Grubnik (Ukraine) Abstract. Electromagnetic radiation of the millimeter range (ERHF) has a versatile influence on the human body and, first of all, on the processes of regulation and maintenance of homeostasis. This influence is realized to a considerable extent through subcellular and cellular mechanisms of regulation of functions and has a number of characteristic features. One of the core features is its multilevel character: effects of the influence appear at all levels of the biological organization of the organism. A unique feature of the mechanism by which ERHF acts on the human body is the possibility of its resonant interaction with endogenous ERHF. ERHF affects practically all known types of cells (nervous, muscular, receptor, etc.) in model systems at any level of organization of the biological object. The mechanisms of this influence are determined by the specific changes in the functional parameters of a cell and of its separate components caused by this kind of radiation at the molecular and sub-molecular levels of cellular organization. It has been shown that ERHF can modify the physico-chemical properties of the plasma membrane, the activity of enzymes, ionic transport, the permeability of cellular membranes, and processes of cell aggregation, and that it has a pronounced influence on the electrical activity of individual neurons. The influence of Microwave Resonance Therapy (MRT) on a sick organism promotes restoration of the aerobic pathway of glucose utilization by cells. Extensive clinical and experimental material testifies to changes in the immune status of sick people, and after MRT the activity of immune cells is preserved. It has been shown that irradiation of the blood of ulcer patients in vitro leads to restoration of the lowered metabolic activity of leukocytes and of the phagocytic activity of neutrophils and monocytes. A normalizing impact of MRT on the coagulation system is observed in diseases of the cardiovascular system, in particular stenocardia. The use of MRT in patients with stage I hypertensive illness restores the compensatory capacities of the cardiovascular system, with a favourable normalizing impact on hemodynamics. MRT is effective in the treatment of gastroduodenal ulcers, in neurological patients, in the complex treatment of patients with hyperplastic processes in the uterus, in the treatment of gynecological, orthopedic and urological diseases, of chronic obstructive bronchitis, of oncological patients of stage III-IV, of cerebral atherosclerosis, in the prevention and treatment of postoperative paresis of the gastrointestinal tract, and in the treatment of children's cerebral palsy. The primary effects concern processes common to the cells of the organism that underlie vital activity: tissue respiration and the mechanisms of electron transfer. The secondary effects of MRT on a biological object are determined by the hierarchy of levels of organization peculiar to that object. At the higher levels of organization of multicellular organisms (organ systems, the level of the whole organism), the effects of MRT cover an ever wider range of manifestations. At these levels of biological organization an increasing role is played by the normalization of disturbed regulatory mechanisms (hormonal, nervous, immune) and by the restoration of the disturbed functions of individual tissues, organs, organ systems, and the adaptive-compensatory systems of the organism.
The analysis of the results of using MRT in clinical practice allows one to conclude that the efficiency of microwave resonance therapy is determined not only by the concrete form of the disease but also by the depth of impairment of the adaptive-compensatory systems of the organism and the reversibility of organic changes in organs and tissues. The extensive factual material testifying to the various aspects of the influence of ERHF on biological objects is the basis for further expansion of the range of applied uses of MRT methods in medicine.
S. P. Sitko, I. Chervony (Ukraine) Abstract. Quantum Physics of the Alive is based on the definition of the Alive (in its distinction from the Dead-inanimate) as a fourth level of quantum organization of Nature (after nuclear, atomic and molecular levels). Self-consistent potential of each living object is formed in accordance with genome as a laser of mm-range wavelength. Such a notion concerning the Alive, grounded on theoretical considerations, clinical material and the direct experiments, allows us to cast a fresh glance on the fundamental problems of biology and not only on them…
Y. P. Potekhina, Y. A. Tkachenko (Russia)
A. V. Ivanovskaya (Ukraine)
Z. Jovanović-Ignjatić (Serbia)
B. Milovanović, V. Radivojević, S. Mutavdžin, B. Milovanović, T. Krajnović (Serbia) Abstract. It is a well-known fact that, according to some new studies, hypertension and resting heart rate are genetically determined. According to some studies heart rate variability is a constant related to the type of autonomic pattern. Owing to the fact that the function of the autonomic nervous system is constant, the treatment of diseases could be evaluated using different groups related to the type of dysfunction. This is the first but very important step in the development of general principles of personalised medicine. In order to reveal the right type of autonomic disorder we used short- and long-term HRV analysis with the special Ansa Scan software. A comprehensive study protocol was carried out, including finger blood pressure variability (BPV) and heart rate variability (HRV) beat-to-beat analysis and nonlinear analysis, 24-hour Holter ECG monitoring with QT and HRV analysis, 24-hour blood pressure (BP) monitoring with systolic and diastolic BPV analysis, cardiovascular autonomic reflex tests, a cold pressure test, and a mental stress test. The patients were also divided into sympathetic and parasympathetic groups, depending on the predominance in short-term and long-term spectral analysis. At the second level of treatment, we used drugs that are in a complementary relationship to the type of autonomic pattern.
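For readers unfamiliar with the method, short-term spectral HRV analysis of the kind described above is conventionally computed along the following lines (a generic Python sketch using the standard LF and HF bands; it is not the Ansa Scan software named in the abstract, and the synthetic data are purely illustrative):

import numpy as np
from scipy.signal import welch

def lf_hf(rr_ms, fs=4.0):
    # Spectral HRV: LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) power of the RR tachogram.
    rr_s = np.asarray(rr_ms) / 1000.0
    beat_times = np.cumsum(rr_s)
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)    # even resampling
    tachogram = np.interp(grid, beat_times, rr_s)
    f, pxx = welch(tachogram - tachogram.mean(), fs=fs, nperseg=min(256, len(tachogram)))
    lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
    hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
    return lf, hf, lf / hf   # LF/HF is often read as a sympathovagal balance index

# Synthetic ~4-minute recording: 0.8 s beats modulated by a 0.25 Hz respiratory rhythm.
rng = np.random.default_rng(1)
beats = np.arange(300)
rr = 800 + 40 * np.sin(2 * np.pi * 0.25 * beats * 0.8) + rng.normal(0, 10, 300)
print(lf_hf(rr))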
L. Trifunović (Serbia) Abstract. Homeopathy is a medical system founded by the German doctor and chemist Samuel Christian Hahnemann (1755 – 1843) who had profound insight into life, the human body, health and disease. Only recently modern scientific disciplines, such as quantum physics, have started to explain the discoveries to which Hahnemann came intuitively. Homeopathy, contrary to conventional western medicine, is based on the belief that humans are much more than their material physical body. Homeopathy recognizes levels of existence that are not perceivable by our five senses. The core of our being is made of energy, our vital force. It abides at the energy level, but manifests itself on three different levels. The most subtle level of its manifestation is the mental level where it manifests as thoughts, the next level is our emotional level where it manifests as emotions, and on the physical level its manifestation is material body. When the vital force is in its natural state of balance, its ideal state, it is manifested as mental, emotional and physical health. However, if our vital force is out of balance it is considered in homeopathy as disease. This disease will be expressed by our vital force as pathological symptoms on the mental, emotional and physical levels, extending the concept of disease that classical medicine upholds. For homeopathy disease is possible only at the energy level. Symptoms on the mental, emotional and physical level are only the external, visible manifestation of the disease. In homeopathy influences on the vital force that throw it out of balance are called miasms. Miasm is a word of Greek origin which means pollution, impurity, or stain. Hippocrates was the first to use word miasm to explain how diseases are spread by air, water, or other ways. If we treat a single case of pneumonia we can heal it as an acute illness without understanding of miasms. However, if it reoccurs with other illnesses of the respiratory tract, this means there is a tendency towards these diseases. Our aim is to address the tendency and we cannot heal it without the understanding of miasms. Miasms are pathological energy fields that influence the vital force and keep it out of balance. This causes a predisposition towards different kinds of diseases that occur repeatedly, or towards a chronic disease with various complications and the onset of low immunity. As miasms are energetic influences, they could be treated only by other energetic influences, which can be homeopathic remedies. The therapeutic effects of homeopathic remedies are not based on biochemical reactions as in classical medicine, but on the interaction of energies.
N. Mišić (Serbia) .... Therefore, we first tried to unify these concepts and then to bring them in connection with certain mathematical, physical and biological systems and models, with particular reference to the meridian system. This analysis enabled the comprehension of Five Phases as a hierarchical 3D model through the self-similarity symmetry, which is consistent with the observed fractal organization in living systems and with a holistic view of the human body, providing the support for Integrative Medicine.
R. Prelević (United Kingdom)
A. S. Tomić, G. Marjanović, R. Vojnić-Tunić, Dj. Koruga (Serbia) Abstract. The opto-magnetic method [1] is successfully applicable to the determination and description of skin properties. We applied the method to a "bio-resonant massager", a device produced and applied for bio-medical purposes by Rudolf Vojnić-Tunić: a specially adapted Tesla coil working at 4 W power with impulses at frequencies in the kHz range. How the special camera and software of the opto-magnetic device work was demonstrated on the example of a young man with skin problems, before and after application of the Tesla-coil massager. In these circumstances the positive influence of harmonized electromagnetic radiation in the frequency band 880 Hz - 125 MHz appears as a source of longitudinal, low-energy mechanical waves in biomolecules, with wavelengths equal to or shorter than the length of the human body (under 2 m). According to the holistic concept in medicine, the human body works as a system whose functioning presupposes harmonization of all the subsystems within it. On the basis of our previous theoretical investigation [2], we concluded that the mechanism of influence on biomolecules is a non-linear process known as the maser effect.
D. Mandić, D. Đorđević, D. Cvetković, S. Kažić, J. Popović (Serbia)
and its therapies, in bone fractures, osteoneogenesis, rheumatoid arthritis, angiogenesis, and neuroneogenesis.
Abstract. The Earth magnetism belongs to one of four natural central forces that have made significant contribution to survival and health preservation of all the life. The substitutional therapy of MADU new medical technology is based on application of two inventions acclaimed as patents and registered as medical devices. The MADU therapy is aknowledged as new health technology in 2007 by Ministry of Health, Republic of Serbia (No. 022-04-19/2006-07), and it includes the application of Trap for shell fragments (first patent) for displacement and evacuation of foreign ferrous remaining fragments, as well as MADU strip (second patent) with wide application field. The confirmed indications, based on experience in medical practice up to now, are: (i) Faster and more complete development of callus in bone fracture; (ii) Delivery of medicaments with ferromagnetic and paramagnetic properties; (iii) Non-invasive displacement and evacuation of ferrous foreign bodies; (iv) Preventive and curative with vein deformations; (v) Immobilization of thrombus for its faster rechanellization; (vi) More lavish oxygen delivery by blood into the areas of reduced micro-circulation; (vii) Reduced swelling in the area under the influence of directed deep magnetic field; (viii) Improved viscosity in the arterial and vein blood vessels; and (ix) Faster and improved regeneration of various tissues, especially cartilage. The subtle influence of permanent magnetic field increases and improves metabolic processes and thus stimulates regenerative processes. The processes of cartilage regeneration, angioneogenesis and neuroneogenesis are of great significance for the mankind. The ancient knowledge (as reflexology, acupuncture...) is very effectively used as the basis of opening gap junction channels - prainformative centers in the organisms. The knowledge accumulated throught centuries of human history is explained and scientificaly approved in 1980s. It was more thoroughly studied and presented in the PhD study about magnetic fields, including MADU (D. Djordjević, MD PhD, 2007/2008). The positive results are obtained in treating disorders of osteoarticular system (ISCD-10, M 00-M 99), as well as in treating disorders of peripheral vasscular system (ISCD-10, I 70-I 99). Both of these groups of common diseases, the most sucessfully treated, have huge social-economic and medical relevancy. The MADU therapy could be applied as additional therapy together with contemporary medical procedures. Having in mind the experience gathered through application of MADU and its effects on local and global level, indicational field is getting more and more wider while contraindications and precautions are narrowing down. Thus, this type of magnetotherapy belongs to the future.
bone fractures: Abstract. The human organism does not function solely on the basis of biological and biochemical cellular reactions, but humans are also electromagnetic beings. In cases of very slow healing, the complicated fractures were treated by MADU magnetotherapy with aim to improve the healing of the bone, as MADU field promotes and stimulate calcium ion impact in bone tissue, with very rich vascularization. The gap junction channels (GJCs) are special informational system in bones, which connect not only osteocytes but also all smooth muscle cells in blood vessels of lavishly vascularized bone tissue. The molecular mechanism of electromagnetic field (EMF) and magnetic field (MF) affect metabolism of bone cells in the course of fields’ interference with signal transduction processes, included in hormonal and transmitter, particularly, cytokine regulation of osteoblast function, especially their proliferation and differentiation. The MADU inhibition of IL-1 and TNF-α production disable multiplication of fibroblasts activated by them from a surrounding area, and thereby the replenishment of a defect. More rapid maturation of connective tissue is achieved by MADU, so that increase in osteoblasts activity and Ca2+ metabolism result in increase of minerals deposited into a bone matrix, not only at the surface, but also within a bone depth.
bone regeneration: The possibilities of regeneration of bone tissue through the use of static magnetic field (SMF) are great, especially when the SMF is oriented to North (N) magnetic pole face turned towards the skin. .... The research of the effects of SMF at various inductions of exposition of osteoblast calvary cells in culture have shown dose-dependent proliferation and growth of cells during the activation from static G1 phase to S phase, which increases synthesis of DNA and accelerates cell proliferation.
RA: MADU magnetic strips, which create magnetic deep unipolar field (shortened MADU) with penetration of 55 cm into human body. It was approved by the Ministry of Health of Serbia in 2007. The therapeutic effects include antinflammatory and analgetic effects of magnetic field, activation of enzymes (particularly mettaloenzymes), activation of K/Na pump which promotes shifting of pH value of treated cartilage towards more basic levels which in turn promotes regeneration of chondrocytes and osteocytes. Considering characteristics of magnetic water, continuous stimulation by MADU achieved long-lasting effects in patients with degenerative disease like coxarthrosis and gonarthrosis. ..... No side effects of MADU therapy have been noted. In conclusion, MADU method provides the state of renewed joint space in respect of cartilage and bone and their maintenance in patients with rheumatoid arthritis. The process of reparation of the joint space with all the following benefits regarding pain and function, recommends this new non-invasive method as supplementary, with full respect of all scientific and therapeutic methods.
Angiogenesis: The MADU strip is based on MAgnetic Deep Unipolar oriented static magnetic field (SMF), using possibilities of reflexogenic therapy on humans. ... After 2.5 years the clinical examination showed that leg was vital, with normal skin temperature and the outpatient could walk. The leg was saved due to developed microcirculation, oxygenation through new-formed small blood vessels successfully providing vitality of the leg. In conclusion, the applied MADU strips with the guaranteed optimal magnetic field intensity lasting for 10 years, is providing long lasting protective activity in the area of the diseased blood vessels and poor tissue nutrition. The initiation of regenerative processes is performed due to known pathophysiology mechanisms changing acid reaction into alkaline, providing regenerative processes. This medical device is environment-friendly, the method is non-invasive, complement to modern medical procedures. No side effects are noticed.
Neuroregeneration: One or more magnets, in the form of a strip, are placed on the surface of the body on reflexogenic zones (RZ) and focused like a magnetophore on reflexogenic points (RP), with the north face turned towards the skin [North (N) pole or Negative pole (-)]. The first observations of regenerative processes in sensitive and motoric function of injured nerves were detected from 1992 to 1998 while we treated wounded people by Trap for shell fragments. It was scientifically proven that medium SMF of 1-1000 mT has influence on the various biological systems, including morphology, differentiation and/or proliferation of many types of cells. The experimental research has shown that cell migration, measured by the cell’s diffusion constant, depends on the exposition time and the kind of cells and does not depend on vertical or horizontal direction of applied SMF of 30-120 mT. In neural progenitor cells, cultivated under the SMF of 100 mT, exsist significant increase of expression of mRNA for a few types of proneural gen activators, such as Mash1, Math1 and Math3, together with decreased expression of mRNA for repressor type Hes5. The transitional increase of binding DNA nuclear transcription activation factor protein 1, formed from the members of Fos and Jun family in cultivated hyppocampal neurons of the rats, was found. The rats were under the SMF influence of 100 mT during15 min. The significant potenciation of Ca2+ influx mediated by N-methyl-D-aspartate (NMDA) receptors was shown, with decrease of expression microtubule-associated protein (MAP-2) neuronal marker throught the expression Ntan1 gen, which includes ubiquitin-proteasome proteolysis process, as so called N-end rule pathway in hyppocampal neurons cultivated under SMF's influence of 100 mT but without cell death. It was detected that influence of SMF causes neuro reactions, confirmed at the maturation of non-matured cells in a culture of neurons of rat's hyppocampus and achieved throught the modulation of expression specially NMDA receptor's subunits. It was proven that neurit sprouting of chicken's embrional ganglia is significantely increased in SMF of 22,5-90 mT. These discoveries give us a hope for the more efficient healing of many neurogical and psychiatric disorders and diseases.
V. Jerotić (Serbia) In the 21st century there are numerous worthy, less worthy and worthless psychotherapeutic methods at the disposal of people with disordered psychical life. But, basically, both prayer and psychotherapy (as ways of self-knowing) are necessary for people, both healthy and sick.
Ž. Trišić (Serbia) Abstract. The identification of the unified field by modern physics is only the first glimpse of a new area of investigation that underlies all disciplines of knowledge, and which can be explored not only through objective science but through a new technology of consciousness (Transcendental Meditation, TM), based on capability of human mind to settle into a state of deep silence while remaining awake, and therein to experience a completely unified, simple, and unbounded state of awareness, called pure consciousness, which is quite distinct from our ordinary waking, sleeping or dreaming state of consciousness. This experience is not on the level of thinking, theoretical conjecture, or imagination, but on the level of direct experience. So, even though is a common belief that the unified field of physics is an objective reality of nature while consciousness is a subjective experience, and that the two belong therefore to different categories of existence (one is material another is mental and the two cannot be equated) – we see them as two different modes to approach same reality, unified field or pure consciousness. Each individual nervous system, when refined through practice of TM technique is an instrument through which Unified field becomes accessible for inquiry and investigation through direct experience. Thus, modern physics through its objective method of inquiry has glimpsed a unified field as underlying all of nature, source of order in nature, while by direct experiencing it via TM technique we get influence onto our physiology, which starts working in a more balanced and ordered way. So, we see that TM technique has an important practical application in the area of health. According to Ayurveda (traditional science of life and health) all sickness comes from imbalance. In terms of physiological functioning this means perfect integration and balance from the biochemical and molecular levels to the macroscopic, organismic level.
I. Kononenko (Slovenija)
G. I. Brekhman (Israel, Russia) Abstract. The author considers the Man from a position of the theory of wave–particle duality of a matter. It has opened existence in a nature of ways of interaction and information interchange between genes, cells, persons, about which we did not suspect or knew a little. The concept of duality has allowed understanding the riches of the information contained in the man that has enabled to consider him as a psychosomatic system and to explain some features of thinking and behaviour of the people, sources of their talents and problems, and also feature of functioning in a society and relations with each other. In the certain measure the concept of duality gives an explanation of reasons of diseases, and gives interpretation to methods of treatment, which (despite of the efficiency) ascribe to alternative and do not admit by official medicine. Author describes the uterine myoma as a psychosomatic process, manifesting itself in ischemic uterus disease. He substantiated and used the holistic approach and nonstandard method of psychoelectroregulation in these patients which gave the long-term results.
Lj. Ristovski (Serbia) Abstract: There are a variety of doctrinal descriptions of the bioenergies and biofields in different AM systems, which are expressed in different terms and vocabularies, because their origins belong to the different culture traditions. This variety of doctrinal descriptions leads to the irreconcilable differences between the different AM systems, as well as between the majority of them and actual science. However, the supporters of some AM systems exert a trend to express the contents of theirs doctrines using the terms and vocabulary of actual science. In this way, they facilitates the beginning of a dialog with actual science, as well as the reaffirmation of many serious scientific investigations of the phenomena in alternative medicine, which are accomplished during last almost five decades on the periphery of the actual science. In the further, it will be considered the variations of AM doctrines which manifest an interest to establish a dialog with actual science by accepting the terms and vocabulary of actual science, independently on the manner by that the doctrine content is expressed. The main objective of this paper is to show that a small effort will help to achieve consensus on the terminology used in the AM doctrines with scientific terminology. This will help to terms with the same name means the same, although we cannot expect that it will obviously speed up the convergence of doctrine and scientific views about the AM phenomena. Nevertheless, this progress can be expected because the large difference is between felt and measured bioenergy, between seen and photographed. Therefore, the attention is devoted to experimental techniques that allow any type of detection or visualization of the subtle entities of alternative medicine (bienergy, biofield, aura). For reasons that will be later explained, particular attention will be devoted to the PIP (Polycontrast Interference Photography) imaging, which is the unique real time imaging technique. Certainly the most important result of the application of PIP imaging system is the visualization of the energy changes in the human aura, which may be due to bio-energetic, or any other AM therapeutic treatment. It should be noted that the PIP system provides information that enables qualitative analysis but not quantitative analysis of the bio-energetic phenomena. In addition, it is recommended as a diagnostic tool also. However this recommendation must be accepted very cautiously, as will be discussed later.
M. Milenković, M. Mićović (Serbia) Abstract. Reiki is an ancient spiritual-energetic method of healing, usually considered to be Japanese, and it is applied along with methods of modern medicine in health care institutions around the world. For many years now Reiki has been considered the No. 1 method of self-help in the world. Reiki primarily helps with reduction of stress and with relaxation, and is also used for mobilizing/activating all defense mechanisms in the human body. It improves the efficiency of all bodily functions and allows us to reach a state of harmony between the physical, mental and emotional levels of our being. The highly developed, non-standard approach to Reiki allows everyone to acquire selected techniques and use them successfully for self-help, without having to attend the traditional training classes usually required for the Reiki method. Combining Reiki techniques with acupressure, with breathing and stretching exercises, and with mechanisms of the human reflexes and mental orientation can allow everyone to find their own ways to prevent further development of various health problems caused primarily by stress. The use of self-help to reduce stress, to relax, to improve concentration and memory capacity, to improve one's mood in general and to increase the energy potential of the body is necessary and easily applied in everyday life. The Association Reiki of Serbia was founded in 2001 and has several hundred members, actively participating in continuously organized educations and seminars. In 2008 Reiki was formally acknowledged by the Committee for Regulation of Traditional Medicine of the Serbian Ministry of Health as a method of improving general health. By establishing these regulations it was officially made possible to introduce Reiki therapy into medical institutions in Serbia.
S. Simonovska (Austria) Abstract. Quantum Transformation is a practical application of the Two Points Method in the field of healing and life issues solving. Two Points Methods has its roots in the ancient Hawaiian spiritual technique of Huna, rediscovered independently by Dr. Richard Bartlet (Matrix Energetics) and Dr. Frank Kinslow (Quantum Entrainment), with a great contribution of Andrew Blake (QCT-Quantum Consciousness Transformation). Not only because it is easy, simple to use, and at the same time very effective, but also because of the possibility of combining it with other therapeutic and healing techniques, Two Points Method is spreading through Europe very quickly. When two distinct points on the body or aura get connected, Quantum waves are produced, initiating huge changes on all levels and in all areas of life. Quantum Waves initiation makes changes on a deepest level, altering our Matrix. The Matrix contains our deepest beliefs, in other words, fixed attitudes originated from individual consciousness and different life stages (the childhood, the prenatal period, the birth process, conception or inheritance - karma) and fixed attitudes derived from group or collective consciousness. Those subconscious beliefs are often opposed to our conscious beliefs and they are the main cause of our psychological and physical suffering, as well as many diseases. Faster transformation of subconscious beliefs can be achieved using Quantum Waves, which leads toward change of our reality. The essence of Quantum Transformation is pure consciousness. Pure consciousness is actually pure love. Quantum Transformation is a method that uses love energy for healing purposes. Quantum Transformation workshops, using quantum waves, meditation, music, movement and body work “teach us” through experience how to use the energy of love in healing and combining this method with other healing and energy based methods.
M. Tomšić Akengen (Slovenia) Abstract. The philosophy of Ifa with its origin in Africa, where it has been preserved to this very day by the people of Yoruba, contains the entire opus of understanding a human life, character, predestination, destiny and nature. One of the toughest challenges is how to treat (heal) someone, who is born with the energy of Abiku – born to die prematurely (born to experience premature death). Ifa considers the individual top priority, using all the knowledge and instruments it deals with making the individuals life good here and now, in this life. Everyone is born with some sort of predestination. It is not fate, because if something is fated, then the individual has no way of affecting that. But when something is predestinated, someone can realize that or not, because everyone is responsible for his own life. In life we have all that which we can call good luck: progress, longevity, health, luck… But good goes hand-in-hand with destructive energy, and if we wish to achieve the good, we have to neutralize the bad. We can classify destructive energy into four basic destructive elements: death, sickness, failure and confusion. When we consider a person who has the abiku syndrome, it means that these destructive energies are constantly stalking him and that he is under heavy influence in at least one area by some of these elements. When everything seems to go well, and suddenly it seems as if one of these energies got activated, and it gives out the impression of being out of the person’s control. Spiritually it is considered for people with this energy, that they have been heavily involved in a parallel spiritual world. It is considered that they have their own group in the parallel world, which constantly pulls the person back or make his life here unbearable, and make him wish to leave from here sooner.
Č. Hadži Nikolić (Serbia) Abstract. The most significant dimension of entheogenic shamanistic ritual, being an essence of therapeutic pracice in isolated groups of Amazonia, is that dimension which touches universal themes, from person’s identity to its place in cosmic scheme. This experience is described as transcending boarders of empirical reality, coming out the framework of profane existence and entering into realm of cosmic existence and meeting with most supreme principle. In psychology literature it is frequently called transpersonal experience, as being denoted a mystical experience in religious terminology. The question arises on characteristics of this experience (if it exists) in the framework of shamanistic concept, as according to many experts on shamanism it does not have this (mystical) component being exclusively oriented to practical purposes of healing. According to shamanistic practice in observed groups, these categories are not excluded mutually. On the contrary, they are complementary, with mystical or transpersonal experience in direct function of healing, if this healing is comprehended in the context of shamanistic concept of disease. As in this mystical experience a person is continuously returning back to his mystical trans-subjective roots and beginnings, a shamanistic return into myth past with myth scenes and symbols achieving significant relationship between a person and universe is completely reasonable. This transition from “here and now” into “there and then” means experience of transcending space and time limitations, i.e. coming out form narrow profane human framework and entering into realm of transpersonal experiences. In this context a shamanistic “road” which basically is searching for absolute, undoubtedly does have elements of mystical. Mystical, however, does not exclude practical goals of this road. On the contrary, mystical or let us call it transpersonal or integrative experience, is complementary in respect to these goals. In other words, this experience is in direct function of shamanistic action, in first place of healing.
D. Nešić (Serbia)
A. Frauenkron-Hoffmann (Luxemburg), D. Portić (Serbia) Abstract. In the past it was sometimes thought that physical and psychical changes or illness were the result of damnation, or of an evil fate affecting us from external causes of illness. Today we have a rather different view of that problem. The latest scientific discoveries indicate that sickness is a direct response to stressful situations which, at the given moment, cannot be processed on the mental or emotional level. We do not become ill by coincidence; illness always has a direct link with things going on in our life. If the problem is resolved, we do not get ill; if not, it becomes biological stress (stress of direct endangerment of life), which can be a generator of illness, i.e. of symptoms on the physical level. Illness or dysfunction of the organism is governed by very precise biological laws. A two-phase progression of every illness can be traced (Dr. Hamer): sympathicotonia and vagotonia. This knowledge serves us in appraising the direct course of an illness (why this illness and not any other one), as it shows a logical flow on the emotional, mental and physical level. Our behavioural patterns have a crucial part in shaping our way of responding to a stressful situation (in just that way and not any other). When in a stressful situation, our brain begins searching for every kind of data, information and programs which could lead to its solution. In that moment the brain chooses the best possible reaction and, influenced by experience, the best behavioural pattern, having only one aim: to secure further existence. Our brain memorises primarily three types of data which it uses in stressful situations (Useful Biological Program): (i) all experiences and circumstances from birth to this very moment, together with the accompanying emotions and perceptions; (ii) all that we experienced before we were born, from the moment of conception, and even the moment of birth itself, is completely integrated into our system; (iii) everything that our ancestors experienced is also available to our brain. Decoding can proceed in two directions. The first course is establishing and explaining the crucial cause of the stressful situation; here it is of utmost importance to note that it is not the objective circumstances that trigger the genesis of a disease, but the emotional feeling of the individual, which determines which behavioural pattern will occur in a given stressful situation (different persons react differently to the same kind of stress). The second course is releasing old behavioural patterns and gaining new mental pictures which awaken new emotions, annulling the old ones at the same time. Images from the past are always the cause of our illness. If we manage to transform our mental images and the emotions related to them, we can live healthier in the future. Often just the realisation that there is a choice is enough: stay devoted to the old pattern (which in some cases can represent death) or establish new ones. We are not able to become new people, but we can learn how to control the old mechanisms which made us ill in the first place!
D. Nikolovski, Z. Stević (Serbia)
B. Bedričić, M. Stokić, Z. Milosavljević, D. Milovanović, D. Raković, M. Sovilj, S. Maksimović (Serbia)
N. Trifunović, D. Jevdić, A. Jevdić, K. Jevdić (Serbia) Abstract. The aim of this paper is to consider the contribution of anomalous intensities of electromagnetic and magnetic fields to the etiopathogenesis of mental disorders and diseases. This research has been conducted over the past 20 years, with objective geophysical evaluation by a proton magnetometer produced in the USA and a "Brunton" geological compass. In this period we have examined several hundred patients with different types of mental disorders (sy anx-depressivum, depression, schizophrenia), of both sexes and all ages. We applied the BPRS scale for sy anx-depressivum, the Hamilton scale for depression, and the PANSS scale for schizophrenia. Here we present several case studies. Results are excellent after the patients spent time in spaces with natural values of EM-M fields. During the examination the patients were receiving their regular medication (pharmacotherapy). Finally, we present a theoretical model for the influence of the geomagnetic field and cosmic radiation on biological evolution.
N. Trifunović (Serbia)
Dj. Koruga (Serbia) Abstract. In this paper a serendipity event is presented which highlights the importance of Galileo's insight that numbers are the letters by which the Universe is built. In our case the stimulus for the importance of the number sixty comes from contemporary science: the nanotechnology of carbon materials with sixty atoms and the molecular structure of clathrin in our brain. However, we found that the roots of the number sixty lie hidden in the thought of ancient civilizations. Moreover, in some way sixty is part of light (c = λ × ν) as a ratio of spatio-temporal unity through wavelength (λ in nm) and frequency (ν in THz). If we use the model of light, with both its classical and quantum properties, as a model of existence (life), then embryogenesis (as the primary quantum process) presents information processing which sets up events (with different probabilities) for life after birth. Which of these events will happen depends on the probability of events in classical information processing during life. Classical and quantum probabilities are complementary and compatible with each other. In everyday life this complex manifestation is given as our behavior, based on the interaction of the dynamics of the real-imaginary and rational-irrational pairs (the logic square of our fingerprint).
M. M. Rakočević (Serbia)... possible aspects of a universal code attached to the human mind, the reality of holism...
Lj. M. Ristovski (Serbia), G. S. Davidović-Ristovski (Serbia) Abstract. As should be expected, the doctrinal theories of traditional medicine obviously belong to the domain of metascience (metaphysics). This means that they are based on revealed truths, which are necessarily expressed in a symbolic and metaphoric language. On the other hand, scientific theories are based on perceived truths, which are expressed in a language that necessarily excludes metaphors and symbols. Therefore, symbolic statements, as well as revealed truths, per definitionem cannot be expressed in a rational way, i.e. metascientific content cannot be fully expressed in a scientific manner. The aim of this paper is not to translate the traditional (metaphysical) Chakras doctrine into the language of science, but to point out that this teaching contains content which can be scientifically interpreted. Namely, the Chakras teaching could survive for so long only because the findings of practice support it. The practice of traditional medicine cannot always be explained in a scientific way, but it cannot be completely ignored if it has survived for centuries and millennia, as is the case with the Chakras teaching. Specifically, as will be pointed out in this paper, scientific knowledge about human ontogenesis and psychogenesis can be unambiguously associated with the empirically established findings implicitly included in the Chakras teaching. In addition, based on this correspondence, it is possible to arrive at the concept of a psychogenetic explanation of the (so-called) opening and balancing of chakras. Finally, there are serious indications that the process of opening the chakras can be closely linked to psychiatric regression analysis.
D. Djordjević (Serbia), D. Mandić (Serbia) Abstract. Reflexology is a science dealing with the mechanisms of origin, development and action of every kind of reflex at all levels of ontophylogenesis. It is an interdisciplinary branch of biology, which has its subject, branches, sub-branches, principles, conditions and mechanisms. The subject of reflexology is research at all levels of ontophylogenesis: principles of functioning, mechanisms of origin, development and action, as well as the influence of all kinds of reflexes, from the simplest to the most complex. Basic fields of reflexology are: medical reflexology, dentistry (dental) reflexology, veterinary reflexology, animal (zoo) reflexology, and phytoreflexology. Basic branches of reflexology are: reflexodiagnostics, reflexotherapy, reflexoprophylaxis, and reflexoergonomics. Basic methods of reflexology are: physioreflexology, hemioreflexology, and bioreflexology. Basic principles of reflexology are: holographic or quantum-holographic [Tao (holistic) conception, or conception of unique wholeness], circulation of energy [meridian (Jing-Luo) conception, or conception of acupuncture channels (reflexogenic meridians)], ontophylogenetic (Haeckel's law), rhythmic functioning, self-regulation of the body systems, reverberation links (energetic and other influences), action-reaction, reflexogenic afferentation (conditional-unconditional reflex)... In Western medicine, reflexology is a treatment modality which employs only manual pressure on specific areas of the body, usually the feet and the hands, which are thought to correspond to internal organs, in order to generate positive health effects. However, in Eastern medicine and in the nowadays emerging Integrative medicine, reflexology is the application of any treatment modality to reflexogenic zones, reflexogenic points or reflexogenic meridians (e.g. digitopressure, acupressure, acupuncture, laseropuncture, magnetopuncture, magnetic hammer, moxibustion, cupping, etc.). In this holistic framework, each organism is considered as a (quantum-holographic) unique wholeness whose health status is manifested through reflexogenic (acupuncture) zones and their (acupuncture) points, linked in the paths of reflexogenic (acupuncture) meridians. Reflexogenic zones are exteroceptive projections of internal organs onto the skin (head, nose, ears, palms, feet), mucous membrane (endonasal, gums, tongue) and iris. Reflexogenic points are locations in which energy exchange (transformation) with the external environment occurs according to the principle of functioning of "body channels" or reflexogenic meridians. Through reflexogenic points, as mirrors of the status of an organ or of some individual function, one can regulate the disturbed energetic equilibrium between organs and body parts, and also between the wholeness of the organism as microcosm and its environment, the macrocosm. The morphofunctional basis of reflexogenic zones, points and meridians is the ontophylogenetic linkage of internal organs and all systems of the organism with the most ancient energy-informational system, which is composed of the gap junction channels network.
G. Vitaliano (USA), F. Vitaliano (USA)
S. Milenković (Serbia) Abstract. Discussed here is the creative challenge faced by the new profession of psychotherapy in a new millennium. That challenge is the creation of a consciousness-based holistic spiritual/transpersonal psychotherapy without ego. It means basically the transformation of consciousness, of the inner feeling of one's own existence, as well as the release of the individual from all kinds of conditioning imposed upon one by society. It is characterised by transformation, self-transcendence and expanded consciousness, in which treatment is targeted primarily at the spiritual/transpersonal dimension.
Lj. Klisić (Serbia), A. Djordjević (Serbia) et al
Abstract. Body-Psychotherapy is a distinct branch of Psychotherapy, well within the main body of Psychotherapy, which has a holistic theoretical position. It involves a different and explicit theory of mind-body functioning which takes into account the complexity of the intersections and interactions between the body and the mind. The common underlying assumption is that the body is the whole person and that there is a functional unity between mind and body. Directly or indirectly the body-psychotherapist works with the person as an essential embodiment of mental, emotional, social and spiritual life. He/she encourages both internal self-regulative processes and the accurate perception of external reality. Through his/her work, the body-psychotherapist makes it possible for alienated aspects of the person to become conscious, acknowledged and integrated parts of the self. In order to facilitate this transition from alienation to wholeness, the body-psychotherapist works with signs indicating vegetative flow in the organism, muscular hypertension and hypotension, energetic blockage, energetic integration, pulsation and stages of increasing and natural self-regulative functioning, as well as with the phenomena of the psychodynamic processes of transference, counter-transference, projection, defensive regression, creative regression and various kinds of resistance. Almost one century ago Wilhelm Reich introduced work with self-regulative processes and with cosmic superimposition. We have been continuing his work here for more than three decades, within TePsyntesis, the Serbian Training School of Body Psychotherapy, founded by Prof. Dr. Ljiljana Klisic in 1976 in Belgrade, Serbia, ex-Yugoslavia. It has all the characteristics of Body Psychotherapy schools, but TePsyntesis has also evolved from thirty years of research by Dr. Klisic into the development of drives and the relationship between life force and consciousness, and it has trained more than 200 professionals. TePsyntesis also offers a new meta-theory of the development of the drives: aggression and sexuality. In TePsyntesis we study and research the evolution of the basic human instinctual drives: (i) the instinct for self-preservation (aggression), (ii) the instinct for procreation of the species (sexuality). Only in a unity of mind and body that opens up the suppressed spiritual dimension can we approach the whole scope of this evolution: (i) the evolution of primitive aggression and destruction towards mature power, integrated with a developed value system, into non-dual power; (ii) the evolution of primitive sexuality towards Bliss and supreme Joy.
Sunday, 4 December 2011
Kea's lost thread. My inquiry.
Mitchell Porter's blog post 'The Lost Thread, Notes from the work of Marni Sheppeard', from a deleted thread at PhysicsForums.com, discusses Sheppeard's work. I have been following her blog and her difficulties for some years. Sometimes I have felt sorry for her, sometimes frustrated, sometimes even angry. Those feelings are mutual, I think. She is a very 'unorthodox' person with an equally unorthodox theory or hypothesis. She claims everything must be done from scratch, everything must be created again, due to errors and misinterpretations in the past. Mitchell Porter asked me to write about my view of Kea's science here, because I am an amateur. He said he would comment here, not at the thread. That is the reason for this post. First the background. This is only from recorded communications, not personal ones. Maybe I should point that out.
I have tried many times to get her to speak, in vain. Guess I am a bit lazy. Also, my interpretations as a layman cannot be very exact.
On Galaxyzoo she said on Oct. 1, 2011:
Ulla's arrogance is mind boggling. This needs to be noted, because she is misleading people about the physics. She doesn't know the first thing about mathematics, let alone physics, and she thinks she can tell me about physics just because she reads Matti's blog. Ulla, you don't understand one paragraph of any of my papers, and you shouldn't believe everything the Dudes say about women in physics. I am always, always wrong, and behind the times, and stupid, and so on and so on and so on. You let your neurotypical groupie brain cloud your judgement. If you really cared about the science, you would spend more time studying elementary physics.
Dark Matter was first studied around 80 years ago. All theoretical physicists have been thinking about it since then. Mirror dark matter is one of the oldest ideas, and it is most certainly not Matti's own. In Physics, the details matter. Airy fairy tales don't mean shit.
Oops! Who is she talking about? Me? It is pathetic; I am shy. The kindest person on Earth? Have I the right to talk to HER, sitting in the physics heaven? This only because I asked about dark matter, and said it could not come only from neutrinos. I think I made it clear I am no expert, so I would not mislead anybody. And Matti dislikes mirror-talk (technicolor). My comment leading to this outburst:
I have thought about what kind of signal light is. It is entropy. This entropy seems to come from annihilation, and some say the neutrino can act as a Dirac point. Light speed is a property of space in vacuum, not a function of something that travels through it. Light creates negentropy (noise) in the form of em-waves, and itself takes the same form (oscillating as quanta), because it is the surroundings. Only in these surroundings is c = 1? This follows the uncertainty too?
This em-force is just one force, but it creates the entanglement and the macroscopic tensions; microscopic tensions are made of gluons, and those make up most of our gravity. Gravity is the 'long-term memory' of the Universe, and hence also entanglement ("sensors")? Kea has the non-locality there? The entanglement must be there in order to have reactions.
http://arxiv.org/abs/1101.3357 It has long been known that photon bremsstrahlung can lift helicity suppressions, also from W and Z bosons. It has been proposed that the excess electrons and positrons are not due to conventional astrophysical processes, but arise instead from dark matter annihilation or decay.
http://arxiv.org/abs/1002.2441 (2010), flavor sensitivity
Nice to see that Kea has finally realized the dark matter scenario. But solely leptonic DM cannot be the whole truth. The whole DM hierarchy must be involved, because DM is also ordinary matter [over 95% of all matter is baryonic, quarks], but invisible to us. It also has 4D? And she has the prime field/world there. Why must she and Matti Pitkänen be so damn stubborn that they cannot even talk to each other. Sic!
I KNOW I am far from being any "expert" on this, but I can still have questions and opinions. I have followed Kea's blog for many years now, with greater and lesser interest. She is one of the few who knows what she is talking about, but it is so hard to follow without math skills.
To a question I added: There are no facts in this story. This view follows the astrophysical view, and also TGD. Is Kea coming around to it? It is wrong to think DM would be solely some WIMPs or other exotic particle.
Graham Dungworth supported me, with many words as usual.
Ulla's conclusion is that the whole DM hierarchy is involved, and Kiske's valid response is to ask whether that means that DM as ordinary matter is a fact.
Many on the forum are aware of the total energy composition of the universe when expressed as approximately 4.5% normal matter, 20.5% dark matter (DM) and ~75% dark energy (DE). It matters little whether there is presently 73% DE from, say, WMAP. It is essentially a book-keeping exercise. As we go into the future, forget reference frames: as the universe ages and remorseless expansion ensues, by the time that the system doubles in size the matter density drops as the cube of the expansion size. Thus a doubling of size leads to an eightfold reduction of matter density. The DE fraction is generated at constant density, so when the universe is eightfold greater in volume the matter fraction, normal matter and DM matter, will represent ca. 3% and DE ~97% of the total energy composition. When the universe was young the DE caused by the expansion of space was negligible; as it ages the DE/matter ratio increases. After another doubling of size, which is model dependent upon the Hubble flow, the time factor varies; it could be ca. 40 billion years before the current universe has doubled in size. Again, because distant supernovae were reported to be dimmer than predicted, it would appear that the universe is now accelerating its expansion and that doubling time is decreasing.
All we know of physics concerns that 4.5% fraction of normal or ordinary matter. Stars and ourselves are constructed from it, as atoms, the elements we know of. Let's not quibble about ionised states and plasmas. All these atoms are made from more fundamental particles: protons, neutrons and electrons, the latter including its lepton cousin the neutrino. At a deeper description, the protons are built from quarks, the u and d types you are familiar with. There are a total of three copies of normal matter, and these include higher-energy forms of these basic building blocks. The second incorporates strange and charmed quarks, and these are associated with a different lepton, the muon rather than the electron; the former is much more massive, ca. 206-fold the electron mass. Basically, you buy the whole package, and the hierarchy of normal matter according to the particle physicists incorporates forty-two (42) of these particles plus the force particles by which they interact. This is no joke. Doug Adams realised the significance of the numerosity of this package; you may haggle whether it's 36 or otherwise, but for him it was the forever immortalised secret of the universe, the secret of that 4.5% matter mass fraction, although he never lived to know what the composition was.
The physicists have to satisfy a large number of conservation laws within the SM. It is inconceivable to think of a universe constructed of, say, photons only, or of electrons only. At the most basic level, when a proton changes or transforms into a neutron, in standard nuclear fusion or in a supernova explosion, electric charge conservation necessitates that a positron is formed, but this violates lepton conservation number, so another particle must exist to conserve what a chemist would call stoichiometric balance, and that particle is the uncharged lepton, the neutrino. You might quibble and say the neutrino is not a building block and plays no role in atoms. You are taught that electrons have attributes or charges. First there is the mass charge. Electrons also have rest mass and electric charge, and they "spin", one of the two types of angular momentum. Additionally, conservation laws crop up that require they have a further spin type, or weak isospin. I've discussed these niceties before.
What Ulla alludes to is the whole hierarchy. Hierarchy has a religious meaning. It is a priestly word. You have to accept the creed as a whole. One cannot pick and choose. Most physicists might object to this usage. However, they are left with one conservation law, called parity, that they cannot ignore. As a consequence they accept that if parity is conserved in this universe, and it blatantly is not, then there must be areas where forms of antimatter exist (very doubtful), or that parity is conserved elsewhere in a multiverse. They would never like to admit that our universe just happens to be cack-handed. It could have been left- or right-handed; pure chance decided the route that was taken. Whether you are religious or not, swallowing the whole creed is difficult. It's impossible to avoid, and yet we are discussing a 4.5% component. What about the rest? Unfortunately, this forum doesn't have the terabytes available to pursue an exhaustive discussion, and neither would the whole planet were it restructured as a giant processor of information.
Ulla, from a biology background, hits us with a great truth. She accepts the creed. There must be other parity stuff around, but where? All that other kind we know of is DM. There are many candidates for DM, the fashionable WIMPs for instance, but these don't address the parity problem, though they do admit different mass charges in supersymmetry extensions to the SM (Standard Model). In nature species carry around various attributes that are continually used. They don't carry useless spare baggage that would reduce their chances of survival. By the time molecular evolution evolved, they all agreed to adopt the left-handed chiral stuff, at least here on Earth. But that may not be the case elsewhere. Kiske then asks "Is the implication that DM is ordinary matter". The answer is yes, but it may have differing mass charges and opposite parity or mirror chirality.
The preliminary Minos results were staggering news for physicists. They implied new matter of a type they never imagined, and a problem that would hopefully go away.
If there are new neutrino types, they would have to buy a more complicated hierarchy or creed: new baryons, new hadrons etc. After all, supersymmetry or SUSY is long on doubling the hierarchy; many accept that and many don't; there's a reformation of the creed going on. DM is nonbaryonic. That's burnt into the memory cells of most readers. If the new neutrino types exist, that doesn't mean that DM is solely neutrinos; it means that there is a full hierarchy present of another, or other, or allo form of normal matter. An amazing coincidence of these predicted masses reveals that such matter annihilations would give rise to a background radiation in the universe that is happening now and is not some relic radiation from a past that had a beginning, a creation event, the first line of the creed. All religions have a creed, several lines' worth. Marni, Ulla and myself may have some differences, but we agree on much as we rewrite the first lines of the beginning of a creation. We are not writing a new creed or hierarchy for matter of all types. We are addressing only that first single line, where we conceive of a cold, dark, stark and pallid place that never had an origin, that just existed timelessly as an ensemble of photons and neutrinos on the move, of a universe we know of yet to be created.
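As a rough check of Graham's book-keeping above (my own back-of-envelope sketch, assuming the matter energy stays fixed while dark energy keeps a constant density as the volume grows eightfold): \[ \frac{\Omega_m}{\Omega_m + 8\,\Omega_\Lambda} \approx \frac{0.25}{0.25 + 8\times 0.75} = \frac{0.25}{6.25} = 4\%, \] so dark energy would then make up about 96% of the total, roughly the ~3% / ~97% split he quotes.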
I continued:
In that way the DM could basically be only leptonic, if quarks are made from leptons. Neutrons can interact, maybe also neutrinos, I don't know. Photons can, which maybe is an indication of quark compositeness. Is this maybe the color problem of the strong force? About this we know almost nothing yet. And what we do know speaks against this picture. Leptons and baryons are conserved separately? This also makes the conservation laws questionable.
The neutrino problem is OUTSIDE GR and SR, so Einstein may rest in peace. We have only seen a glimpse of a bigger frame (of entanglement?), as Wilczek said. First we must see if the FTL result holds. There is physics of uncertainty that says it may hold. Remember, they have thought about this for many years now, and concluded it was most honest to publish it. Very few got afraid and withdrew their names.
Also, on the viXra blog and on her blog we have had disputes. I asked Matti about his view on his blog.
ThePeSla said... The algebra of the sub-manifolds is in the cracks of the limits of our total standard theories and in the low dimensions, as Kea also points out; there is a Pythagorean relation - for me the 6 inside that triality triangle, thus 24 is the hidden subspaces, tangled or not.
Only in planes do we have, of course, enumeration in quadratic time, so to speak. These of course are in the cracks also, as are such early simple ideas as 24-dimensional lattices. But the reverse complements, and the need for so many zeros in the assumed one-dimensionality of a p-adic number, as if the decimal were reversed (and for some reason the composites dismissed, as if primes can casually be taken or added to or subtracted from some power of 2, the even one), is an essential idea that tends to shift the ground, that is, becomes a wider universe and its forces.
Matti: To me this talk of Kea about 24-D lattices, magic matrices, and the Koide mass formula does not give much. I am simply unable to comprehend what the point and the big picture are. In a politically incorrect mood I would call it numerology ;-).
I have heard the numerology charge from others too, comparing her to Eddington. Is her work just numerology?
I said... Well, if you don't understand Kea, how would I? I have the same feeling, that her cosmology fails. But she uses the same numbers, the same structure as everyone else; how can her construction then need another type of Universe? Also her claim that neutrinos, which are non-interactive, make up the interactive DM (which is invisible ordinary matter in the cosmology of today) seems odd. That would mean that leptons are building blocks for fermions [should be baryons] (3-quarks), and that we think is wrong. Quarks have the triality seen in her figures, though, but then she uses them for neutrinos????
And why can't she explain this in a rough manner? I really don't understand her. She spits out harsh text, but when she meets a simple question she goes silent. Her 24-figure inside the triangle is fascinating, but then she refers to Pythagoras!!!! Is she a nut? I am astonished. Kea, if you read this, start talking.
I would rather think that neutrinos are in the non-Euclidean space (= outside GR), with a very small interaction with matter (GR). How would that interaction be seen in space with all the cosmic rays? And how do they interact with light? Orwin said they pull light out of the vacuum, but how? Is annihilation the opposite process? Can neutrinos be made interactive through a change of spin?
Matrices tell nothing at all about these circumstances. Remember that Kea mostly talks of abstract (= virtual, DM) braidings, and there is reason to suspect she is in deep water. There is the entanglement, some kind of pressure, or heat, as environment etc. Is there any real 'structure' at all to build all those matrices from? Or is there just a quantum field as a 'soup' of numbers? Environment? So in fact she plays God. I would want to know what she thinks about this. No fairy fields, but she works in the fairy field? I have asked her many times, but as long as I cannot answer these questions her figures are just curiosa. And she wonders why nobody uses her work?
ThePeSla said... Ulla, Ulla,
I do not understand what you have against Pythagoras (and for that matter Kea's take on things). No one uses Kea's advancing work because they simply do not get it.
I answered: I have absolutely nothing against Pythagoras. He was a genius. I reacted to Kea's way of referring to him as some kind of great insight. As if she was not aware of what she was doing?
No one understands her work, so why doesn't she write about her view? I want her 'Higgs mechanism'; even if she has no fairy fields she has something instead. I want her cosmology; when she says she will make everything new, even just a drawing would be enough. Her matrices are exactly what you say, but you forget she works on the negative side of reality, abstract geometry (I wonder what it looks like, because I doubt it can have any matrices). Antimatter is annihilated, so those matrices must be very different. The only reasonable thing she can use the matrices for is DM, but she says it is not there. She claims leptons (neutrinos) make up the 3-quarks. I have not seen any model for how she makes the bridge between leptons and quarks (the triality is not enough). She talks of l-adic braids but cannot get the unification, although Matti has done it for p-adic hadrons. I suppose you can read what I wrote.
I have absolutely nothing against Kea as a person; on the contrary, I wanted to help her. She reacted negatively when I tried to get her to make peace with Matti, after their controversy. She should have defended her ideas; that's how this world works. You cannot sit like a child and spit out ugly words when someone asks something, even if the one asking is just a stupid biologist.
I have quarrelled with Matti too over this stupidity, so the situation is even. It doesn't matter if Kea says she owns something. It is her statement, and the future will tell. If two controversial scientists are as near to each other as Kea and Matti, it is very idiotic that they cannot talk to each other because of that small controversy.
Maybe I am impatient, but I have followed her for so long, and I expected she would come up with something now that the superluminal neutrinos are topical, but instead she starts everything all over again. More than anything I want to give her a kick :) to get further.
Zero positive or zero negative, what is that? What exactly is zero? What is a string in no dimension? Words, words...
Orwin: Kea swims in the New Wave of category theory, which is good at the analogies the Medievals loved. But that's proper to language/Logos, not Nous, and hence the Romance gender-typing.
The New Wave takes on a kind of computational proceduralism, as if to upgrade MATLAB for AI. Matti's with the new graphene hardware, in physics. But dialogue remains possible on Ulla's question of signals.
Matti: Dear Ulla, it is difficult for a layperson to see what is (or what is not) behind scientific terms and formulas.
Kea and me could not be farther from each other. Really. The only thing in common that we both have the label of crackpot but that's all.
Just look at our ways of writing. I always give sequences of arguments and do analysis. I represent counter-arguments and objections.
Kea gives some standard math formulas found in some source, and not the slightest hint of how this numerology might relate to her theory or to physics in general.
I said... Yes, Matti, this is not my business in any way. I just get so frustrated when I read her small texts, where she swims in the same place year after year. When I write something to her she goes silent, and deletes my comments.
And this PeSla is very successful at that too, for some reason. I must think thoroughly about why. I mean, this is your blog, and I cannot come here and criticize your commenters. You know, my harsh tongue...
Orwin: To me the best one can expect of New Wave math is to incorporate fuzzy logic, which is better than assuming classical probabilities when approaching quantum theory. Here the philosopher to watch is Florentin Smarandache - he's like the guru of viXra. But ethnocentric, like Derrida.
Higher dimensions were known in the ancient world - Otherworld of the Celts; the higher self as 5th dimension; animal magnetism. But to piece together the evidence is very hard.
The 1/r distribution: the smaller a particle, the more it scatters: the length-scale of a Feynman diagram. So the soft microwave background is the scatter-grain of reality.
The hard part is to realize that reality acts through such appearances: special relativity, dressed charges, scattered signals. The appearance/essence distinction is not causal! Take mathematical form as an essence and you miss that!
Then I compared with Matti's theory, which has also been taken for numerology.
Dear Ulla,
you have managed to monumentally misunderstand me if you think that I am some kind of numerologist. Good grief! I cannot imagine anything more disgusting than random numerical considerations. Numbers predicted by TGD are the outcome of refined conceptual models, not just combinations of some magic matrices pulled from a sleeve without any connection to physics.
The Koide mass matrix stuff is an excellent example of the numerology which makes me sick. It begins from an observation that a sum of square roots of charged lepton masses is near to 1/2 in suitable units. There is of course always some rational number to which it would be very near. Only if it were exactly 1/2 might the mass formula have some deeper meaning.
Then one feeds in magic matrices and starts to play with their sums and products. Hopeless. Hopeless because there are no physical principles involved, just random numerology getting more and more complicated.
I have developed a totally different model for CKM matrices, starting from the physical and mathematical principles of TGD. The same applies to p-adic mass calculations. That the model predicts numbers as an outcome does not mean it is numerology!
If someone thinks highly of numerology, I can accept it. That someone thinks that I belong to the caste of numerologists, I cannot accept.
Orwin: To call TGD numerology is unfair: temperature has no dimension, it is a number, and rheology or fluid dynamics has several 'dimensionless ratios' as parameters (Reynolds number, etc). To make particles components of temperature is consistent thermodynamics.
Matti: Dear Ulla,
one thing I can tell is that colleagues lie unashamedly when they talk about TGD, especially so if they are talking to a layman with no ability to check the truth of what they are saying.
Homeopathy in many-sheeted space-time, crop circles, water memory, cold fusion, expanding Earth: these are the favorite topics with which to debunk TGD. If you are a little bit analytic you soon find that they *never* say anything about the *contents of TGD*. What are the basic assumptions of TGD, how do they criticize them? Never a single word. They use only simple emotional key words to induce negative emotions. This is how propaganda works in all dictatorships. By just looking at any "criticism" of TGD you find that there is not a single word about what I am really saying: isn't this strange?
In an analytic mood you might also notice they *never* mention that I always emphasize that I am not a believer or a non-believer but am just asking: what if these phenomena are real, what can TGD say in this case? They just claim that I am a fanatic believer. Again an enormous lie, but represented with a clear purpose.
These fellows might call themselves astrophysicists, physicists or whatever, but they are not scientists in the sense in which I understand the word. They are opportunistic career builders, ready to lie without hesitation if this helps them to develop their career or defend their position.
The hegemony, in particular the hegemony of Finnish science, has excellent motivations for giving me the label of crackpot, a blind believer in all possible "bad science", and this is what they have tried to do for all these years. Now the experimental results from the LHC are flowing and it is becoming more and more obvious that I have been right all the time. Neutrino superluminality might turn out to be the single experimental discovery demonstrating that TGD is the theory. No one in his right mind can any longer deny that I am a top physicist and have been 34 years without human rights in a country in which most people can read. This is an incredible scandal, and solely due to the enormous stupidity and arrogance of the academic power hegemony. Einstein in the patent office is nothing compared to what these idiots have managed to do.
It is understandable that these fellows are fighting desperately to get rid of me before the bubble bursts. Revenge is also a deep motivation. They might quite well be able to prevent me from seeing the breakthrough; my health is not good. This does not however help them: the bubble will burst and the collective shame will be even deeper, and these fellows will be regarded as criminals. And with full reason.
To Ulla:
this claim that TGD is numerology is probably the silliest claim that any scientist has made since the birth of Newton.
There is an entire book, about 1000 pages, devoted to physics as generalized number theory, and a second book devoted to particle physics applications of p-adic physics. And then some empty head comes and claims that TGD is numerology!! Good grief.
It is incredible what kind of idiots can receive a monthly salary as physicists and astrophysicists.
Yes, that should be clear now. TGD is not numerology; what about Kea's M-theory?
What is numerology? Wikipedia says:
Many alchemical theories were closely related to numerology. The Arabian alchemist Jabir ibn Hayyan, inventor of many chemical processes still used today, framed his experiments in an elaborate numerology based on the names of substances in the Arabic language.
More about Stenger below.
Stenger's research career involved work that determined properties of gluons, quarks, strange particles, and neutrinos. Stenger was a pioneer in the emerging research focused on neutrino astronomy and very high-energy gamma rays. His final research project prior to retirement as an experimental physicist was participating in the Japan-based Super-Kamiokande underground experiment. This work demonstrated that the neutrino was massive. Masatoshi Koshiba, the leader of the project, won a share of the 2002 Nobel Prize in Physics for his efforts.
Not bad for a numerologist! But look what they say. He is a skeptic:
Stenger is now mainly known as an advocate of philosophical naturalism, skepticism, and atheism. He is a prominent critic of intelligent design and the aggressive use of the anthropic principle. He maintains that consciousness and free will, assuming that they in fact do exist, will eventually be explained in a scientific manner that invokes neither the mystical nor the supernatural. He has repeatedly criticized those who invoke the perplexities of quantum mechanics in support of the paranormal, mysticism, or supernatural phenomena, and has written several books and articles aiming to debunk contemporary pseudoscience.
Stenger is also a public speaker, including taking part in the 2008 "Origins Conference" hosted by the Skeptics Society at the California Institute of Technology alongside Nancey Murphy and Leonard Susskind.
In 1992, Uri Geller sued Stenger and Prometheus Books for $4 million, claiming defamation for questioning his "psychic powers." The suit was dismissed.
In recent years, Stenger's books and articles have been mostly written for the wider educated public. These writings explore the interfaces between physics and cosmology, and philosophy, religion, and pseudoscience.
What really is this? Numerology that gives a Nobel and the Periodic Table (octonions behind it?). Is there truth in this statement?
Dimensionless ratios? What are those? Alpha, fine tuning and hbar? So anything must be numerology? Or...
What did I ask Mitchell Porter?
Thanks Mitchell.
I have tried to understand her work for many years too, despite my outsider position. It is interesting, but the cosmology? She ought to spend more time looking at the frame, when she declares that her cosmology begins from scratch. Who can even think of citing her in that situation?
I would also want to see the relations between leptons and fermions [baryons again] more clearly. Now they seem intermixed. Baryogenesis?
Her fairy fields as quantum information and abstract fields are nothing other than the virtual fields behind the Higgs mechanism. Why does she deny that? Also the eventual supersymmetry would need some light?
There should maybe be more thinking from this bottom-up view, as an effort to understand the microstates, but the endless possibilities in string theory speak against it. How are the criticality and the finiteness solved in her theory? Exactly what form of strings does she use? F-theory? She talks of a tripartite world, and that should be seen in her strings (interactions)? The fact that she uses many triangular forms does not explain her strings? In that case they are hadrons?
The situation now is very frustrating, and I understand it is for her also. I have tried to get her to talk about these things, but she goes silent. Also her quarrel with Matti Pitkänen (his refusal to cite her, in spite of her having helped him with the categories) ended with her silence, instead of her declaring her standpoints. That makes me suspicious. He talks of her science being numerology. That I think is going too far, but maybe it explains her lack of success.
I will post my 'explanations' in the next post. I think she deserves all the support she needs to carry on. Her research has at least the same value as Witten's?
If I understand anything of it. |
eae6bf98e8054ccf | Electron configuration
From Wikipedia, the free encyclopedia
In atomic physics and quantum chemistry, the electron configuration is the distribution of electrons of an atom or molecule (or other physical structure) in atomic or molecular orbitals.[1] For example, the electron configuration of the neon atom is 1s2 2s2 2p6, meaning that the 1s, 2s and 2p subshells are occupied by 2, 2 and 6 electrons respectively.
Electronic configurations describe each electron as moving independently in an orbital, in an average field created by all other orbitals. Mathematically, configurations are described by Slater determinants or configuration state functions.
According to the laws of quantum mechanics, for systems with only one electron, a level of energy is associated with each electron configuration and in certain conditions, electrons are able to move from one configuration to another by the emission or absorption of a quantum of energy, in the form of a photon.
Knowledge of the electron configuration of different atoms is useful in understanding the structure of the periodic table of elements. This is also useful for describing the chemical bonds that hold atoms together. In bulk materials, this same idea helps explain the peculiar properties of lasers and semiconductors.
Shells and subshells
[Table of atomic orbital clouds for the s (ℓ = 0) and p (ℓ = 1) subshells at n = 1 and n = 2, showing the corresponding mℓ values (0 for s; 0 and ±1 for p) and the s, pz, px and py orbitals.]
An electron shell is the set of allowed states that share the same principal quantum number, n (the number before the letter in the orbital label), that electrons may occupy. An atom's nth electron shell can accommodate 2n2 electrons, e.g. the first shell can accommodate 2 electrons, the second shell 8 electrons, the third shell 18 electrons and so on. The factor of two arises because the allowed states are doubled due to electron spin—each atomic orbital admits up to two otherwise identical electrons with opposite spin, one with a spin +1/2 (usually denoted by an up-arrow) and one with a spin −1/2 (with a down-arrow).
A subshell is the set of states defined by a common azimuthal quantum number, ℓ, within a shell. The value of ℓ is in the range from 0 to n − 1. The values ℓ = 0, 1, 2, 3 correspond to the s, p, d, and f labels, respectively. For example, the 3d subshell has n = 3 and ℓ = 2. The maximum number of electrons that can be placed in a subshell is given by 2(2ℓ + 1). This gives two electrons in an s subshell, six electrons in a p subshell, ten electrons in a d subshell and fourteen electrons in an f subshell.
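These two counting rules are easy to check mechanically; the following is a minimal Python sketch (illustrative only, not from the article):

```python
def shell_capacity(n):
    """Electrons that fit in the shell with principal quantum number n: 2n^2."""
    return 2 * n ** 2

def subshell_capacity(l):
    """Electrons that fit in the subshell with azimuthal quantum number l: 2(2l + 1)."""
    return 2 * (2 * l + 1)

print([shell_capacity(n) for n in (1, 2, 3, 4)])      # [2, 8, 18, 32]
print([subshell_capacity(l) for l in (0, 1, 2, 3)])   # [2, 6, 10, 14] for s, p, d, f
```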
For atoms with many electrons, this notation can become lengthy and so an abbreviated notation is used. The electron configuration can be visualized as the core electrons, equivalent to the noble gas of the preceding period, and the valence electrons: each element in a period differs only by the last few subshells. Phosphorus, for instance, is in the third period. It differs from the second-period neon, whose configuration is 1s2 2s2 2p6, only by the presence of a third shell. The portion of its configuration that is equivalent to neon is abbreviated as [Ne], allowing the configuration of phosphorus to be written as [Ne] 3s2 3p3 rather than writing out the details of the configuration of neon explicitly. This convention is useful as it is the electrons in the outermost shell that most determine the chemistry of the element.
The superscript 1 for a singly occupied subshell is not compulsory; for example aluminium may be written as either [Ne] 3s2 3p1 or [Ne] 3s2 3p. In atoms where a subshell is unoccupied despite higher subshells being occupied (as is the case in some ions, as well as certain neutral atoms shown to deviate from the Madelung rule), the empty subshell is either denoted with a superscript 0 or left out altogether. For example, neutral palladium may be written as either [Kr] 4d10 5s0 or simply [Kr] 4d10, and the lanthanum(III) ion may be written as either [Xe] 4f0 or simply [Xe].[4]
Energy of ground state and excited states
As an example, the ground state configuration of the sodium atom is 1s2 2s2 2p6 3s1, as deduced from the Aufbau principle (see below). The first excited state is obtained by promoting a 3s electron to the 3p orbital, to obtain the 1s2 2s2 2p6 3p1 configuration, abbreviated as the 3p level. Atoms can move from one configuration to another by absorbing or emitting energy. In a sodium-vapor lamp for example, sodium atoms are excited to the 3p level by an electrical discharge, and return to the ground state by emitting yellow light of wavelength 589 nm.
Usually, the excitation of valence electrons (such as 3s for sodium) involves energies corresponding to photons of visible or ultraviolet light. The excitation of core electrons is possible, but requires much higher energies, generally corresponding to x-ray photons. This would be the case for example to excite a 2p electron of sodium to the 3s level and form the excited 1s2 2s2 2p5 3s2 configuration.
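To put rough numbers on these energy scales, the photon energy is E = hc/λ; the short Python sketch below is only illustrative:

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_m):
    """Photon energy E = h*c/lambda, converted to electron-volts."""
    return H * C / wavelength_m / EV

print(photon_energy_ev(589e-9))  # ~2.1 eV: the yellow sodium line (valence excitation)
print(photon_energy_ev(1e-9))    # ~1240 eV: x-ray regime (core-electron excitation)
```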
Irving Langmuir was the first to propose an arrangement of electrons, in his 1919 article "The Arrangement of Electrons in Atoms and Molecules", in which, building on Gilbert N. Lewis's cubical atom theory and Walther Kossel's chemical bonding theory, he outlined his "concentric theory of atomic structure".[7] Langmuir had developed his work on electron atomic structure from other chemists, as is shown in the development of the history of the periodic table and the octet rule. Niels Bohr (1923) incorporated Langmuir's model that the periodicity in the properties of the elements might be explained by the electronic structure of the atom.[8] His proposals were based on the then current Bohr model of the atom, in which the electron shells were orbits at a fixed distance from the nucleus. Bohr's original configurations would seem strange to a present-day chemist: sulfur was given as 2.4.4.6 instead of 1s2 2s2 2p6 3s2 3p4 (2.8.6). Bohr used 4 and 6 following Alfred Werner's 1893 paper. In fact, the chemists believed in atoms long before the physicists. Langmuir began his paper referenced above by saying, "The problem of the structure of atoms has been attacked mainly by physicists who have given little consideration to the chemical properties which must ultimately be explained by a theory of atomic structure. The vast store of knowledge of chemical properties and relationships, such as is summarized by the Periodic Table, should serve as a better foundation for a theory of atomic structure than the relatively meager experimental data along purely physical lines... These electrons arrange themselves in a series of concentric shells, the first shell containing two electrons, while all other shells tend to hold eight." The valence electrons in the atom were described by Richard Abegg in 1904.[9]
In 1924, E. C. Stoner incorporated Sommerfeld's third quantum number into the description of electron shells, and correctly predicted the shell structure of sulfur to be 2.8.6.[10] However neither Bohr's system nor Stoner's could correctly describe the changes in atomic spectra in a magnetic field (the Zeeman effect).
Pauli resolved this with his exclusion principle (1925): It should be forbidden for more than one electron with the same value of the main quantum number n to have the same value for the other three quantum numbers k [ℓ], j [mℓ] and m [ms].
The Schrödinger equation, published in 1926, gave three of the four quantum numbers as a direct consequence of its solution for the hydrogen atom:[2] this solution yields the atomic orbitals that are shown today in textbooks of chemistry (and above). The examination of atomic spectra allowed the electron configurations of atoms to be determined experimentally, and led to an empirical rule (known as Madelung's rule (1936),[12] see below) for the order in which atomic orbitals are filled with electrons.
Atoms: Aufbau principle and Madelung rule
The aufbau principle (from the German Aufbau, "building up, construction") was an important part of Bohr's original concept of electron configuration. It may be stated as:[13]
a maximum of two electrons are put into orbitals in the order of increasing orbital energy: the lowest-energy subshells are filled before electrons are placed in higher-energy orbitals.
[Figure: The approximate order of filling of atomic orbitals, following the arrows from 1s to 7p. (After 7p the order includes subshells outside the range of the diagram, starting with 8s.)]
The principle works very well (for the ground states of the atoms) for the known 118 elements, although it is sometimes slightly wrong. The modern form of the aufbau principle describes an order of orbital energies given by Madelung's rule (or Klechkowski's rule). This rule was first stated by Charles Janet in 1929, rediscovered by Erwin Madelung in 1936,[12] and later given a theoretical justification by V. M. Klechkowski:[14]
1. Subshells are filled in the order of increasing n + ℓ.
2. Where two subshells have the same value of n + ℓ, they are filled in order of increasing n.
This gives the following order for filling the orbitals: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, (8s, 5g, 6f, 7d, 8p, and 9s).
In this list the subshells in parentheses are not occupied in the ground state of the heaviest atom now known (Og, Z = 118).
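The ordering above can also be generated mechanically by sorting subshells first on n + ℓ and then on n; the following is a minimal Python sketch of the rule as just stated (the label string is simply the usual spectroscopic letters):

```python
L_LABELS = "spdfghik"  # spectroscopic letters for l = 0..7 (j is skipped by convention)

def madelung_order(n_max=8):
    """Return subshell labels sorted by (n + l, n), i.e. Madelung's rule."""
    subshells = [(n, l) for n in range(1, n_max + 1) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{L_LABELS[l]}" for n, l in subshells]

print(madelung_order())
# ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '5s', '4d', '5p',
#  '6s', '4f', '5d', '6p', '7s', '5f', '6d', '7p', '8s', ...]
```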
Periodic table
[Figure: Electron configuration table]
The form of the periodic table is closely related to the electron configuration of the atoms of the elements. For example, all the elements of group 2 have an electron configuration of [E] ns2 (where [E] is an inert gas configuration), and have notable similarities in their chemical properties. In general, the periodicity of the periodic table in terms of periodic table blocks is clearly due to the number of electrons (2, 6, 10, 14...) needed to fill s, p, d, and f subshells. These blocks appear as the rectangular sections of the periodic table. The exception is helium, which despite being an s-block atom is conventionally placed with the other noble gases in the p-block due to its chemical inertness, a consequence of its full outer shell.
The outermost electron shell is often referred to as the "valence shell" and (to a first approximation) determines the chemical properties. It should be remembered that the similarities in the chemical properties were remarked on more than a century before the idea of electron configuration.[15] It is not clear how far Madelung's rule explains (rather than simply describes) the periodic table,[16] although some properties (such as the common +2 oxidation state in the first row of the transition metals) would obviously be different with a different order of orbital filling.
Shortcomings of the aufbau principle
Ionization of the transition metals
The naïve application of the aufbau principle leads to a well-known paradox (or apparent paradox) in the basic chemistry of the transition metals. Potassium and calcium appear in the periodic table before the transition metals, and have electron configurations [Ar] 4s1 and [Ar] 4s2 respectively, i.e. the 4s-orbital is filled before the 3d-orbital. This is in line with Madelung's rule, as the 4s-orbital has n + ℓ = 4 (n = 4, ℓ = 0) while the 3d-orbital has n + ℓ = 5 (n = 3, ℓ = 2). After calcium, most neutral atoms in the first series of transition metals (Sc–Zn) have configurations with two 4s electrons, but there are two exceptions. Chromium and copper have electron configurations [Ar] 3d5 4s1 and [Ar] 3d10 4s1 respectively, i.e. one electron has passed from the 4s-orbital to a 3d-orbital to generate a half-filled or filled subshell. In this case, the usual explanation is that "half-filled or completely filled subshells are particularly stable arrangements of electrons". However this is not supported by the facts, as tungsten (W) has a Madelung-following d4s2 configuration and not d5s1, and niobium (Nb) has an anomalous d4s1 configuration that does not give it a half-filled or completely filled subshell.[18]
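A naive aufbau filling is easy to mechanise and reproduces [Ar] 4s1 for potassium (Z = 19) and [Ar] 4s2 for calcium (Z = 20), while necessarily getting chromium and copper wrong; the Python sketch below assumes the Madelung filling order and is not a substitute for spectroscopic data:

```python
L_LABELS = "spdfghik"

# Madelung filling order: sort subshells by (n + l, n)
FILLING_ORDER = sorted(
    ((n, l) for n in range(1, 8) for l in range(n)),
    key=lambda nl: (nl[0] + nl[1], nl[0]),
)

def naive_aufbau(z):
    """Fill z electrons into subshells in Madelung order; capacity is 2(2l + 1)."""
    parts = []
    for n, l in FILLING_ORDER:
        if z <= 0:
            break
        e = min(z, 2 * (2 * l + 1))
        parts.append(f"{n}{L_LABELS[l]}{e}")
        z -= e
    return " ".join(parts)

print(naive_aufbau(19))  # K:  1s2 2s2 2p6 3s2 3p6 4s1  (i.e. [Ar] 4s1)
print(naive_aufbau(20))  # Ca: 1s2 2s2 2p6 3s2 3p6 4s2  (i.e. [Ar] 4s2)
print(naive_aufbau(24))  # Cr: predicts ...4s2 3d4, but the real atom is [Ar] 3d5 4s1
```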
This phenomenon is only paradoxical if it is assumed that the energy order of atomic orbitals is fixed and unaffected by the nuclear charge or by the presence of electrons in other orbitals. If that were the case, the 3d-orbital would have the same energy as the 3p-orbital, as it does in hydrogen, yet it clearly doesn't. There is no special reason why the Fe2+ ion should have the same electron configuration as the chromium atom, given that iron has two more protons in its nucleus than chromium, and that the chemistry of the two species is very different. Melrose and Eric Scerri have analyzed the changes of orbital energy with orbital occupations in terms of the two-electron repulsion integrals of the Hartree-Fock method of atomic structure calculation.[20] More recently Scerri has argued that contrary to what is stated in the vast majority of sources including the title of his previous article on the subject, 3d orbitals rather than 4s are in fact preferentially occupied.[21]
In chemical environments, configurations can change even more: Th3+ as a bare ion has a configuration of [Rn]5f1, yet in most ThIII compounds the thorium atom has a 6d1 configuration instead.[22][23] Mostly, what is present is rather a superposition of various configurations.[18] For instance, copper metal is not well-described by either an [Ar]3d104s1 or an [Ar]3d94s2 configuration, but is rather well described as a 90% contribution of the first and a 10% contribution of the second. Indeed, visible light is already enough to excite electrons in most transition metals, and they often continuously "flow" through different configurations when that happens (copper and its group are an exception).[24]
Other exceptions to Madelung's rule
There are several more exceptions to Madelung's rule among the heavier elements, and as atomic number increases it becomes more and more difficult to find simple explanations such as the stability of half-filled subshells. It is possible to predict most of the exceptions by Hartree–Fock calculations,[25] which are an approximate method for taking account of the effect of the other electrons on orbital energies. Qualitatively, for example, we can see that the 4d elements have the greatest concentration of Madelung anomalies, because the 4d–5s gap is smaller than the 3d–4s and 5d–6s gaps.[26]
For the heavier elements, it is also necessary to take account of the effects of special relativity on the energies of the atomic orbitals, as the inner-shell electrons are moving at speeds approaching the speed of light. In general, these relativistic effects[27] tend to decrease the energy of the s-orbitals in relation to the other atomic orbitals.[28] This is the reason why the 6d elements are predicted to have no Madelung anomalies apart from lawrencium (for which relativistic effects stabilise the p1/2 orbital as well and cause its occupancy in the ground state), as relativity intervenes to make the 7s orbitals lower in energy than the 6d ones.
The table below shows the configurations of the f-block (green) and d-block (blue) atoms. It shows the ground state configuration in terms of orbital occupancy, but it does not show the ground state in terms of the sequence of orbital energies as determined spectroscopically. For example, in the transition metals, the 4s orbital is of a higher energy than the 3d orbitals; and in the lanthanides, the 6s is higher than the 4f and 5d. The ground states can be seen in the Electron configurations of the elements (data page). However this also depends on the charge: a Ca atom has 4s lower in energy than 3d, but a Ca2+ cation has 3d lower in energy than 4s. In practice the configurations predicted by the Madelung rule are at least close to the ground state even in these anomalous cases.[29] The empty f orbitals in lanthanum, actinium, and thorium contribute to chemical bonding,[30][31] as do the empty p orbitals in transition metals.[32]
Vacant s, d, and f orbitals have been shown explicitly, as is occasionally done,[33] to emphasise the filling order and to clarify that even orbitals unoccupied in the ground state (e.g. lanthanum 4f or palladium 5s) may be occupied and bonding in chemical compounds. (The same is also true for the p-orbitals, which are not explicitly shown because they are only actually occupied for lawrencium in gas-phase ground states.)
Electron shells filled in violation of Madelung's rule[34] (red)
Predictions for elements 109–112[35]
Period 4 Period 5 Period 6 Period 7
Lanthanum 57 [Xe] 6s2 4f0 5d1 Actinium 89 [Rn] 7s2 5f0 6d1
Cerium 58 [Xe] 6s2 4f1 5d1 Thorium 90 [Rn] 7s2 5f0 6d2
Praseodymium 59 [Xe] 6s2 4f3 5d0 Protactinium 91 [Rn] 7s2 5f2 6d1
Neodymium 60 [Xe] 6s2 4f4 5d0 Uranium 92 [Rn] 7s2 5f3 6d1
Promethium 61 [Xe] 6s2 4f5 5d0 Neptunium 93 [Rn] 7s2 5f4 6d1
Samarium 62 [Xe] 6s2 4f6 5d0 Plutonium 94 [Rn] 7s2 5f6 6d0
Europium 63 [Xe] 6s2 4f7 5d0 Americium 95 [Rn] 7s2 5f7 6d0
Gadolinium 64 [Xe] 6s2 4f7 5d1 Curium 96 [Rn] 7s2 5f7 6d1
Terbium 65 [Xe] 6s2 4f9 5d0 Berkelium 97 [Rn] 7s2 5f9 6d0
Dysprosium 66 [Xe] 6s2 4f10 5d0 Californium 98 [Rn] 7s2 5f10 6d0
Holmium 67 [Xe] 6s2 4f11 5d0 Einsteinium 99 [Rn] 7s2 5f11 6d0
Erbium 68 [Xe] 6s2 4f12 5d0 Fermium 100 [Rn] 7s2 5f12 6d0
Thulium 69 [Xe] 6s2 4f13 5d0 Mendelevium 101 [Rn] 7s2 5f13 6d0
Ytterbium 70 [Xe] 6s2 4f14 5d0 Nobelium 102 [Rn] 7s2 5f14 6d0
Vanadium 23 [Ar] 4s2 3d3 Niobium 41 [Kr] 5s1 4d4 Tantalum 73 [Xe] 6s2 4f14 5d3 Dubnium 105 [Rn] 7s2 5f14 6d3
Chromium 24 [Ar] 4s1 3d5 Molybdenum 42 [Kr] 5s1 4d5 Tungsten 74 [Xe] 6s2 4f14 5d4 Seaborgium 106 [Rn] 7s2 5f14 6d4
Manganese 25 [Ar] 4s2 3d5 Technetium 43 [Kr] 5s2 4d5 Rhenium 75 [Xe] 6s2 4f14 5d5 Bohrium 107 [Rn] 7s2 5f14 6d5
Iron 26 [Ar] 4s2 3d6 Ruthenium 44 [Kr] 5s1 4d7 Osmium 76 [Xe] 6s2 4f14 5d6 Hassium 108 [Rn] 7s2 5f14 6d6
Cobalt 27 [Ar] 4s2 3d7 Rhodium 45 [Kr] 5s1 4d8 Iridium 77 [Xe] 6s2 4f14 5d7 Meitnerium 109 [Rn] 7s2 5f14 6d7
Nickel 28 [Ar] 4s2 3d8 or [Ar] 4s1 3d9 (disputed)[36] Palladium 46 [Kr] 5s0 4d10 Platinum 78 [Xe] 6s1 4f14 5d9 Darmstadtium 110 [Rn] 7s2 5f14 6d8
Copper 29 [Ar] 4s1 3d10 Silver 47 [Kr] 5s1 4d10 Gold 79 [Xe] 6s1 4f14 5d10 Roentgenium 111 [Rn] 7s2 5f14 6d9
Zinc 30 [Ar] 4s2 3d10 Cadmium 48 [Kr] 5s2 4d10 Mercury 80 [Xe] 6s2 4f14 5d10 Copernicium 112 [Rn] 7s2 5f14 6d10
The various anomalies describe the free atoms and do not necessarily predict chemical behavior. Thus for example neodymium typically forms the +3 oxidation state, despite its configuration [Xe]4f45d06s2 that if interpreted naïvely would suggest a more stable +2 oxidation state corresponding to losing only the 6s electrons. Contrariwise, uranium as [Rn]5f36d17s2 is not very stable in the +3 oxidation state either, preferring +4 and +6.[37]
The electron-shell configuration of elements beyond hassium has not yet been empirically verified, but they are expected to follow Madelung's rule without exceptions until element 120. Element 121 should have the anomalous configuration [Og] 8s2 5g0 6f0 7d0 8p1, having a p rather than a g electron. Electron configurations beyond this are tentative and predictions differ between models,[38] but Madelung's rule is expected to break down due to the closeness in energy of the 5g, 6f, 7d, and 8p1/2 orbitals.[35] That said, the filling sequence 8s, 5g, 6f, 7d, 8p is predicted to hold approximately, with perturbations due to the huge spin-orbit splitting of the 8p and 9p shells, and the huge relativistic stabilisation of the 9s shell.[39]
Open and closed shells
In the context of atomic orbitals, an open shell is a valence shell which is not completely filled with electrons or that has not given all of its valence electrons through chemical bonds with other atoms or molecules during a chemical reaction. Conversely a closed shell is obtained with a completely filled valence shell. This configuration is very stable.[40]
For molecules, "open shell" signifies that there are unpaired electrons. In molecular orbital theory, this leads to molecular orbitals that are singly occupied. In computational chemistry implementations of molecular orbital theory, open-shell molecules have to be handled by either the restricted open-shell Hartree–Fock method or the unrestricted Hartree–Fock method. Conversely, a closed-shell configuration corresponds to a state where all molecular orbitals are either doubly occupied or empty (a singlet state).[41] Open-shell molecules are more difficult to study computationally.[42]
Noble gas configuration
Noble gas configuration is the electron configuration of noble gases. The basis of all chemical reactions is the tendency of chemical elements to acquire stability. Main-group atoms generally obey the octet rule, while transition metals generally obey the 18-electron rule. The noble gases (He, Ne, Ar, Kr, Xe, Rn) are less reactive than other elements because they already have a noble gas configuration. Oganesson is predicted to be more reactive due to relativistic effects for heavy atoms.
Period Element Configuration
1 He 1s2
2 Ne 1s2 2s22p6
3 Ar 1s2 2s22p6 3s23p6
4 Kr 1s2 2s22p6 3s23p6 4s23d104p6
5 Xe 1s2 2s22p6 3s23p6 4s23d104p6 5s24d105p6
6 Rn 1s2 2s22p6 3s23p6 4s23d104p6 5s24d105p6 6s24f145d106p6
7 Og 1s2 2s22p6 3s23p6 4s23d104p6 5s24d105p6 6s24f145d106p6 7s25f146d107p6
Every system has the tendency to acquire the state of stability or a state of minimum energy, and so chemical elements take part in chemical reactions to acquire a stable electronic configuration similar to that of their nearest noble gas. An example of this tendency is two hydrogen (H) atoms reacting with one oxygen (O) atom to form water (H2O). Neutral atomic hydrogen has 1 electron in the valence shell, and on formation of water it acquires a share of a second electron coming from oxygen, so that its configuration is similar to that of its nearest noble gas helium with 2 electrons in the valence shell. Similarly, neutral atomic oxygen has 6 electrons in the valence shell, and acquires a share of two electrons from the two hydrogen atoms, so that its configuration is similar to that of its nearest noble gas neon with 8 electrons in the valence shell.
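Continuing the ordering sketch from earlier (again my own illustration; the helper names are mine), the same Madelung order can be used to fill electrons one subshell at a time and reproduce the noble-gas rows of the table above. Note that this naive filling deliberately ignores the Madelung anomalies (Cr, Cu, Pd, and so on) discussed earlier:

```python
# Sketch: naive aufbau filling in Madelung order, printed as a configuration string.
L_LETTERS = "spdfg"

def configuration(num_electrons, max_n=7):
    order = sorted(((n, l) for n in range(1, max_n + 1) for l in range(n)),
                   key=lambda nl: (nl[0] + nl[1], nl[0]))
    parts, remaining = [], num_electrons
    for n, l in order:
        if remaining <= 0:
            break
        e = min(2 * (2 * l + 1), remaining)   # subshell capacity is 2(2l + 1)
        parts.append(f"{n}{L_LETTERS[l]}{e}")
        remaining -= e
    return " ".join(parts)

for z, symbol in [(2, "He"), (10, "Ne"), (18, "Ar"), (36, "Kr"), (54, "Xe"), (86, "Rn")]:
    print(symbol, configuration(z))   # matches the noble-gas table above
```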
Electron configuration in molecules
In molecules, the situation becomes more complex, as each molecule has a different orbital structure. The molecular orbitals are labelled according to their symmetry,[43] rather than the atomic orbital labels used for atoms and monatomic ions: hence, the electron configuration of the dioxygen molecule, O2, is written 1σg2 1σu2 2σg2 2σu2 3σg2 1πu4 1πg2,[44][45] or equivalently 1σg2 1σu2 2σg2 2σu2 1πu4 3σg2 1πg2.[1] The term 1πg2 represents the two electrons in the two degenerate π*-orbitals (antibonding). From Hund's rules, these electrons have parallel spins in the ground state, and so dioxygen has a net magnetic moment (it is paramagnetic). The explanation of the paramagnetism of dioxygen was a major success for molecular orbital theory.
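As a rough illustration of the electron counting behind this (a sketch of my own, not taken from the article; the orbital labels and their order follow the O2 configuration quoted above), one can fill the listed molecular orbitals and read off the bond order and the number of unpaired electrons:

```python
# Sketch: fill the O2 molecular orbitals quoted above; report bond order and unpaired electrons.
# Each entry: (label, degeneracy, is_bonding). Antibonding orbitals are 1su, 2su and 1pg (pi*).
ORBITALS = [
    ("1sg", 1, True), ("1su", 1, False),
    ("2sg", 1, True), ("2su", 1, False),
    ("3sg", 1, True), ("1pu", 2, True), ("1pg", 2, False),
]

def fill(electrons):
    bonding = antibonding = unpaired = 0
    for label, degeneracy, is_bonding in ORBITALS:
        e = min(2 * degeneracy, electrons)
        electrons -= e
        if is_bonding:
            bonding += e
        else:
            antibonding += e
        # Hund's rule: within a degenerate set, electrons stay unpaired as long as possible.
        unpaired += e if e <= degeneracy else 2 * degeneracy - e
    return (bonding - antibonding) / 2, unpaired

bond_order, unpaired = fill(16)   # O2 has 16 electrons
print(bond_order, unpaired)       # 2.0 and 2 -> a double bond and a paramagnetic molecule
```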
Electron configuration in solids
References
3. ^ IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "Pauli exclusion principle". doi:10.1351/goldbook.PT07089
4. ^ Rayner-Canham, Geoff; Overton, Tina (2014). Descriptive Inorganic Chemistry (6 ed.). Macmillan Education. pp. 13–15. ISBN 978-1-319-15411-0.
6. ^ Ebbing, Darrell D.; Gammon, Steven D. (12 January 2007). General Chemistry. p. 284. ISBN 978-0-618-73879-3.
7. ^ Langmuir, Irving (June 1919). "The Arrangement of Electrons in Atoms and Molecules". Journal of the American Chemical Society. 41 (6): 868–934. doi:10.1021/ja02227a002.
9. ^ Abegg, R. (1904). "Die Valenz und das periodische System. Versuch einer Theorie der Molekularverbindungen" [Valency and the periodic system. Attempt at a theory of molecular compounds]. Zeitschrift für Anorganische Chemie. 39 (1): 330–380. doi:10.1002/zaac.19040390125.
10. ^ Stoner, E.C. (1924). "The distribution of electrons among atomic levels". Philosophical Magazine. 6th Series. 48 (286): 719–36. doi:10.1080/14786442408634535.
11. ^ Pauli, Wolfgang (1925). "Über den Einfluss der Geschwindigkeitsabhängigkeit der Elektronenmasse auf den Zeemaneffekt". Zeitschrift für Physik. 31 (1): 373. Bibcode:1925ZPhy...31..373P. doi:10.1007/BF02980592. S2CID 122477612. English translation from Scerri, Eric R. (1991). "The Electron Configuration Model, Quantum Mechanics and Reduction" (PDF). The British Journal for the Philosophy of Science. 42 (3): 309–25. doi:10.1093/bjps/42.3.309.
13. ^ IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "aufbau principle". doi:10.1351/goldbook.AT06996
14. ^ Wong, D. Pan (1979). "Theoretical justification of Madelung's rule". Journal of Chemical Education. 56 (11): 714–18. Bibcode:1979JChEd..56..714W. doi:10.1021/ed056p714.
16. ^ Scerri, Eric R. (1998). "How Good Is the Quantum Mechanical Explanation of the Periodic System?" (PDF). Journal of Chemical Education. 75 (11): 1384–85. Bibcode:1998JChEd..75.1384S. doi:10.1021/ed075p1384. Ostrovsky, V.N. (2005). "On Recent Discussion Concerning Quantum Justification of the Periodic Table of the Elements". Foundations of Chemistry. 7 (3): 235–39. doi:10.1007/s10698-005-2141-y. S2CID 93589189.
18. ^ a b Scerri, Eric (2019). "Five ideas in chemical education that must die". Foundations of Chemistry. 21: 61–69. doi:10.1007/s10698-018-09327-y. S2CID 104311030.
20. ^ Melrose, Melvyn P.; Scerri, Eric R. (1996). "Why the 4s Orbital is Occupied before the 3d". Journal of Chemical Education. 73 (6): 498–503. Bibcode:1996JChEd..73..498M. doi:10.1021/ed073p498.
21. ^ Scerri, Eric (7 November 2013). "The trouble with the aufbau principle". Education in Chemistry. Vol. 50 no. 6. Royal Society of Chemistry. pp. 24–26. Archived from the original on 21 January 2018. Retrieved 12 June 2018.
24. ^ Ferrão, Luiz; Machado, Francisco Bolivar Correto; Cunha, Leonardo dos Anjos; Fernandes, Gabriel Freire Sanzovo. "The Chemical Bond Across the Periodic Table: Part 1 – First Row and Simple Metals". doi:10.26434/chemrxiv.11860941.
25. ^ Meek, Terry L.; Allen, Leland C. (2002). "Configuration irregularities: deviations from the Madelung rule and inversion of orbital energy levels". Chemical Physics Letters. 362 (5–6): 362–64. Bibcode:2002CPL...362..362M. doi:10.1016/S0009-2614(02)00919-3.
26. ^ Kulsha, Andrey (2004). "Периодическая система химических элементов Д. И. Менделеева" [D. I. Mendeleev's periodic system of the chemical elements] (PDF). primefan.ru (in Russian). Retrieved 17 May 2020.
28. ^ Pyykkö, Pekka (1988). "Relativistic effects in structural chemistry". Chemical Reviews. 88 (3): 563–94. doi:10.1021/cr00085a006.
29. ^ See the NIST tables
31. ^ Xu, Wei; Ji, Wen-Xin; Qiu, Yi-Xiang; Schwarz, W. H. Eugen; Wang, Shu-Guang (2013). "On structure and bonding of lanthanoid trifluorides LnF3 (Ln = La to Lu)". Physical Chemistry Chemical Physics. 2013 (15): 7839–47. Bibcode:2013PCCP...15.7839X. doi:10.1039/C3CP50717C. PMID 23598823.
32. ^ Example for platinum
33. ^ See for example this Russian periodic table poster by A. V. Kulsha and T. A. Kolevich
38. ^ Umemoto, Koichiro; Saito, Susumu (1996). "Electronic Configurations of Superheavy Elements". Journal of the Physical Society of Japan. 65 (10): 3175–9. doi:10.1143/JPSJ.65.3175. Retrieved 31 January 2021.
39. ^ Pyykkö, Pekka (2016). Is the Periodic Table all right ("PT OK")? (PDF). Nobel Symposium NS160 – Chemistry and Physics of Heavy and Superheavy Elements.
40. ^ "Periodic table". Archived from the original on 3 November 2007. Retrieved 1 November 2007.
41. ^ "Chapter 11. Configuration Interaction". www.semichem.com.
42. ^ "Laboratory for Theoretical Studies of Electronic Structure and Spectroscopy of Open-Shell and Electronically Excited Species - iOpenShell". iopenshell.usc.edu.
External links |
3cbf954d1c359026 | COSMOLOGICAL CONS(tant → erved charge)
The road to black hole thermodynamics with Λ
by Dmitry Chernyavsky and Kamal Hajian
What are volume and pressure in black hole thermodynamics? That is the question!
What do the gas in a balloon and a black hole have in common? For a regular CQG reader the answer should be obvious; both can be described within the framework of thermodynamics. However, we know that the gas in a balloon is characterised by volume and pressure, as well as other thermodynamic quantities. So, a natural question arises about analogues of the volume and pressure for a black hole.
Answering this question, black hole physicists have noticed that if the universe is filled with a non-zero cosmological constant Λ, this mysterious entity can be absorbed in the energy-momentum tensor of matter, and its contribution resembles a perfect fluid with a pressure proportional to Λ. Continuing with this analogy, one can also introduce a ‘thermodynamic volume’ for a black hole. For instance, the appropriate volume which satisfies the first law of thermodynamics for the Schwarzschild black hole is equal to the volume of a ball with the same radius, but in flat space! Using the notions of the black hole pressure P and volume V, it is standard to vary the cosmological constant, generalising the first law of black hole thermodynamics by a V δP term.
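For concreteness, the identifications usually made in this extended-thermodynamics picture can be summarised as follows (a hedged sketch in geometric units G = c = 1; the authors' own conventions may differ):
\[
P = -\frac{\Lambda}{8\pi}, \qquad V_{\mathrm{Schw}} = \frac{4}{3}\pi r_h^{3}, \qquad \delta M = T\,\delta S + V\,\delta P ,
\]
where \(r_h\) is the horizon radius, so that V is indeed the flat-space volume of a ball with the horizon's radius, as described above.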
Dmitry Chernyavsky and Kamal Hajian at Lake Sevan in Armenia, where we started to think about the cosmological conserved charge instead of the cosmological constant.
CQG+ Insight: The problem of perturbative charged massive scalar field in the Kerr-Newman-(anti) de Sitter black hole background
Written by Dr Georgios V Kraniotis, a theoretical physicist in the physics department at the University of Ioannina.
Solving in closed form the Klein-Gordon-Fock equation on curved black hole spacetimes
Dr Georgios V Kraniotis (University of Ioannina)
A new exciting era in the exploration of spacetime
The investigation of the interaction of a scalar particle with the gravitational field is of importance in the attempts to construct quantum theories on curved spacetime backgrounds. The general relativistic form that models such an interaction is the so-called Klein-Gordon-Fock (KGF) wave equation, named after its three independent inventors. The discovery of a Higgs-like scalar particle at CERN, in conjunction with the recent spectacular observation of gravitational waves (GW) from the binary black hole mergers GW150914 and GW151226 by the LIGO collaboration, adds further impetus for probing the interaction of scalar degrees of freedom with the strong gravitational field of a black hole.
Kerr black hole perturbations and the separation of Dirac's equations were a central theme in the investigations of Teukolsky and Chandrasekhar [1].
All the above motivated our research recently published in CQG on the scalar charged massive field perturbations for the most general four-dimensional curved spacetime background of a rotating, charged black hole, in the presence of the cosmological constant Λ [2].
Where interesting physics meets profound mathematics
The KGF equation is the relativistic version of the Schrödinger equation and thus is one of the fundamental equations in physics.
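For reference, the equation being solved has the general form below (my notation, assuming minimal coupling of a scalar of mass μ and charge q to the background metric and the electromagnetic potential A; the paper's signs and conventions may differ):
\[
\frac{1}{\sqrt{-g}}\left(\partial_\mu - i q A_\mu\right)\!\left[\sqrt{-g}\, g^{\mu\nu}\left(\partial_\nu - i q A_\nu\right)\Phi\right] - \mu^2 \Phi = 0 .
\]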
In our recent CQG paper, we examined ... |
58c69457385d994a | Monday, July 07, 2014
Droplets and pilot waves vs quantum mechanics
All of Nature is governed by mathematics. So we encounter mathematical objects and equations everywhere. Even some types of ordinary or partial differential equations are recycled hundreds of times, in very diverse situations.
A person who hasn't been sleeping since the time when she was an embryo must have noticed this omnipresence of mathematics and is no longer shocked by it. In fact, such a sane person has improved her resolution and precision a little bit so she is able to see differences.
One may surely design some objects that obey equations not too mathematically different from those that Louis de Broglie (and 25 years later, if we just pretend that plagiarism is OK, David Bohm) proposed to replace proper quantum mechanics. The waves in the model may propagate similarly.
In fact, this "modeling" and "visualization" has a rather long history: George Francis FitzGerald constructed a working model of the "luminiferous aether" emulating Maxwell's equations (partly inspired by James Clerk Maxwell's own engineering sketches of the gadget) out of wheels and gears. Mechanics flourished in the 19th century. These successes couldn't change anything about the fact that the luminiferous aether doesn't exist. One would think that people would learn some lesson. On the contrary, a vast majority of the people learn nothing at all and they are making much stupider mistakes than the people of the 19th century did.
The problem is that there are also huge differences that you shouldn't overlook unless your brain is completely messed up. While the wheels-and-gears model of the aether pretty much did what it was supposed to do, there are differences both in the physical interpretation and in the mathematical details of the two situations here – droplets and the wave function. You might say that the former (physical, interpretational, conceptual differences) are more profound but once you learn to think quantitatively, you actually see that the latter (the mathematical differences) are equally profound and, in fact, equivalent.
The physical, conceptual differences between any quantities describing droplets on one side and the wave function on the other side are clear. The former are observable – you may actually measure what the shape of the droplet looks like; you can't measure the wave function by any apparatus, at least not in a single repetition of the experiment. The former has an objective interpretation; the latter has a probabilistic interpretation, and so on. The wave function just encodes all the probability distributions for actual observables – but the wave function isn't and can't be one of them.
There are also important enough mathematical differences.
In Schrödinger's picture (and even in the misguided equation controlling the "pilot wave" proposed to supersede quantum mechanics), the wave function obeys an exactly linear equation\[
i\hbar\frac{\partial}{\partial t} \ket\psi = H \ket\psi.
\] It is very important that all such equations are exactly linear and the actions of observables and operators expressing transformations are exactly linear. In combination with Born's rule, the exact linearity is required by the laws of "pure logic" expressed using the probability calculus, e.g. for the fact that\[
P(A \,\text{or}\, B) = P(A) + P(B) - P(A \,\text{and}\, B).
\] Note that this equation is linear in the probabilities and it has to be so for a simple reason. The probabilities are just ratios of repetitions of an event in which a condition is satisfied. The binary operators "OR" and/or "AND" correspond to the intersections and unions of sets of these repetitions of events and the equation above is nothing else than the equation dictating the number of elements in a union of two sets (divided by the total number of repetitions of the event). You just can't modify these rules, not even by a tiny amount.
All the experiments we have ever made are consistent with the exactly linear evolution of the wave function and the exact linearity of all the operators encoding observables – any observables. But once again, you don't really need to make experiments. This is a matter of elementary consistency of quantum mechanics.
On the other hand, the shape of droplets is encoded in observables, e.g. in functions \(x(t)\) of time or in the fields \(\varphi(x,y,z,t)\) etc. Classically, they are \(c\)-number-valued functions of time (or spacetime) coordinates. Quantum mechanically, these are observables – i.e. linear operators on the Hilbert space.
If you look for the most direct quantum counterparts, the classical equations of motion are most straightforwardly translated to the Heisenberg equations of motion for the operators in the Heisenberg picture of quantum mechanics. And this evolution of the classical quantities or the quantum operators is pretty much never linear in the operators. Linear equations of motion would mean that the system is non-interacting and completely uninteresting. Using the arguments based on naturalness, or the Gell-Mann totalitarian principle, if you wish, pretty much every higher-order (nonlinear) term may appear and will appear in the equations of motion.
So even if you forget about the completely different interpretations of the wave function and the shape of droplets, there is a difference (well, many differences, but I chose this one) at the purely mathematical level. The equations governing the evolution of the wave function must be exactly linear and there can't be any debate about it because it's a matter of consistency. The equations governing the evolution of the shape of droplets are almost certainly nonlinear because there is no general constraint that would ban the nonlinearity, and they are therefore 99.999...% likely to occur. You may find situations and approximations in which the nonlinearities are small or the nonlinear equations emulate some linear ones for other reasons, but fundamentally they are very different.
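To make the contrast concrete, here is a small numerical illustration of my own (a toy discretized Hamiltonian, not anything taken from the droplet papers): the Schrödinger step is a linear map, so it acts on a superposition term by term, while a toy nonlinear update rule does not.

```python
# Toy check: a unitary Schrodinger step respects superposition; a nonlinear rule does not.
import numpy as np

rng = np.random.default_rng(0)
N, dt = 64, 0.05
# simple tridiagonal Hamiltonian: nearest-neighbour hopping plus a random potential (hbar = 1)
H = np.diag(rng.uniform(0, 1, N)) + np.diag(-np.ones(N - 1), 1) + np.diag(-np.ones(N - 1), -1)
w, v = np.linalg.eigh(H)
U = v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T   # exact unitary step exp(-i H dt)

psi1 = rng.normal(size=N) + 0j
psi2 = rng.normal(size=N) + 0j

def nonlinear_step(psi, eps=0.1):                    # toy nonlinear rule, NOT quantum mechanics
    return U @ psi + eps * np.abs(psi) ** 2 * psi

linear_gap = np.linalg.norm(U @ (psi1 + psi2) - (U @ psi1 + U @ psi2))
nonlinear_gap = np.linalg.norm(nonlinear_step(psi1 + psi2)
                               - (nonlinear_step(psi1) + nonlinear_step(psi2)))
print(linear_gap)      # ~1e-14: linear evolution preserves superpositions exactly
print(nonlinear_gap)   # order 1: the nonlinear toy dynamics does not
```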
So these things may be similar at the level of containing some remotely similar differential equations but as soon as your resolution gets better than mine is after 10 pints of beer, you should be able to see that there are profound differences both of mathematical and physical nature.
Well, sometimes it may be true but what you mean is that there is an elephant inside the motor who is pushing the wheels using its trunk. That's cute – well, the child is perhaps cute which is why everything she likes is cute – but for an adult person, it's just stupid. Even if a child is satisfied with the explanation, it clearly doesn't work. It cannot satisfy a person who is able to ask why and creatively test the ideas that are being pushed into her ears.
The motors are just not being built out of elephants.
And the situation with the wave function is completely analogous. It demonstrably and obviously has nothing to do with the evolution of droplets of any kind. The probabilistic character of the wave function isn't a topic for deep philosophical debates or research. One can make elementary and trivial observations to directly and instantly see that the wave function has to be interpreted probabilistically, otherwise it has nothing to do with Nature! The probabilistic character of the wave function – so different from the evolving droplets – is an empirical fact that is trivial to prove by some of the simplest and fastest observations we can make. Just see that the double slit experiment creates individual dots while macroscopic droplets don't. In fact, the wave function has nothing to do with the evolution of any dynamical variables – observables – in any physical system in the world because the wave function is – importantly enough – not an observable.
There are way too many things in the articles that drive me up the wall. While a few physicists – e.g. Anthony Leggett – are allowed to mention that this whole droplet-quantum "work" is worthless šit, there are many others who positively hype it, including some favorite physicists of mine. Their affiliation often happens to be the same as that of Mr Bush. But a bias that is this obvious is just bad. Note that I haven't mentioned the name of the particular physicist who disappointed me, in order to keep the name confidential, but to promote this kind of šit just because they do it at MIT is outrageous, Frank! ;-)
Even if I subtract the pathetic "research" and the disappointing support of it by some well-known names, there are just so many things that are so insultingly stupid, manipulative, and contradicting the very essence of the scientific method. Wolchover's title is
Have We Been Interpreting Quantum Mechanics Wrong This Whole Time?
It's terrible how she talks about "we". She surely doesn't belong among the physicists who have something sensible to say about these matters – she is just an inkspiller. She has no clue what quantum mechanics actually is, and she (just like 99+ percent of the mankind) has never had such a clue – the whole time. But even physicists who have been talking about these matters are in no way "we". Clearly, quantum mechanics was too hard and too new for some physicists from the beginning – including a revolutionary named Einstein – so these people clearly didn't belong among the "we" of the people who properly understood quantum mechanics. But even the people who effectively understood quantum mechanics would find some differences in their "interpretation" of quantum mechanics – and the most reasonable ones would point out that the very phrase "interpretation of quantum mechanics" is silly. Quantum mechanics is the new theory so once we describe its rules and axioms, we know them and there's nothing to interpret. On the contrary, we may apply these rules to some situations and derive certain particular insights, for example the classical limit. In this sense, it's the "classical physics" that may be interpreted within quantum mechanics but quantum mechanics doesn't need (and doesn't allow) any additional "interpretations". You either understand the theory or you don't.
More generally, science just doesn't work in this collectivist way, and it will never work like that. So if some hopeless morons decide that the probability of the pilot wave theory has increased because they have played with droplets like retarded 6-year-old children, it will still be true that "we" continue to know that the paradigm is as wrong as it was in 1927.
I wanted to write a rant so that I may finally close the two insultingly stupid pages about the "droplets of quantum mechanics". I hope that now I will feel a bit lighter and please don't bother me with this junk again.
This blog post won't be proofread again because this whole theme is an amazing waste of time.
1. Yes, it is a BIG waste of time :-(
2. Well, obviously QM is commonly regarded incomplete because it simply doesn't tell us what we need to know about nature in order to come to a deeper understanding about its basic mechanisms. "Probability" is not a mechanism, it is just an outcome, a consequence of something we don't yet understand.
Take the case of a free neutron: A neutron inside a lab is objectively there, it can be "measured" and manipulated, until it suddenly decays. Now, why did that particular neutron decay after, say, 10 minutes, while another sample decays after 15 minutes? QM yields the probabilities for this decay process, but it does not predict when exactly a selected neutron is going to decay. This is a shortcoming that has to be overcome. A physicist always has to ask why; he should never be satisfied with "a set of rules" which yield some probabilities. This would not be physics, just bookkeeping. We need to go deeper into these mechanisms and thereby understand what exactly makes that particular neutron decay at that particular time. QM is only an intermediate step toward that goal.
3. thanks lubos, for letting me know what you think.
sorry to have burdened you with BS.
mea culpa
4. No problem, David, if you didn't do it, I would still have gotten 7 copies of that. ;-)
5. I saw these dancing droplets in an episode of Through the Wormhole and wondered if your response would be "Crackpottery!". You didn't disappoint :) I imagine that many concepts presented in this series fall into the same category, but I still find it entertaining.
6. QM is QFT in 1 time and 0 space dimension.
7. Federico Barocci, Jul 7, 2014, 11:03:00 PM
Hi Lubos, what do you think about the following. John Bell states in his book “Speakable and unspeakable in quantum mechanics”, Chapter 14, page 115, that “the guiding wave, in the general case, propagates not in ordinary three-space but in a multidimensional-configuration space is the origin of the notorious 'nonlocality' of quantum mechanics. It is a merit of the de Broglie-Bohm version to bring this out so explicitly that it cannot be ignored.” If I understand him well, Bell also argues that we should get rid of the idea of “particles” or droplets described, as you say, by almost certainly nonlinear equations. The only valid way to speak about “particles” is by using their multi-dimensional wavefunction ground state. Then Bell refers to Ghirardi, Rimini and Weber: “The idea is that while a wavefunction normally evolves according to the Schrödinger equation, from time to time it makes a jump. Yes, a jump!”. Now, these “jumps” are “reduced” or “collapsed” wavefunctions that we observe as “particles” in ordinary 3d space. And further on page 209: “[Schrödinger] would have liked the complete absence of particles from the theory, and yet the emergence of 'particle tracks', and more generally of the 'particularity' of the world, on the macroscopic level.” Particles, in the end, are the smallest concentration of energy incorporated into the wave; thus, they are themselves collapsed waves described by the exactly linear evolution of the multi-dimensional wavefunction. I thought that the experiments carried out by Couder et al. could be nice approximations of the underlying ground state.
8. It is not only that more and more elementary established knowledge gets attacked in the course of time.
Also, to me it seems that first the nonsense appeared "only" in the popular "science" channels (which made physicists wrongly ignore it), but in the course of time it also started to appear at places one should be able to trust that they are free of crap, such as "peer-reviewed" journals for example.
9. I think the problem at hand is you're assuming there actually is a reason the neutron decays at some particular time, and that the electron and anti-neutrino have to go off in particular directions when this happens. And there's no reason that there actually should be such a reason. Maybe Nature really is random and that there are uncaused events. Why shouldn't there be?
If we had good reason to suspect that apparently identical neutrons were actually different, and that there were an underlying cause for different neutrons decaying with slightly different lifetimes, then it'd be a fair question to ask what these underlying differences were, and how they effected different decay times. But the fact of the matter is we have no reason to think any neutron is different than any other neutron. (And saying "well they must be different because they decay after different lengths of time" is just question-begging; it's the very claim that different outcomes require different initial conditions that's being contested, so you can't use your claim as a premise.)
10. I can’t resist further beating this dead horse. If I correctly understand the experimental set-up, the droplets exhibit the “interference” pattern only after some period of time required for the putative pilot waves to establish the two-slit interference pattern. This is fundamentally different from the quantum mechanical case, where the interference pattern develops even when the flux of particles is so low as to guarantee that there is (at most) one particle in the apparatus at any given time. I therefore assume that if one starts with a quiescent apparatus, introduces one droplet, and then waits until the “pilot wave” has damped out before introducing the next droplet, you will see a mish-mash rather than a simulation of an interference pattern. To argue that this experiment has any implications for QM or any ability to improve our understanding of how QM works is, for this reason and for all the other reasons Lubos has delineated, profoundly wrong-headed.
11. That's not how Couder's experiments work. The pilot wave never dampens out, as there is energy coming into the system to keep the model alive. (Whole plate vibrates near the Faraday instability limit).
You thus get interference with one particle at a time.
12. There is no deeper understanding and there never will be. God does, indeed, throw dice. Get used to it.
I happen to be a physicist, Holger, and it is preposterous for you to tell me what I should or should not be satisfied with. Until you free your mind of deterministic thinking you will be forever lost in the woods.
Physics is precisely a set of rules that yield probabilities. If you don’t like it you can follow Feynman’s advice and go to another universe. This one is quantum mechanical to its core.
Ummm… that’s exactly my point.
Let me try the following: when in high school, I cut class one day to go see the then new John Hancock tower in Chicago (this was an epic cut, as I lived about 150 km from Chicago). From the observation deck on the 92nd floor, I could see waves on Lake Michigan impinging on two openings (slits) in the breakwater that defined the Chicago harbor. There was a beautiful two-slit interference pattern. Now if on shore I recorded the location of every bit of flotsam and jetsam (aka piece of crap) and found it correlated with the interference pattern of the waves, would that provide new insights into quantum mechanics? The only difference I can see in Couder’s experiments is that the droplets (aka piece of crap) are auto-generated by some process (undoubtedly non-linear, as Lubos has emphasized) that is connected to the “pilot waves”. But there is nothing fundamental going on here...
14. A: If QM were as easy as the pilot wave theory is making it out to be, then Niels Bohr is an idiot.
B: Niels Bohr was not an idiot.
Therefore pilot wave theory is not QM.
15. The pilot wave is an attempt to get around superposition. But how can we be sure that even though there is superposition, there isn't still a deterministic interaction with some otherwise undetected deterministic process that triggers the collapse? That would be meaningless speculation if there was no way in principle to observe it, but there could be ways. To probe space for such a process you would need an extremely large number of interaction in a small volume, but perhaps some observed anomalies could be interpreted as a result of such a model.
16. I get your well put point - to me put well by crucially including the word "sometimes".
Personally, I would have nothing to say/nothing to 'contribute' (am referring to a 'contribution' that tends to fall on deaf ears or be overwhelmingly refused, rejected, or recoiled from) if I were not focusing and betting on the tiny chance that aiming to explain a certain emergent evolution related aspect of What Is going on with words might be worthwhile.
Am referring to an aspect of 'what there is to recognize', one which is recognized with optimally percEPTive potency (not adaptive potency) utterly rarely because of how 'the law of quantum-level produced probabilities played out' {and will as a matter of principle play out in any universe similar enough to ours - i.e. ~ any that forges a phylogeny of fauna} in the form of a sub-principle of Natural Selection that is not much less simple and heuristic than Darwin's super-principle.
17. Dear Federico, our world is relativistic and quantum particles have to be described by the so-called quantum field theory - or anything that is a "specialized extension of it", I mean string theory.
And in quantum field theory or in those, the statements you quote are easily seen to be wrong. All of them. The particle-position basis isn't even well-defined in general and it is extremely general and non-fundamental.
And particles are in no way "the most compressed quantum waves" one may wave. Quite on the contrary, when we talk about particles that are as well-defined as possible, their wave function must be spread over much more than the Compton wavelength corresponding to the particle.
If you try to compress the "wave function" of a particle to distances shorter than that, you inevitably start to produce particle-antiparticle pairs and similar things. It's as far as you can get from the non-relativistic notion of an ordinary particle.
The particle is observed at a point not because the maximally compressed wave functions would be natural or "the best" or optimized in any sense - they're among the worst, most singular, most non-relativistic, most unlikely to be the right description. Particles are seen at points because the damn function has a probabilistic interpretation, it always has had, it always will have, and whoever tries to deny that this fact is established and demonstrable is completely confused about the basics of modern physics.
18. Dear Dilaton, it's true that "quantum mechanics" is often used for quantum laws where some natural variables only depend on time and not other continuous variables, i.e. for QFT in 0+1 dimensions, and I sometimes use this interpretation of "quantum mechanics" myself (e.g. "Matrix theory is a model of quantum mechanics").
However, in all these texts about the foundations of quantum mechanics, I use "quantum mechanics" in a much broader sense, as any theory respecting the general postulates of QM such as the linearity of the observables as operators acting on the Hilbert space, Born's rule, and so on. In this primary meaning of "quantum mechanics", any QFT in any dimension (and even string theory itself) is just a particular example of a quantum mechanical theory.
19. An excellent clarification, William, thank you!
The droplets just betray their being nothing else than a visualization of some features, not something that is supposed to be exactly equivalent to what it claims to model.
20. Excellent, William, and I had a similar experience except that the Hancock tower was in Boston, not Chicago, and it was a few days before 9/11 (and my thesis defense) when I visited it before the observatory got closed for years.
21. Jon, it often sounds that you are asking questions but you are never waiting for any answers.
There are answers to all your questions. We know that the "interaction that triggers the collapse" can't exist in the sense as a real process because such an interaction would have to act instantaneously, and it would therefore violate the laws of relativity.
You can phrase the very same thing "experimentally", too. If such an interaction existed, it would have consequences that would manifest themselves as the violation of the Lorentz symmetry, and we observe there aren't any.
22. Lubos, I perfectly agree with most points you have raised - I only suggest to take them a little further. The concept of emergence does not need to stop at the point which we (currently) regard "fundamental". In fact, t'Hooft has demonstrated with a toy model how quantum mechanical features could emerge as well from something sub-quantum. In his example, that sub-quantum regime was classical, but there is no reason to restrict ourselves to classical models, why should we. Point is: In history, scientists often believed that they had reached the bottom, just to find out that the well reached far deeper. The question of "why does the neutron decay now" does certainly not imply a return to any classical concepts. We ask why because we want to know, and one day we may know why these processes look random in our labs.
23. Dear Holger, emergence (you mean the process of finding deeper explanations) doesn't have to stop at the point where science is now.
But it cannot get reverted. The fundamental theories people had before the 20th century revolutions have been *falsified*, so they can never be resuscitated.
Non-relativistic theories can emerge as the limit 1/c goes to zero of relativistic theories. But one just can never revert this arrow and derive relativity from a non-relativistic theory because the non-relativistic theories are more special - corresponding to a particular special value of the parameter 1/c, namely zero (corresponding to no Lorentz contraction, no speed limit etc.) - and once it's shown that Nature doesn't live in this special subset of theories, it can never be unshown.
The situation of quantum mechanics is exactly analogous. Classical physics is a special, hbar goes to zero, limit or special case of quantum mechanics. Just like relativistic effects (e.g. contribution to Lorentz contraction etc.) and corrections scale like positive powers of 1/c, quantum effects - like the uncertainty of variables and the unavoidable probabilistic interpretation following from that uncertainty - scale like positive powers of hbar. It's been shown that Nature doesn't live in the hbar=0 subset or limit of the space of possible theories. It follows that this special subset has been falsified and it cannot be unfalsified.
Your bigotry and obsession with undoing the quantum revolution is exactly analogous to the people who hope that the right explanation of Earth's shape will be a flat Earth again, or that creationism is right and the apparent evolution is just an illusion emerging from the Truth of Creation.
It just isn't so and can't be so, OK? All the "possibilities" you propose have been proven impossible. You may have overlooked this subtle fact - you may have overlooked the 20th century in physics - but it's still there.
24. Marcel van Velzen, Jul 8, 2014, 9:41:00 AM
Yes it does. Don’t you even understand the Heisenberg uncertainty principle?
25. Dear Lubos,
I appreciate and respect your blog very much, but imo. you are not honest by writing: “quantum mechanics doesn't need (and doesn't allow) any additional "interpretations". You either understand the theory or you don't.”
You also know that different interpretations of the symmetric universe (splitting locally or not splitting at large distances by CP symmetry ) could be possible.
26. Exactly, Martin.
Things may be converted to the usual x-p uncertainty principle but the more subtle time-energy version of the uncertainty principle may also be applied if we do it right.
If we measure the energy of the initial and final products with accuracy "delta E" or better (smaller), then the unavoidable uncertainty "dt" in the time of the decay does obey the usual
dt * dE is greater than hbar/2.
If we want to determine the point of the spacetime where it decayed as accurately as possible, we use the speed of the final particles, so the speed uncertainty can't be too high, and that implies an uncertainty of the position and therefore time of the decay, too.
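(Editorial aside, a hedged numeric illustration rather than anything from the thread; the 880-second lifetime is an assumed round value.)

```python
# Natural line width implied by dE ~ hbar / (2 dt) for a free neutron.
hbar_eV_s = 6.582e-16        # hbar in eV*s
lifetime_s = 880.0           # approximate free-neutron mean lifetime (assumed value)
print(hbar_eV_s / (2 * lifetime_s))   # ~3.7e-19 eV, negligible next to the ~0.78 MeV decay energy
```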
27. OK, Marcel, then let me pass this "homework" to you: I have trapped my neutron inside a magnetic trap, within a volume of 1cm^3, at a temperature of 10^-3 Kelvin. How much does this trapping affect its lifetime?
You will find that the effects of Heisenberg's uncertainty can be conveniently neglected here. This was not my point anyway. I was asking about the why, and such a question necessarily perforates the framework of any currently known theory. Yet, I insist that science has to ask such questions in order to progress.
28. Ah well, the media. The actual researchers do not make any claims "this is the quantum mechanics" and alike, but rather point out some interesting similarities and promote further research. Putting the media campaign aside, the interesting mathematical description of this problem can be found here. arXiv:1401.4356v1 ... I would love to see your input on this paper, but it's kind of clear that you a) don't find it interesting at all b) your opinion may be biased before you even started reading the paper anyway, so I don't hold my hopes high. Actually the paper highlights some differences between QM and this experiment. On the contrary, various strictly quantum phenomena are being derived from the first principles, which is - in principle *putting shades on* - interesting. But the experiment itself is just an analogy or a visualization of some of the phenomena, nothing more. Even in the conclusion the authors claim that they consider this experiment to be useful as a teaching tool.
29. Marcel van Velzen, Jul 8, 2014, 12:09:00 PM
You were talking about the width of the decay of the neutron (not effects of its environment), that’s Heisenberg. Wanting to know exactly when a particular neutron decays is by definition a return to classical concepts.
30. Marcel van Velzen, Jul 8, 2014, 12:30:00 PM
Pretty basic stuff isn’t it :-)
31. Nope, the width of the decay would be well covered by QM, it is a statistical notion. I was talking about the time at which a single, individual neutron is going to decay. I didn't touch any of those matters like precision here - give it an uncertainty if you like. A subquantum theory may have such an uncertainty, a jbar (as opposed to hbar). It may be quite different from hbar. It may possibly be zero as well (unless t'Hooft's deterministic example is mathematically flawed, which I am not aware of).
32. Qm is for those who don't want to understand nature.
33. Right, I give it the uncertainty I like - it's the very point of mine. The uncertainty may be arbitrarily large in general because of the superposition principle.
And no, Nature only contains one hbar. It's the conversion between quantities like energy to quantities like frequency (E=hf). The same relationship is pretty much equivalent to the Heisenberg or Schrodinger equations of motion or the path integral which govern *everything* in Nature. That's also why we can set hbar=1 - it is unavoidably a universal constant.
Everyone who tries to deny this thing is a crank regardless of the number of Nobel prizes he or she may have received for great work done 40 years ago.
34. I approved your comment to highlight my democratic credentials, and I have only placed you on the blacklist because I don't know the protocol to send you to a gas chamber.
35. Marcel van Velzen, Jul 8, 2014, 1:23:00 PM
So neutrons have different initial conditions and for that reason decay at different times? Is that what you’re saying? How are you going to know the initial conditions of the neutrons without interfering? Remember the double slit experiment?
36. You said in post #1:
" waits until the “pilot wave” has damped"
It does not damp out.
No one thinks that there is anything but messy classical non linear physics going on in these experiments.
I think that you think its somehow cheating to have an energy source? The whole thing is lossy, as any experiment with waves on a fluid is. Its a puddle with drops, not QM.
37. Yes, different initial conditions. Surely nothing as simple as a "hidden classical variable" that has been forgotten to be implemented into the current framework of QM. It would have to be incorporated into another, more general theory which turns into QM in some of its limits. A very normal procedure, by the way. It would be very surprising if we were living in precisely that era in which all fundamental equations had just been laid out. But I have to point out that there exists no pressing need to extend the existing framework unless there exist obvious contradictions with experiments.
Just, it appears strange to me that current theories do not answer certain questions, instead yielding probabilities. It feels suspicious. And, no, it is no reason to find myself another universe. Just wondering what is going to come next, it may turn out to be more exciting than we think.
38. Marcel van Velzen, Jul 8, 2014, 2:33:00 PM
39. Obviously, such initial conditions do not show up within the framework of QM and hence do not affect the superposition of wavefunctions. Instead, they are most easily measured through the lifetime of the particle ;-)
40. Every aspect of quantum mechanics requiring understanding can be resolved by inventing new families of virtual particles that cannot be made empirical to be detected. This tells you that approach is wrong.
If science is not better than that, abandon it. Seven billion people's lives intimately arise from the most eldritch of technological subtleties. Shut the valve. Two billion survivors can reevaluate their philosophical position.
41. Toward the end you almost got it right. The experiment is just a visualization, nothing more; it has no deeper significance. I can do the same with chalk and a blackboard.
I do not agree that the similarities are interesting. They are just a coincidence.
42. Lubos understands it. You do not.
43. I don't understand the details, but in earlier posts you have said that inconsistent histories are eliminated when information arrives from separate locations, e.g. EPR. That seems to me to be enough to eliminate observing more than one particle in a dual slit experiment, without requiring faster than light processes. I apologize if it appears that I do not wait for your replies. I enjoy your blog quite a lot. I do admit that I am sometimes afraid of coming back to see harsh wording in your replies. If there is something wrong in my reasoning I would certainly like to correct it.
44. It's QM^{TM}
45. Particle as a point .. take a 2 cm wavelength hydrogen hyperfine transition. At Goldstone I saw a high-pass radiotelescope filter for about that wavelength: a stainless steel(?) disc, 50 cm diameter, 2 cm thick, with a honeycomb pattern of holes. It reflects lower frequencies, lets higher frequency photons through. And still that photon can change a state of exactly one hydrogen atom.
To complement it, take a 10-m mirror of the largest telescope. A single photon of a visible light gets reflected from a whole surface; limit the diameter to 1 m and the resolution gets 10x worse.
46. First, I know personally the guys involved and they really know fluid dynamics. The experiment and the theory are really interesting, and it's not only because of the similarities with QM (which I think are just similarities), but because they are ingenious and creative. Also, the phenomena investigated are closely related to interfacial phenomena, coalescence, lubrication theory, Faraday waves, hydrodynamical instabilities and other very relevant things (at least for the area) in nonlinear dynamics and chaos. The first experiments by Y. Couder didn't even mention the word "quantum".
I know it's the way Lubos expresses his thoughts and I particularly like it. However, considering this series of experiments just a bunch of crackpots playing with droplets and trying to disprove one of the most successful theories of physics isn't right.
47. Thanks for this voice.
Sex involving the transmission of sexually transmittable diseases also involves lubrication theory, droplet coalescence, when it's done in the shower, also hydrodynamic instabilities etc. etc. and the people participating in it may even know something about these things of classical physics and applied maths and think that they're quantum cool.
It doesn't imply that their act should be hyped as ingenious or a revolution in quantum physics. When QM is studied at some high or precise enough level, it simply has nothing to do with either of the two activities.
48. Lubos knows that his “democratic credentials” should also contain a choice between levels 1 to 4 of Max Tegmark’s multiverses or, as I believe, a combination of a local and a distant non-splitting mirror CP-symmetric multiverse, able to understand human and material
49. Gary Ehlenberger, Jul 10, 2014, 9:56:00 PM
Very, very nice discussion on: "Quantum mechanics doesn't really imply solipsism". What do you think of Bass's proof (only one consciousness, assuming QM)?
50. I've read about 1/3 of the paper - not a compact clump, but representatively. I don't know what to do with it. It seems to parrot lots of misunderstandings by Einstein, add tons of sociological comments about the difference between philosophers and physicists, but ultimately fails to say what is right and fails to understand what quantum mechanics - and Bohr - actually says about it, namely that the state vector is about the knowledge that is fundamentally subjective and there is therefore no contradiction at all if two observers use different state vectors.
51. kashyap vasavada, Jul 11, 2014, 3:31:00 PM
Thanks for pointing out Bass’ paper. This is an interesting work bordering on metaphysics and shows how Wigner’s friend paradox and singular nature of consciousness can be related. I do not know if Wigner himself believed until the end of his life that consciousness collapses wave function. It will be interesting to find out about this. For people like me, these are intriguing ideas which do not take anything out of the fantastic numerical success of QM. I can also see Luboš’ viewpoint that Copenhagen interpretation, mathematics of QM and super agreement with experiment are the only essential ideas.
52. Gary Ehlenberger, Jul 11, 2014, 5:58:00 PM
QBist metaphysics
53. Gary Ehlenberger, May 3, 2015, 7:53:00 PM
Check this paper out. |
22d67a0c6d41613a | A hydrogen atom is an atom of the chemical element hydrogen. In everyday life on Earth, isolated hydrogen atoms (called "atomic hydrogen") are extremely rare. Instead, a hydrogen atom tends to combine with other atoms in compounds, or with another hydrogen atom to form ordinary (diatomic) hydrogen gas, H2. If a hydrogen atom gains a second electron, it becomes an anion; the hydrogen anion is written as "H–" and called hydride. Protium is stable, while tritium contains two neutrons and one proton in its nucleus.
Experiments by Ernest Rutherford in 1909 showed the structure of the atom to be a dense, positive nucleus with a tenuous negative charge cloud around it. Classical electromagnetism had shown that any accelerating charge radiates energy, as shown by the Larmor formula. If this were true, all atoms would instantly collapse; however, atoms seem to be stable. Furthermore, the spiral inward would release a smear of electromagnetic frequencies as the orbit got smaller.
In 1913, Niels Bohr obtained the energy levels and spectral frequencies of the hydrogen atom after making a number of simple assumptions in order to correct the failed classical model. Bohr supposed that the electron's angular momentum is quantized, and that an electron can gain or lose energy only by jumping from one discrete orbit to another. Bohr's predictions matched experiments measuring the hydrogen spectral series to the first order, giving more confidence to a theory that used quantized values. There were still problems with Bohr's model: it failed to predict other spectral details such as the splitting of spectral lines into several components in the presence of a magnetic field (the Zeeman effect), and it could only predict energy levels with any accuracy for single-electron (hydrogen-like) atoms. Most of these shortcomings were resolved by Arnold Sommerfeld's modification of the Bohr model.
The Schrödinger equation allows one to calculate the stationary states and also the time evolution of quantum systems, and exact analytical answers are available for the nonrelativistic hydrogen atom. Expanding the Laplacian in spherical coordinates gives a separable partial differential equation which can be solved in terms of special functions; the energy eigenstates may be classified by angular momentum quantum numbers. The energy is expressed as a negative number because it takes that much energy to unbind (ionize) the electron from the nucleus; the maximum energy is the ionization energy of 13.598 eV. The Rydberg formula generalizes the Balmer series for all energy level transitions and explains the different energies of transition that occur between energy levels. The formulas are valid for all three isotopes of hydrogen, but slightly different values of the Rydberg constant must be used for each isotope, because the total (electron plus nuclear) kinetic energy is equivalent to the kinetic energy of the reduced mass moving with a velocity equal to the electron velocity relative to the nucleus. The energy levels of hydrogen, including fine structure (excluding Lamb shift and hyperfine structure), are given by the Sommerfeld fine structure expression. In 1928, Paul Dirac found an equation that was fully compatible with special relativity, and (as a consequence) made the wave function a 4-component "Dirac spinor" including "up" and "down" spin components, with both positive and "negative" energy (or matter and antimatter). In 1979 the (non-relativistic) hydrogen atom was solved for the first time within Feynman's path integral formulation. When there is more than one electron or nucleus the solution is not analytical, and either computer calculations are necessary or simplifying assumptions must be made. |
9a058de0e2d2259c | Physics LibreTexts
14.4: The Atom
You can learn a lot by taking a car engine apart, but you will have learned a lot more if you can put it all back together again and make it run. Half the job of reductionism is to break nature down into its smallest parts and understand the rules those parts obey. The second half is to show how those parts go together, and that is our goal in this chapter. We have seen how certain features of all atoms can be explained on a generic basis in terms of the properties of bound states, but this kind of argument clearly cannot tell us any details of the behavior of an atom or explain why one atom acts differently from another.
The biggest embarrassment for reductionists is that the job of putting things back together is usually much harder than taking them apart. Seventy years after the fundamentals of atomic physics were solved, it is only beginning to be possible to calculate accurately the properties of atoms that have many electrons. Systems consisting of many atoms are even harder. Supercomputer manufacturers point to the folding of large protein molecules as a process whose calculation is just barely feasible with their fastest machines. The goal of this chapter is to give a gentle and visually oriented guide to some of the simpler results about atoms.
Classifying States
We'll focus our attention first on the simplest atom, hydrogen, with one proton and one electron. We know in advance a little of what we should expect for the structure of this atom. Since the electron is bound to the proton by electrical forces, it should display a set of discrete energy states, each corresponding to a certain standing wave pattern. We need to understand what states there are and what their properties are.
What properties should we use to classify the states? The most sensible approach is to use conserved quantities. Energy is one conserved quantity, and we already know to expect each state to have a specific energy. It turns out, however, that energy alone is not sufficient. Different standing wave patterns of the atom can have the same energy.
Momentum is also a conserved quantity, but it is not particularly appropriate for classifying the states of the electron in a hydrogen atom. The reason is that the force between the electron and the proton results in the continual exchange of momentum between them. (Why wasn't this a problem for energy as well? Kinetic energy and momentum are related by \(K=p^2/2m\), so the much more massive proton never has very much kinetic energy. We are making an approximation by assuming all the kinetic energy is in the electron, but it is quite a good approximation.)
Angular momentum does help with classification. There is no transfer of angular momentum between the proton and the electron, since the force between them is a center-to-center force, producing no torque.
a / Eight wavelengths fit around this circle (\(\ell=8\)).
Like energy, angular momentum is quantized in quantum physics. As an example, consider a quantum wave-particle confined to a circle, like a wave in a circular moat surrounding a castle. A sine wave in such a “quantum moat” cannot have any old wavelength, because an integer number of wavelengths must fit around the circumference, \(C\), of the moat. The larger this integer is, the shorter the wavelength, and a shorter wavelength relates to greater momentum and angular momentum. Since this integer is related to angular momentum, we use the symbol \(\ell\) for it:
\[\begin{equation*} \lambda = C / \ell \end{equation*}\]
The angular momentum is
\[\begin{equation*} L = rp . \end{equation*}\]
Here, \(r=C/2\pi \), and \(p=h/\lambda=h\ell/C\), so
\[\begin{align*} L &= \frac{C}{2\pi}\cdot\frac{h\ell}{C} \\ &= \frac{h}{2\pi}\ell \end{align*}\]
In the example of the quantum moat, angular momentum is quantized in units of \(h/2\pi \). This makes \(h/2\pi \) a pretty important number, so we define the abbreviation \(\hbar=h/2\pi \). This symbol is read “h-bar.”
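If you want to see the \(L=\ell\hbar\) result emerge numerically rather than algebraically, here is a minimal sketch in Python (not part of the original text). The moat circumference \(C\) is an arbitrary assumed value; the result does not depend on it.

```python
# Minimal numeric sketch: angular momentum of a wave that fits an integer
# number of wavelengths around a circle of circumference C.
import math

h = 6.626e-34               # Planck's constant, J*s
hbar = h / (2 * math.pi)

C = 1.0e-9                  # assumed circumference of the "quantum moat", m
for ell in range(1, 9):
    lam = C / ell                     # an integer number of wavelengths fits around the circle
    p = h / lam                       # de Broglie momentum
    L = (C / (2 * math.pi)) * p       # L = r*p with r = C/(2*pi)
    print(ell, L / hbar)              # L/hbar comes out equal to ell every time
```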
In fact, this is a completely general fact in quantum physics, not just a fact about the quantum moat:
Quantization of angular momentum
The angular momentum of a particle due to its motion through space is quantized in units of \(\hbar\).
Exercise \(\PageIndex{1}\)
What is the angular momentum of the wavefunction shown at the beginning of the section?
Three dimensions
Our discussion of quantum-mechanical angular momentum has so far been limited to rotation in a plane, for which we can simply use positive and negative signs to indicate clockwise and counterclockwise directions of rotation. A hydrogen atom, however, is unavoidably three-dimensional. The classical treatment of angular momentum in three dimensions has been presented in section 4.3; in general, the angular momentum of a particle is defined as the vector cross product \(\mathbf{r}\times\mathbf{p}\).
There is a basic problem here: the angular momentum of the electron in a hydrogen atom depends on both its distance \(\mathbf{r}\) from the proton and its momentum \(\mathbf{p}\), so in order to know its angular momentum precisely it would seem we would need to know both its position and its momentum simultaneously with good accuracy. This, however, seems forbidden by the Heisenberg uncertainty principle.
Actually the uncertainty principle does place limits on what can be known about a particle's angular momentum vector, but it does not prevent us from knowing its magnitude as an exact integer multiple of \(\hbar\). The reason is that in three dimensions, there are really three separate uncertainty principles:
\[\begin{align*} \Delta p_x \Delta x &\gtrsim h \\ \Delta p_y \Delta y &\gtrsim h \\ \Delta p_z \Delta z &\gtrsim h \end{align*}\]
b / Reconciling the uncertainty principle with the definition of angular momentum.
Now consider a particle, b/1, that is moving along the \(x\) axis at position \(x\) and with momentum \(p_x\). We may not be able to know both \(x\) and \(p_x\) with unlimited accuracy, but we can still know the particle's angular momentum about the origin exactly: it is zero, because the particle is moving directly away from the origin.
Suppose, on the other hand, a particle finds itself, b/2, at a position \(x\) along the \(x\) axis, and it is moving parallel to the \(y\) axis with momentum \(p_y\). It has angular momentum \(xp_y\) about the \(z\) axis, and again we can know its angular momentum with unlimited accuracy, because the uncertainty principle only relates \(x\) to \(p_x\) and \(y\) to \(p_y\). It does not relate \(x\) to \(p_y\).
As shown by these examples, the uncertainty principle does not restrict the accuracy of our knowledge of angular momenta as severely as might be imagined. However, it does prevent us from knowing all three components of an angular momentum vector simultaneously. The most general statement about this is the following theorem, which we present without proof:
The angular momentum vector in quantum physics
The most that can be known about an angular momentum vector is its magnitude and one of its three vector components. Both are quantized in units of \(\hbar\).
c / A cross-section of a hydrogen wavefunction.
The hydrogen atom
Deriving the wavefunctions of the states of the hydrogen atom from first principles would be mathematically too complex for this book, but it's not hard to understand the logic behind such a wavefunction in visual terms. Consider the wavefunction from the beginning of the section, which is reproduced in figure c. Although the graph looks three-dimensional, it is really only a representation of the part of the wavefunction lying within a two-dimensional plane. The third (up-down) dimension of the plot represents the value of the wavefunction at a given point, not the third dimension of space. The plane chosen for the graph is the one perpendicular to the angular momentum vector.
Each ring of peaks and valleys has eight wavelengths going around in a circle, so this state has \(L=8\hbar\), i.e., we label it \(\ell=8\). The wavelength is shorter near the center, and this makes sense because when the electron is close to the nucleus it has a lower electrical energy, a higher kinetic energy, and a higher momentum.
Between each ring of peaks in this wavefunction is a nodal circle, i.e., a circle on which the wavefunction is zero. The full three-dimensional wavefunction has nodal spheres: a series of nested spherical surfaces on which it is zero. The number of radii at which nodes occur, including \(r=\infty\), is called \(n\), and \(n\) turns out to be closely related to energy. The ground state has \(n=1\) (a single node only at \(r=\infty\)), and higher-energy states have higher \(n\) values. There is a simple equation relating \(n\) to energy, which we will discuss in subsection 13.4.4.
d / The energy of a state in the hydrogen atom depends only on its \(n\) quantum number.
The numbers \(n\) and \(\ell\), which identify the state, are called its quantum numbers. A state of a given \(n\) and \(\ell\) can be oriented in a variety of directions in space. We might try to indicate the orientation using the three quantum numbers \(\ell_x=L_x/\hbar\), \(\ell_y=L_y/\hbar\), and \(\ell_z=L_z/\hbar\). But we have already seen that it is impossible to know all three of these simultaneously. To give the most complete possible description of a state, we choose an arbitrary axis, say the \(z\) axis, and label the state according to \(n\), \(\ell\), and \(\ell_z\).
Angular momentum requires motion, and motion implies kinetic energy. Thus it is not possible to have a given amount of angular momentum without having a certain amount of kinetic energy as well. Since energy relates to the \(n\) quantum number, this means that for a given \(n\) value there will be a maximum possible \(\ell\). It turns out that this maximum value of \(\ell\) equals \(n-1\).
In general, we can list the possible combinations of quantum numbers as follows:
\(n\) can equal 1, 2, 3, …
\(\ell\) can range from 0 to \(n-1\), in steps of 1
\(\ell_z\) can range from \(-\ell\) to \(\ell\), in steps of 1
Applying these rules, we have the following list of states:
\(n=1\), \(\ell=0\), \(\ell_z=0\): one state
\(n=2\), \(\ell=0\), \(\ell_z=0\): one state
\(n=2\), \(\ell=1\), \(\ell_z=-1\), 0, or 1: three states
Continue the list for \(n=3\).
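A minimal Python sketch (not part of the original text) that enumerates the allowed combinations by applying the rules above; it can be used to check your answer for \(n=3\).

```python
# Enumerate the allowed (n, l, lz) combinations for a given n.
def states(n):
    return [(n, l, lz) for l in range(0, n) for lz in range(-l, l + 1)]

for n in (1, 2, 3):
    s = states(n)
    print(f"n = {n}: {len(s)} states", s)
# n = 1: 1 state, n = 2: 4 states, n = 3: 9 states (i.e. n**2, not yet counting spin)
```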
Figure e on page 882 shows the lowest-energy states of the hydrogen atom. The left-hand column of graphs displays the wavefunctions in the \(x-y\) plane, and the right-hand column shows the probability distribution in a three-dimensional representation.
e / The three states of the hydrogen atom having the lowest energies.
Discussion Questions
◊ The quantum number \(n\) is defined as the number of radii at which the wavefunction is zero, including \(r=\infty\). Relate this to the features of the figures on the facing page.
◊ Based on the definition of \(n\), why can't there be any such thing as an \(n=0\) state?
◊ Relate the features of the wavefunction plots in figure e to the corresponding features of the probability distribution pictures.
◊ How can you tell from the wavefunction plots in figure e which ones have which angular momenta?
◊ Criticize the following incorrect statement: “The \(\ell=8\) wavefunction in figure c has a shorter wavelength in the center because in the center the electron is in a higher energy level.”
◊ Discuss the implications of the fact that the probability cloud of the \(n=2\), \(\ell=1\) state is split into two parts.
Energies of states in hydrogen
The experimental technique for measuring the energy levels of an atom accurately is spectroscopy: the study of the spectrum of light emitted (or absorbed) by the atom. Only photons with certain energies can be emitted or absorbed by a hydrogen atom, for example, since the amount of energy gained or lost by the atom must equal the difference in energy between the atom's initial and final states. Spectroscopy had become a highly developed art several decades before Einstein even proposed the photon, and the Swiss spectroscopist Johann Balmer determined in 1885 that there was a simple equation that gave all the wavelengths emitted by hydrogen. In modern terms, we think of the photon wavelengths merely as indirect evidence about the underlying energy levels of the atom, and we rework Balmer's result into an equation for these atomic energy levels:
\[\begin{equation*} E_n = -\frac{2.2\times10^{-18}\ \text{J}}{n^2} . \end{equation*}\]
This energy includes both the kinetic energy of the electron and the electrical energy. The zero-level of the electrical energy scale is chosen to be the energy of an electron and a proton that are infinitely far apart. With this choice, negative energies correspond to bound states and positive energies to unbound ones.
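As a quick check of this formula (a sketch, not part of the original text), the energy differences between levels can be converted to photon wavelengths; transitions down to \(n=2\) reproduce the visible Balmer lines to within the accuracy of the rounded constant.

```python
# Photon wavelengths for hydrogen transitions, using E_n = -(2.2e-18 J)/n**2
# from above and E_photon = h*c/lambda.
h = 6.626e-34      # J*s
c = 3.0e8          # m/s

def E(n):
    return -2.2e-18 / n**2   # J

for n_upper in (3, 4, 5):
    dE = E(n_upper) - E(2)          # transition down to n = 2 (Balmer series)
    lam = h * c / dE
    print(n_upper, "->", 2, f"{lam*1e9:.0f} nm")
# roughly 650, 480, 430 nm -- the visible Balmer lines
```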
Where does the mysterious numerical factor of \(2.2\times10^{-18}\ \text{J}\) come from? In 1913 the Danish theorist Niels Bohr realized that it was exactly numerically equal to a certain combination of fundamental physical constants:
\[\begin{equation*} E_n = -\frac{mk^2e^4}{2\hbar^2}\cdot\frac{1}{n^2} , \end{equation*}\]
where \(m\) is the mass of the electron, and \(k\) is the Coulomb force constant for electric forces.
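The claim is purely numerical, so it is easy to verify. The following sketch (not part of the original text) plugs in standard values of the constants.

```python
# Numeric check: m k^2 e^4 / (2 hbar^2) comes out close to 2.2e-18 J.
import math

m = 9.109e-31          # electron mass, kg
e = 1.602e-19          # elementary charge, C
k = 8.988e9            # Coulomb force constant, N*m^2/C^2
hbar = 6.626e-34 / (2 * math.pi)

E1 = m * k**2 * e**4 / (2 * hbar**2)
print(E1)              # about 2.18e-18 J
```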
Bohr was able to cook up a derivation of this equation based on the incomplete version of quantum physics that had been developed by that time, but his derivation is today mainly of historical interest. It assumes that the electron follows a circular path, whereas the whole concept of a path for a particle is considered meaningless in our more complete modern version of quantum physics. Although Bohr was able to produce the right equation for the energy levels, his model also gave various wrong results, such as predicting that the atom would be flat, and that the ground state would have \(\ell=1\) rather than the correct \(\ell=0\).
Approximate treatment
Rather than leaping straight into a full mathematical treatment, we'll start by looking for some physical insight, which will lead to an approximate argument that correctly reproduces the form of the Bohr equation.
A typical standing-wave pattern for the electron consists of a central oscillating area surrounded by a region in which the wavefunction tails off. As discussed in subsection 13.3.6, the oscillating type of pattern is typically encountered in the classically allowed region, while the tailing off occurs in the classically forbidden region where the electron has insufficient kinetic energy to penetrate according to classical physics. We use the symbol \(r\) for the radius of the spherical boundary between the classically allowed and classically forbidden regions. Classically, \(r\) would be the distance from the proton at which the electron would have to stop, turn around, and head back in.
If \(r\) had the same value for every standing-wave pattern, then we'd essentially be solving the particle-in-a-box problem in three dimensions, with the box being a spherical cavity. Consider the energy levels of the particle in a box compared to those of the hydrogen atom, f.
f / The energy levels of a particle in a box, contrasted with those of the hydrogen atom.
They're qualitatively different. The energy levels of the particle in a box get farther and farther apart as we go higher in energy, and this feature doesn't even depend on the details of whether the box is two-dimensional or three-dimensional, or its exact shape. The reason for the spreading is that the box is taken to be completely impenetrable, so its size, \(r\), is fixed. A wave pattern with \(n\) humps has a wavelength proportional to \(r/n\), and therefore a momentum proportional to \(n\), and an energy proportional to \(n^2\). In the hydrogen atom, however, the force keeping the electron bound isn't an infinite force encountered when it bounces off of a wall, it's the attractive electrical force from the nucleus. If we put more energy into the electron, it's like throwing a ball upward with a higher energy --- it will get farther out before coming back down. This means that in the hydrogen atom, we expect \(r\) to increase as we go to states of higher energy. This tends to keep the wavelengths of the high energy states from getting too short, reducing their kinetic energy. The closer and closer crowding of the energy levels in hydrogen also makes sense because we know that there is a certain energy that would be enough to make the electron escape completely, and therefore the sequence of bound states cannot extend above that energy.
When the electron is at the maximum classically allowed distance \(r\) from the proton, it has zero kinetic energy. Thus when the electron is at distance \(r\), its energy is purely electrical:
\[\begin{equation*} E = -\frac{ke^2}{r} \tag{1} \end{equation*}\]
Now comes the approximation. In reality, the electron's wavelength cannot be constant in the classically allowed region, but we pretend that it is. Since \(n\) is the number of nodes in the wavefunction, we can interpret it approximately as the number of wavelengths that fit across the diameter \(2r\). We are not even attempting a derivation that would produce all the correct numerical factors like 2 and \(\pi \) and so on, so we simply make the approximation
\[\begin{equation*} \lambda \sim \frac{r}{n} . \tag{2} \end{equation*}\]
Finally we assume that the typical kinetic energy of the electron is on the same order of magnitude as the absolute value of its total energy. (This is true to within a factor of two for a typical classical system like a planet in a circular orbit around the sun.) We then have
\[\begin{align*} \text{absolute value of total energy} &= \frac{ke^2}{r} \\ &\sim K = \frac{p^2}{2m} = \frac{(h/\lambda)^2}{2m} \\ &\sim \frac{h^2n^2}{2mr^2} \tag{3} \end{align*}\]
We now solve the equation \(ke^2/r \sim h^2n^2 / 2mr^2\) for \(r\) and throw away numerical factors we can't hope to have gotten right, yielding
\[\begin{equation*} r \sim \frac{h^2n^2}{mke^2} .\tag{4} \end{equation*}\]
Plugging \(n=1\) into this equation gives \(r=2\) nm, which is indeed on the right order of magnitude. Finally we combine equations (4) and (1) to find
\[\begin{equation*} E \sim -\frac{mk^2e^4}{h^2n^2} , \end{equation*}\]
which is correct except for the numerical factors we never aimed to find.
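A quick numerical check of the order-of-magnitude estimate (a sketch, not part of the original text):

```python
# Order-of-magnitude check of r ~ h^2 n^2 / (m k e^2) for n = 1.
# Numerical factors were deliberately dropped in the argument above.
h = 6.626e-34
m = 9.109e-31
e = 1.602e-19
k = 8.988e9

r = h**2 / (m * k * e**2)
print(r * 1e9, "nm")   # about 2 nm, as quoted in the text
```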
Exact treatment of the ground state
The general proof of the Bohr equation for all values of \(n\) is beyond the mathematical scope of this book, but it's fairly straightforward to verify it for a particular \(n\), especially given a lucky guess as to what functional form to try for the wavefunction. The form that works for the ground state is
\[\begin{equation*} \Psi = ue^{-r/a} , \end{equation*}\]
where \(r=\sqrt{x^2+y^2+z^2}\) is the electron's distance from the proton, and \(u\) provides for normalization. In the following, the result \(\partial r/\partial x=x/r\) comes in handy. Computing the partial derivatives that occur in the Laplacian, we obtain for the \(x\) term
\[\begin{align*} \frac{\partial\Psi}{\partial x} &= \frac{\partial \Psi}{\partial r}\,\frac{\partial r}{\partial x} = -\frac{x}{ar}\,\Psi \\ \frac{\partial^2\Psi}{\partial x^2} &= -\frac{1}{ar}\,\Psi - \frac{x}{a}\left(\frac{\partial}{\partial x}\frac{1}{r}\right)\Psi + \left(\frac{x}{ar}\right)^2\Psi \\ &= -\frac{1}{ar}\,\Psi + \frac{x^2}{ar^3}\,\Psi + \left(\frac{x}{ar}\right)^2\Psi , \end{align*}\]
so
\[\begin{equation*} \nabla^2\Psi = \left(-\frac{2}{ar} + \frac{1}{a^2}\right)\Psi . \end{equation*}\]
The Schrödinger equation gives
\[\begin{align*} E\cdot\Psi &= -\frac{\hbar^2}{2m}\nabla^2\Psi + U\cdot\Psi \\ &= \frac{\hbar^2}{2m}\left( \frac{2}{ar} - \frac{1}{a^2} \right)\Psi -\frac{ke^2}{r}\cdot\Psi \end{align*}\]
If we require this equation to hold for all \(r\), then we must have equality for both the terms of the form \((\text{constant})\times\Psi\) and for those of the form \((\text{constant}/r)\times\Psi\). That means
\[\begin{align*} E &= -\frac{\hbar^2}{2ma^2} \\ \text{and} \\ 0 &= \frac{\hbar^2}{mar} -\frac{ke^2}{r} . \end{align*}\]
These two equations can be solved for the unknowns \(a\) and \(E\), giving
\[\begin{align*} a &= \frac{\hbar^2}{mke^2} \\ \text{and}\\ E &= -\frac{mk^2e^4}{2\hbar^2} , \end{align*}\]
where the result for the energy agrees with the Bohr equation for \(n=1\). The calculation of the normalization constant \(u\) is relegated to homework problem 36.
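The same verification can be done symbolically. The sketch below (not part of the original text) uses the sympy library and the fact that the Laplacian of a spherically symmetric function is \(f''+2f'/r\), rather than repeating the Cartesian computation above.

```python
# Symbolic check that Psi = u*exp(-r/a) solves the Schrödinger equation with
# a = hbar^2/(m k e^2) and E = -m k^2 e^4/(2 hbar^2).
import sympy as sp

r, m, k, e, hbar, u = sp.symbols('r m k e hbar u', positive=True)
a = hbar**2 / (m * k * e**2)
E = -m * k**2 * e**4 / (2 * hbar**2)
Psi = u * sp.exp(-r / a)

laplacian = sp.diff(Psi, r, 2) + 2 / r * sp.diff(Psi, r)   # Laplacian of a spherically symmetric function
residual = -hbar**2 / (2 * m) * laplacian - k * e**2 / r * Psi - E * Psi
print(sp.simplify(residual))   # prints 0
```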
Exercise \(\PageIndex{1}\)
We've verified that the function \(\Psi = ue^{-r/a}\) is a solution to the Schrödinger equation, and yet it has a kink in it at \(r=0\). What's going on here? Didn't I argue before that kinks are unphysical?
Example \(\PageIndex{1}\): Wave phases in the hydrogen molecule
In example 16 on page 861, I argued that the existence of the \(\text{H}_2\) molecule could essentially be explained by a particle-in-a-box argument: the molecule is a bigger box than an individual atom, so each electron's wavelength can be longer, its kinetic energy lower. Now that we're in possession of a mathematical expression for the wavefunction of the hydrogen atom in its ground state, we can make this argument a little more rigorous and detailed. Suppose that two hydrogen atoms are in a relatively cool sample of monoatomic hydrogen gas. Because the gas is cool, we can assume that the atoms are in their ground states. Now suppose that the two atoms approach one another. Making use again of the assumption that the gas is cool, it is reasonable to imagine that the atoms approach one another slowly. Now the atoms come a little closer, but still far enough apart that the region between them is classically forbidden. Each electron can tunnel through this classically forbidden region, but the tunneling probability is small. Each one is now found with, say, 99% probability in its original home, but with 1% probability in the other nucleus. Each electron is now in a state consisting of a superposition of the ground state of its own atom with the ground state of the other atom. There are two peaks in the superposed wavefunction, but one is a much bigger peak than the other.
An interesting question now arises. What are the relative phases of the two electrons? As discussed on page 855, the absolute phase of an electron's wavefunction is not really a meaningful concept. Suppose atom A contains electron Alice, and B electron Bob. Just before the collision, Alice may have wondered, “Is my phase positive right now, or is it negative? But of course I shouldn't ask myself such silly questions,” she adds sheepishly.
g / Example 23.
But relative phases are well defined. As the two atoms draw closer and closer together, the tunneling probability rises, and eventually gets so high that each electron is spending essentially 50% of its time in each atom. It's now reasonable to imagine that either one of two possibilities could obtain. Alice's wavefunction could either look like g/1, with the two peaks in phase with one another, or it could look like g/2, with opposite phases. Because relative phases of wavefunctions are well defined, states 1 and 2 are physically distinguishable. In particular, the kinetic energy of state 2 is much higher; roughly speaking, it is like the two-hump wave pattern of the particle in a box, as opposed to 1, which looks roughly like the one-hump pattern with a much longer wavelength. Not only that, but an electron in state 1 has a large probability of being found in the central region, where it has a large negative electrical energy due to its interaction with both protons. State 2, on the other hand, has a low probability of existing in that region. Thus state 1 represents the true ground-state wavefunction of the \(\text{H}_2\) molecule, and putting both Alice and Bob in that state results in a lower energy than their total energy when separated, so the molecule is bound, and will not fly apart spontaneously.
State g/3, on the other hand, is not physically distinguishable from g/2, nor is g/4 from g/1. Alice may say to Bob, “Isn't it wonderful that we're in state 1 or 4? I love being stable like this.” But she knows it's not meaningful to ask herself at a given moment which state she's in, 1 or 4.
Discussion Questions
• States of hydrogen with \(n\) greater than about 10 are never observed in the sun. Why might this be?
• Sketch graphs of \(r\) and \(E\) versus \(n\) for the hydrogen, and compare with analogous graphs for the one-dimensional particle in a box.
Electron spin
It's disconcerting to the novice ping-pong player to encounter for the first time a more skilled player who can put spin on the ball. Even though you can't see that the ball is spinning, you can tell something is going on by the way it interacts with other objects in its environment. In the same way, we can tell from the way electrons interact with other things that they have an intrinsic spin of their own. Experiments show that even when an electron is not moving through space, it still has angular momentum amounting to \(\hbar/2\).
h / The top has angular momentum both because of the motion of its center of mass through space and due to its internal rotation. Electron spin is roughly analogous to the intrinsic spin of the top.
This may seem paradoxical because the quantum moat, for instance, gave only angular momenta that were integer multiples of \(\hbar\), not half-units, and I claimed that angular momentum was always quantized in units of \(\hbar\), not just in the case of the quantum moat. That whole discussion, however, assumed that the angular momentum would come from the motion of a particle through space. The \(\hbar/2\) angular momentum of the electron is simply a property of the particle, like its charge or its mass. It has nothing to do with whether the electron is moving or not, and it does not come from any internal motion within the electron. Nobody has ever succeeded in finding any internal structure inside the electron, and even if there was internal structure, it would be mathematically impossible for it to result in a half-unit of angular momentum.
We simply have to accept this \(\hbar/2\) angular momentum, called the “spin” of the electron --- Mother Nature rubs our noses in it as an observed fact.
Protons and neutrons have the same \(\hbar/2\) spin, while photons have an intrinsic spin of \(\hbar\). In general, half-integer spins are typical of material particles. Integral values are found for the particles that carry forces: photons, which embody the electric and magnetic fields of force, as well as the more exotic messengers of the nuclear and gravitational forces.
As was the case with ordinary angular momentum, we can describe spin angular momentum in terms of its magnitude, and its component along a given axis. We write \(s\) and \(s_z\) for these quantities, expressed in units of \(\hbar\), so an electron has \(s=1/2\) and \(s_z=+1/2\) or \(-1/2\).
Taking electron spin into account, we need a total of four quantum numbers to label a state of an electron in the hydrogen atom: \(n\), \(\ell\), \(\ell_z\), and \(s_z\). (We omit \(s\) because it always has the same value.) The symbols \(\ell\) and \(\ell_z\) include only the angular momentum the electron has because it is moving through space, not its spin angular momentum. The availability of two possible spin states of the electron leads to a doubling of the number of states (a short counting sketch follows the list below):
\(n=1\), \(\ell=0\), \(\ell_z=0\), \(s_z=+1/2\) or \(-1/2\): two states
\(n=2\), \(\ell=0\), \(\ell_z=0\), \(s_z=+1/2\) or \(-1/2\): two states
\(n=2\), \(\ell=1\), \(\ell_z=-1\), 0, or 1, \(s_z=+1/2\) or \(-1/2\): six states
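Here is the counting sketch referred to above (not part of the original text). Since each \((n,\ell,\ell_z)\) orbital holds two spin states, shell \(n\) holds \(2n^2\) states in all.

```python
# Count hydrogen states in shell n, including the two spin states per orbital.
def count_states(n):
    orbitals = sum(2 * l + 1 for l in range(n))   # = n**2
    return 2 * orbitals

for n in (1, 2, 3):
    print(n, count_states(n))   # 2, 8, 18
```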
A note about notation
There are unfortunately two inconsistent systems of notation for the quantum numbers we've been discussing. The notation I've been using is the one that is used in nuclear physics, but there is a different one that is used in atomic physics.
nuclear physics vs. atomic physics:
\(n\): same
\(\ell\): same
\(\ell_x\): no notation
\(\ell_y\): no notation
\(\ell_z\): \(m\)
\(s=1/2\): no notation (sometimes \(\sigma\))
\(s_x\): no notation
\(s_y\): no notation
\(s_z\): \(s\)
The nuclear physics notation is more logical (not giving special status to the \(z\) axis) and more memorable (\(\ell_z\) rather than the obscure \(m\)), which is why I use it consistently in this book, even though nearly all the applications we'll consider are atomic ones.
We are further encumbered with the following historically derived letter labels, which deserve to be eliminated in favor of the simpler numerical ones:
\(\ell=0\)   \(\ell=1\)   \(\ell=2\)   \(\ell=3\)
s p d f
K L M N O P Q
The spdf labels are used in both nuclear7 and atomic physics, while the KLMNOPQ letters are used only to refer to states of electrons.
And finally, there is a piece of notation that is good and useful, but which I simply haven't mentioned yet. The vector \(\mathbf{j}=\boldsymbol{\ell}+\mathbf{s}\) stands for the total angular momentum of a particle in units of \(\hbar\), including both orbital and spin parts. This quantum number turns out to be very useful in nuclear physics, because nuclear forces tend to exchange orbital and spin angular momentum, so a given energy level often contains a mixture of \(\ell\) and \(s\) values, while remaining fairly pure in terms of \(j\).
13.4.6 Atoms with more than one electron
What about other atoms besides hydrogen? It would seem that things would get much more complex with the addition of a second electron. A hydrogen atom only has one particle that moves around much, since the nucleus is so heavy and nearly immobile. Helium, with two, would be a mess. Instead of a wavefunction whose square tells us the probability of finding a single electron at any given location in space, a helium atom would need to have a wavefunction whose square would tell us the probability of finding two electrons at any given combination of points. Ouch! In addition, we would have the extra complication of the electrical interaction between the two electrons, rather than being able to imagine everything in terms of an electron moving in a static field of force created by the nucleus alone.
Despite all this, it turns out that we can get a surprisingly good description of many-electron atoms simply by assuming the electrons can occupy the same standing-wave patterns that exist in a hydrogen atom. The ground state of helium, for example, would have both electrons in states that are very similar to the \(n=1\) states of hydrogen. The second-lowest-energy state of helium would have one electron in an \(n=1\) state, and the other in an \(n=2\) state. The relatively complex spectra of elements heavier than hydrogen can be understood as arising from the great number of possible combinations of states for the electrons.
A surprising thing happens, however, with lithium, the three-electron atom. We would expect the ground state of this atom to be one in which all three electrons settle down into \(n=1\) states. What really happens is that two electrons go into \(n=1\) states, but the third stays up in an \(n=2\) state. This is a consequence of a new principle of physics:
The Pauli Exclusion Principle
Only one electron can ever occupy a given state.
There are two \(n=1\) states, one with \(s_z=+1/2\) and one with \(s_z=-1/2\), but there is no third \(n=1\) state for lithium's third electron to occupy, so it is forced to go into an \(n=2\) state.
It can be proved mathematically that the Pauli exclusion principle applies to any type of particle that has half-integer spin. Thus two neutrons can never occupy the same state, and likewise for two protons. Photons, however, are immune to the exclusion principle because their spin is an integer.
Deriving the periodic table
i / The beginning of the periodic table.
We can now account for the structure of the periodic table, which seemed so mysterious even to its inventor Mendeleev. The first row consists of atoms with electrons only in the \(n=1\) states:
H: 1 electron in an \(n=1\) state
He: 2 electrons in the two \(n=1\) states
The next row is built by filling the \(n=2\) energy levels:
Li: 2 electrons in \(n=1\) states, 1 electron in an \(n=2\) state
Be: 2 electrons in \(n=1\) states, 2 electrons in \(n=2\) states
…
O: 2 electrons in \(n=1\) states, 6 electrons in \(n=2\) states
F: 2 electrons in \(n=1\) states, 7 electrons in \(n=2\) states
Ne: 2 electrons in \(n=1\) states, 8 electrons in \(n=2\) states
In the third row we start in on the \(n=3\) levels (a short filling sketch follows this list):
Na: 2 electrons in \(n=1\) states, 8 electrons in \(n=2\) states, 1 electron in an \(n=3\) state
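Here is the counting sketch referred to above (not part of the original text). It fills shells in order of increasing \(n\), giving shell \(n\) a capacity of \(2n^2\) electrons; this naive filling order reproduces the occupations listed above for the first eleven elements, though it is not adequate for heavier atoms.

```python
# Fill hydrogen-like shells in order of n, with shell n holding 2*n**2 electrons,
# and print the occupation for the first eleven elements (H through Na).
def shell_filling(num_electrons, max_n=3):
    capacity = {n: 2 * n**2 for n in range(1, max_n + 1)}
    occupation = {}
    remaining = num_electrons
    for n in range(1, max_n + 1):
        take = min(remaining, capacity[n])
        if take:
            occupation[n] = take
        remaining -= take
    return occupation

for Z in range(1, 12):
    print(Z, shell_filling(Z))
```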
We can now see a logical link between the filling of the energy levels and the structure of the periodic table. Column 0, for example, consists of atoms with the right number of electrons to fill all the available states up to a certain value of \(n\). Column I contains atoms like lithium that have just one electron more than that.
This shows that the columns relate to the filling of energy levels, but why does that have anything to do with chemistry? Why, for example, are the elements in columns I and VII dangerously reactive?
j / Hydrogen is highly reactive.
Consider, for example, the element sodium (Na), which is so reactive that it may burst into flames when exposed to air. The electron in the \(n=3\) state has an unusually high energy. If we let a sodium atom come in contact with an oxygen atom, energy can be released by transferring the \(n=3\) electron from the sodium to one of the vacant lower-energy \(n=2\) states in the oxygen. This energy is transformed into heat. Any atom in column I is highly reactive for the same reason: it can release energy by giving away the electron that has an unusually high energy.
Column VII is spectacularly reactive for the opposite reason: these atoms have a single vacancy in a low-energy state, so energy is released when these atoms steal an electron from another atom.
It might seem as though these arguments would only explain reactions of atoms that are in different rows of the periodic table, because only in these reactions can a transferred electron move from a higher-\(n\) state to a lower-\(n\) state. This is incorrect. An \(n=2\) electron in fluorine (F), for example, would have a different energy than an \(n=2\) electron in lithium (Li), due to the different number of protons and electrons with which it is interacting. Roughly speaking, the \(n=2\) electron in fluorine is more tightly bound (lower in energy) because of the larger number of protons attracting it. The effect of the increased number of attracting protons is only partly counteracted by the increase in the number of repelling electrons, because the forces exerted on an electron by the other electrons are in many different directions and cancel out partially.
|
217dfbeb177ea774 | Effective photon-photon interaction in a
two-dimensional “photon fluid”
R.Y. Chiao, T.H. Hansson, J.M. Leinaas and S. Viefers. Department of Physics, University of California at Berkeley, Berkeley, CA 94720-7300, USA; Stockholms universitet, AlbaNova universitetscentrum, Fysikum, SE-106 91 Stockholm, Sweden; Department of Physics, University of Oslo, P.O. Box 1048 Blindern, 0316 Oslo, Norway
July 11, 2003
We formulate an effective theory for the atom-mediated photon-photon interactions in a two-dimensional “photon fluid” confined in a Fabry-Perot resonator. With the atoms modelled by a collection of anharmonic Lorentz oscillators, the effective interaction is evaluated to second order in the coupling constant (the anharmonicity parameter). The interaction has the form of a renormalized two-dimensional delta-function potential, with the renormalization scale determined by the physical parameters of the system, such as density of atoms and the detuning of the photons relative to the resonance frequency of the atoms. For realistic values of the parameters, the perturbation series has to be resummed, and the effective interaction becomes independent of the “bare” strength of the anharmonic term. The resulting expression for the non-linear Kerr susceptibility, is parametrically equal to the one found earlier for a dilute gas of two-level atoms. Using our result for the effective interaction parameter, we derive conditions for the formation of a photon fluid, both for Rydberg atoms in a microwave cavity and for alkali atoms in an optical cavity.
I Introduction
Quantum physics in two dimensions has many interesting features which give rise to effects that cannot be seen in three-dimensional systems. One of the most interesting two-dimensional effects is the formation of incompressible electron fluids that characterize the plateau states of the quantum Hall effect Girvin90. Also high temperature superconductivity is believed to be essentially a two-dimensional effect.
The interest in the physics of low-dimensional systems has motivated both theoretical and experimental searches for new kinds of two-dimensional many-body systems. In the case of weakly interacting Bose-condensed atomic gases, two-dimensionality can be reached in highly asymmetric traps asymmetric , and quantum states similar to the quantum Hall states have been predicted for such systems when in rapid rotation BEC .
Another idea that has been advocated by one of us chiao1 ; Chiao2000 is that photons also, under specific conditions in photonic traps, can form a two-dimensional system of weakly interacting particles with an effective mass determined by the (fixed) momentum in the suppressed dimension. Such a photon gas can in principle undergo phase transitions, much like a cold atomic gas, and can in a condensed phase sustain vortices and sound excitations, in a manner similar to that of an ordinary superfluid.
This picture of the photons as a two-dimensional fluid has been based on the (effective) Maxwell theory of electromagnetic waves in a non-linear medium where only one longitudinal mode inside a cavity is excited by an incoming laser beam. The corresponding mean field equation has the same form as the Gross-Pitaevskii equation, or the non-linear Schrödinger equation with a quartic non-linearity, and when coupled to an external driving field it has been referred to as the Lugiato-Lefever (LL) equation. The LL equation has been used to discuss the appearance of transverse patterns in the light trapped inside Fabry-Perot and ring cavities Lugiato87.
The LL equation is a non-linear classical field equation, but it can also be interpreted as a quantum field theory with the electromagnetic field as an operator field. The non-linear term is then viewed as a short range (\(\delta\)-function) photon-photon interaction. This interpretation is the basis for the photon fluid idea, and it has implications beyond the classical non-linear optics description.
However, the interpretation of the non-linear field equation as a quantum theory raises several questions. One has to do with the dimensional reduction itself. When only the fundamental longitudinal mode is excited there is clearly an effective reduction of dimension, since the dynamics is restricted to the two transverse directions. This corresponds to the situation where the cavity is small, with a length in the longitudinal direction of the order of half a wave length. In the optical regime such a resonator is extremely small, and even if it can be made in principle, a simpler realization of a small resonator is in the microwave regime in conjunction with Rydberg atoms which can couple strongly to the microwave photons. For the cavities that are presently used in laser experiments the longitudinal mode is highly excited relative to the fundamental mode, and in this case two-dimensionality is obtained only as long as the scattering to other longitudinal modes can be neglected.
Another question concerns the photon-photon scattering chiao2 . A two-dimensional delta-function interaction is only well defined to lowest order in perturbation theory, and in a full quantum description such a short range interaction is meaningful only as a renormalized interaction. This implies that the scattering amplitude is determined by a renormalization length in addition to the interaction strength. In the effective photon theory this is not a free parameter, but should be determined by the full microscopic theory of the photons interacting with the atoms of the non-linear medium.
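For orientation, the standard renormalization pattern for a two-dimensional contact interaction can be written schematically as
\[ T(E) \;\sim\; \frac{4\pi\hbar^2/m}{\ln(E_*/E)} , \qquad E_* \sim \frac{\hbar^2}{m a_*^2} , \]
where the scale \(E_*\) and the length \(a_*\) are illustrative symbols, not the notation of this paper, and numerical factors and the imaginary part of the amplitude are suppressed. The point is only that the low-energy scattering amplitude is fixed by a renormalization length (or energy scale) rather than by a bare coupling alone; how this scale follows from the microscopic atom-photon theory is the question taken up in the following sections.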
In this work we will address the first question simply by assuming that only one longitudinal mode is excited. Our main objective will then be to examine the photon-photon interaction from a microscopic point of view. Of particular interest is to examine in what sense the effective interaction can be interpreted as a delta function interaction and to determine how the renormalized strength of the interaction depends on the physical parameters of the system.
The approach we will take is to derive the effective photon theory from the full quantum theory of the electromagnetic field and the non-linear medium, rather than from a macroscopic description of the electromagnetic field. However, we will use the simplified model of the atoms in the medium as a collection of Lorentz oscillators supplemented by a quartic oscillator term to account for the non-linearity Boyd. At the quantum level, the linear Lorentz oscillator model yields “polaritons” as the coupled atom-photon degrees of freedom, as shown by Hopfield and others Hopfield58 ; Huttner92.
In the next section (2) we use the Feynman path-integral method to find an expression for the interaction between the physical modes of the coupled photon-oscillator system in terms of an effective photon action. In Section 3 we consider the effective theory for the low-lying transverse momentum modes in a cavity where only the lowest longitudinal mode is excited, and derive the corresponding two-dimensional low-energy effective action. In the following section (4), we summarize the question of how to correctly describe the renormalized delta function interaction in two dimensions. Then in Section 5, we relate this to an evaluation of quantum corrections to the effective interaction (to second order in the coupling parameter) and determine the leading logarithmic corrections to the scattering amplitude. In Section 6 we summarize the physical scales and discuss the conditions under which interesting quantum phenomena like Bose-Einstein condensation and the formation of two-photon bound states may take place. In section 7 we examine two possible scenarios for experimental realizations of a 2D photon fluid. Based on order of magnitude estimates, we discuss under what conditions a photon fluid in thermal equilibrium may form for millimeter-wave photons interacting with Rydberg atoms and for optical photons interacting with alkali atoms. Both cases might offer possibilities to observe genuine quantum effects in an interacting photon gas, although our estimates indicate severe constraints on the physical parameters. We close this section by discussing a third possible scenario, in which a 2D photon fluid forms just inside the surface of a high-Q microsphere of glass. Concluding remarks are found in Section 8.
II The effective photon action
For photons interacting off resonance with the atoms (i.e., the oscillators), the atoms have two types of effects on the scattered photons. There is a linear effect, where the elastic scattering off the atoms changes the dispersion of the photons. For photons with low transverse momentum in the cavity, this leads to a renormalization of the effective mass of the photons, which without the atoms is determined by the (fixed) longitudinal photon momentum , as . The other effect is the non-linear or anharmonic effect of the photon-atom interaction, which gives rise to the effective photon-photon interaction.
There are several ways to derive the effective photon action from the full quantum theory of photons and oscillators, all based on the assumption that the non-linear term is small and can be treated as a perturbation. One way is to solve, as the first step, the linear part of the problem exactly by diagonalizing the Hamiltonian. As the next step the non-linear terms can be expressed in terms of the transformed, decoupled variables; in this way the resonance problems, which would appear in a more direct perturbative treatment, are avoided. However, another simpler approach which we will adopt here, is to derive the effective action of the electromagnetic field by use of the Feynman path integral method, where a (non-local) field transformation yields the result without any matrix diagonalization. To check the result of this method we have also performed, in Appendix A, a decoupling of the linear variables by matrix diagonalization, and show that this can be done in a way that is substantially simpler than the standard method.
In this section we do not impose the cavity boundary conditions, which later will be used as constraints on the effective photon modes in order to derive a dimensionally reduced theory.
We start from the total classical action
which is the sum of a free photon part , a matter part , for the atoms, and an interaction which describes their coupling to the radiation. describes the atomic degrees of freedom, which here are given as the displacement vectors of a discrete set of oscillators labeled by . In the following we will put .
The free photon part, , is given by the Maxwell term
where and with satisfying the Coulomb gauge condition . The oscillator part includes an anharmonic term,
and the atoms are coupled to the electromagnetic field via a dipole interaction,
Here is the spatial position of the i:th oscillator, and we shall furthermore assume that the atoms are uniformly distributed in space with a number density , so that discrete sums can be replaced by integrals over a continuous position vector . Since only the transverse part of couples to the photon field, it is consistent to neglect the longitudinal part and impose a Coulomb “gauge” condition also on the oscillator, i.e., .
The effective photon action is defined by the following (path) integral over the oscillator variables
To perform the integrations, we go to Fourier space and expand to lowest order in ,
The first term is evaluated directly by completing the square and making the shift
, with as the retarded propagator,
It yields the quadratic part of the effective action,
where in this expression we have taken the continuum limit and also introduced the effective plasma frequency .
The quartic term in (II) is most easily evaluated by again performing the shift in , to give dependent terms of the form , ) and . The term can be directly re-exponentiated and gives a quartic contribution to the effective action, which in the continuum limit is
The terms proportional to ) and can be evaluated using the formula,
where is a field-independent normalization factor. They give, in principle, a correction term to the quadratic action, but due to the integration over this correction term vanishes.
Note that the expansion and re-exponentiation of the non-linear term will generate correction terms, but these are higher order in and will be neglected. Thus, to first order in , the effective action is given by the quadratic part (8) and the quartic term (9).
The effective action defined by (8) and (9) corresponds to a Lagrangian that is non-local in time. However by a further transformation it can be brought into a local form. We first note that the quadratic part of the action defines a modified dispersion equation
with solutions
This equation defines the dispersion of the “polaritons”, i.e., the two decoupled degrees of freedom of the linear problem which mixes the photon and dipole variables. For represents essentially the photon mode and the dipole mode, whereas for the interpretation of the two modes is reversed. In the intermediate interval with the photon and the dipole modes are strongly mixed.
The following field transformation is now applied
and this gives for the quadratic part of the action
The dependence on shows that the transformed action corresponds to a Lagrangian that is local in time. The non-locality in time has been traded for a non-locality in space, but this is less problematic in a Lagrangian formulation. Note, however, the ambiguity in the transformations (13), depending on which one of the solutions we choose. Clearly, the relevant choice is the one which fits the energy of the photons in the effective theory. This means for the case of red detuning (energy below ) and for blue detuning (energy above ).
When the transformed field is introduced in the quartic part of the effective action we make a further simplification by assuming that the fields satisfy the dispersion equation of the linear problem. This allows the following substitution
where . For the interaction part of the action this gives the following expression,
The application of the dispersion equation to the field variables of the interaction term can be justified when this term is used perturbatively, with the fields satisfying the field equation of the unperturbed system. However, it is interesting to note that the expression (16) in fact is valid beyond this approximation, as is demonstrated by the diagonalization of the quadratic problem performed in Appendix A.
III Dimensional reduction and the effective 2D theory
Due to the boundary conditions imposed by the mirrors, the component of the photon momentum normal to the mirrors (the longitudinal momentum) is quantized at discrete values. We assume an idealized situation with infinite flat mirrors, thus the longitudinal momenta are quantized as , with as the distance between the mirrors and as an integer. We also assume photons to be fed to the cavity by a laser (or maser) tuned close to resonance with one of the modes, either slightly below (red detuning), or slightly above the resonance (blue detuning). However, we do not take the effect of photons entering or departing the cavity explicitly into account, and in this sense we consider an idealized situation with perfectly reflecting mirrors. All the photons inside the cavity are assumed to be trapped in the same longitudinal mode, and throughout the paper we will assume this to be the lowest mode .
We assume the transverse components of the photon momentum to be restricted to small values, . The dispersion of free (non-interacting) photons inside the Fabry-Perot resonator then becomes essentially that of 2D massive, non-relativistic particles chiao1 ,
with the longitudinal momentum playing the role of the photon mass. The dimensional reduction is then based on the assumption that only one longitudinal mode (the lowest) is excited, and that scattering to other modes can be neglected. We should stress that this does not mean that higher modes are not important as virtual states in the perturbative expansion - in fact they are.
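For the uncoupled cavity photon this reduction is the standard one. A minimal sketch in our own notation (the paper's symbols are lost in extraction): with the longitudinal momentum fixed at \(q = \pi/L\) and \(k_\perp \ll q\),
\[
\omega(\mathbf{k}_\perp) \;=\; c\sqrt{q^{2} + k_\perp^{2}} \;\simeq\; cq + \frac{c\,k_\perp^{2}}{2q}
\;=\; \frac{m_{\mathrm{ph}} c^{2}}{\hbar} + \frac{\hbar k_\perp^{2}}{2 m_{\mathrm{ph}}},
\qquad m_{\mathrm{ph}} \equiv \frac{\hbar q}{c},
\]
so the transverse dynamics is that of a non-relativistic particle of mass \(m_{\mathrm{ph}}\).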
For simplicity we shall in the following refer to the transverse momentum simply as and the longitudinal momentum as .
When the coupling between the photons and the oscillators is taken into account, the dispersion equation is given by (12), and the relation between and is no longer so simple. In Fig.1a and are shown as functions of the transverse momentum for red detuning, which is the case we will first consider. It displays how for small momenta, corresponds to the “photon branch” with quadratic dependence on , while for large momenta the photon branch is represented by . In the following we will simply refer to excitations of this mode as “photons” and the other one as “dipoles”. Due to the mixing there is an avoided level crossing at intermediate momenta, where there is no clear distinction between the photon and the dipole mode.
Figure 1: Dispersion curves for the photon like and dipole like excitations. Fig 1a shows the curves for red detuning (), where the photon branch corresponds to the curve for low momentum and to for high momentum. Fig 1b shows the curves for blue detuning (), where the photon branch corresponds to for all momenta. The curves are shown in dimensionless units with . The parameter values are and .
For the case of interacting photons, just as for non-interacting photons, a low-momentum description can be made where the photons appear as non-relativistic, massive particles. Thus, when the photon frequency is separated from the resonance frequency of the oscillator mode by a detuning gap , and the transverse momentum is restricted by
then the previous expressions for and ((8) and (9)) define a low momentum effective action for the photons, with a dispersion relation of a non-relativistic form
In the following we shall in addition assume weak coupling between the photons and dipoles, in the sense
The effective photon mass is then given by
with a small renormalization of the mass due to the interaction with the dipole field. Note that the weak coupling condition (20) is not essential for the non-relativistic description of the 2D photons, but is introduced to simplify the calculations. In physical realizations of the 2D photon gas one may also have to consider the case of strong mixing of the photon and dipole degrees of freedom, as discussed in the section on Rydberg atoms below.
With the longitudinal momentum fixed to and approximated by , the field variable can be written as,
where are the polarization vectors and are the corresponding field components, which now only depend on the transverse momentum . Note that both the frequency and the momentum of the longitudinal mode have been extracted from in order to express this as a slowly varying field.
When the assumptions about small transverse momentum and weak coupling are imposed, the quadratic part of the effective action gets the form (to order ),
Here we have neglected terms proportional to (slowly-varying field approximation), and is now the two-dimensional gradient. Note that the two polarization directions appear as two species of particles. The action has the standard form of a non-relativistic, free field theory.
We will now consider the interaction term. In the same approximation as used above we have
If this -independent expression is used in the interaction part of the action, (9), and the field is expressed in terms of , it simplifies to
where is the (bare) interaction strength given by
The first term of (25) can be interpreted as a delta function interaction between photons with the same helicity, the second between photons of opposite helicities. In the simplest case, with only one type of photon polarization (), the corresponding interaction Lagrangian simplifies to
Thus, with the approximations used, we reach a form of the effective photon Lagrangian which agrees with the nonlinear Schrödinger equation previously derived from the classical field theory of a dimensionally reduced Maxwell field interacting with a non-linear medium Akhmanov . The photons behave like 2D massive particles with a repulsive, pointlike interaction.
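Written in generic notation (an illustrative form; the paper's own coefficients are those appearing in (26) and (27)), such an effective theory is of the familiar nonlinear Schrödinger (Gross-Pitaevskii) type:
\[
\mathcal{L} \;=\; i\hbar\,\psi^{*}\partial_t\psi \;-\; \frac{\hbar^{2}}{2 m_{\mathrm{ph}}}\,|\nabla_\perp \psi|^{2} \;-\; \frac{g}{2}\,|\psi|^{4},
\qquad
i\hbar\,\partial_t \psi \;=\; -\frac{\hbar^{2}}{2 m_{\mathrm{ph}}}\nabla_\perp^{2}\psi + g\,|\psi|^{2}\psi ,
\]
with \(g > 0\) corresponding to the repulsive, pointlike photon-photon interaction.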
In the case of blue detuning, i.e., , the situation is quite different. Typical dispersion curves are shown in Fig.1b, and we see that now the branch is photon-like both for small and large momenta, and there is no avoided level crossing, but only a level repulsion at small momenta.
However, also for blue detuning a low momentum effective action can be given where all the above formulas, derived for red detuning, still hold for scattering between the photons. But an important change is that the photons are now the “high energy” particles relative to the dipoles, and it is energetically possible for them to “decay” into the low lying dipole modes. At tree level (to first order in the interaction) the dangerous process is which for blue detuning can conserve both energy and momentum. The corresponding interaction piece in the Lagrangian can again be read from (16), but now with one of the fields as (dipole field) and the three others as (photon field). In the low momentum approximation it is
where is the photon field and is the dipole field.
An important quantity for assessing whether a gas of blue-detuned photons can be maintained in the cavity is the ratio between the cross section of the above (‘inelastic’) decay process and that of the normal (‘elastic’) scattering induced by the interaction (26). A straightforward calculation yields,
where is the momentum in the center of mass. Thus, demanding , implies the condition
We shall return to numerical estimates of the physical parameters in section 7.
We conclude this section with some comments on Galilean invariance. The effective action, given by (8) and (9), is neither Lorentz nor Galilean invariant, since the relativistic photons are coupled to dipoles defined in a fixed frame. Nevertheless, the effective theory defined by (23) and (25) is Galilean invariant due to the low momentum approximations made in the vertices. In the next section we will consider loop effects where the low momentum approximations are no longer valid, and it is far from obvious that the resulting corrections to the effective theory will respect the Galilean invariance. As we shall see, however, the leading corrections do have this symmetry, so the interpretation of the photons as a nonrelativistic Bose system has validity beyond the Born approximation.
IV. Renormalization of the δ-function interaction
In a many-particle interpretation the interaction Lagrangian (27) corresponds to a delta-function potential
where is the two-particle relative position. However, it is well known that a pure delta function interaction in dimensions higher than one is not well defined beyond first order in perturbation theory. In two dimensions the second order term gives rise to a logarithmic divergence in the scattering amplitude. To make the delta function interaction meaningful requires regularization of the interaction and renormalization of the interaction strength. As discussed in ref. jackiw1, the form of the s-wave scattering amplitude for such a renormalized interaction is
where is the renormalized coupling constant and is a new parameter that is introduced by the renormalization (the renormalization scale). The corresponding phase shift for small is given by , which is the Born approximation value when . For small values of it approaches the universal expression, , that is common for a large class of short range potentials chadan98 .
Formally, the delta function interaction in two dimensions is dimensionless, i.e., it scales as the kinetic energy. However, the renormalization breaks the scaling symmetry and introduces a length scale through the parameter . This is similar to the situation in QCD, where the effect is referred to as dimensional transmutation thorn . One should note that the two parameters and are not independent. Thus, may be fixed as the bare parameter and all the effect of the renormalization may be absorbed in , or may be viewed as depending on , where is chosen to match the physical momentum interval. In the latter case is referred to as an effective (or running) coupling constant. The explicit dependence on is given by
with as a constant. From this expression we notice that for large momenta (large ) the effective coupling constant goes to zero, so this is a quantum mechanical analogue of the asymptotic freedom of QCD. Note also the curious fact that for sufficiently large the effective interaction is always attractive, irrespective of the sign of the bare parameter , whereas in the other limit it is repulsive.
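The qualitative behaviour described here (conventions and prefactors differ between treatments, and the paper's own expression is not recoverable from the extracted text) is captured by a running coupling of the form
\[
g_{\mathrm{eff}}(q) \;\simeq\; \frac{C}{\ln(q_{*}/q)}, \qquad C \sim \frac{\hbar^{2}}{m},
\]
with \(q_{*}\) a constant fixed by the renormalization: for \(q \ll q_{*}\) the coupling is repulsive, for \(q > q_{*}\) it changes sign and becomes attractive, and it flows logarithmically to zero at large momenta.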
The above discussion refers to a situation where the theory is treated as a fundamental theory where and are free parameters, to be determined by experiment. However, treated as an effective (low-energy) theory, they are in principle determined by the physical parameters of the complete system. For example, in the case of a quasi two-dimensional atomic Bose gas in a highly asymmetric trap, the renormalization scale of the two-dimensional theory is essentially given by the extension of the trap perpendicular to the plane in which the atoms move petrov ; hansson1 .
In the present case the renormalized interaction strength may be determined by taking more explicitly into account the effect of the dipole degrees of freedom. This we do by using Schrödinger perturbation theory, with the interaction Hamiltonian extracted from the action (16), to calculate contributions to the scattering amplitude beyond the Born approximation. With the expression for the scattering amplitude given by (32), which we assume to be correct for low momenta , we note that the renormalization scale can be determined from the contribution to second order in (with ). In such a second order calculation of the scattering amplitude the contributions from the dipole mode cannot be neglected, since the intermediate states are not restricted to low momenta. Thus, both the field modes are included in the calculation, and the exact expressions for the mode frequencies are used rather than the low momentum approximations.
In the following we present a simple calculation of the leading contributions to the scattering amplitude. The result shows that the form of the amplitude is as expected and it gives an estimate of the renormalization scale . In Appendix B we perform a more complete calculation of some of the non-leading terms in the scattering amplitudes and give the corresponding expressions for the scattering amplitude, for both red and blue detuning.
V. The scattering amplitude to second order
Consider the T-matrix, related to the scattering matrix by , where and are the energies of the final and initial states. To second order has the form
where we here have simplified the notation by the sum over field modes, longitudinal momenta and polarization variables in the intermediate state. The form of the interaction matrix element is
where we have approximated the energy of the incoming photons and ) by , since these are in the lowest longitudinal mode. These photons have the same polarization vector , while the particles (“polaritons”) in the intermediate state have polarization vectors and . The quantum numbers and determine the longitudinal momenta of the particles in the intermediate state.
In the low-momentum approximation the T-matrix element is independent of the sum of the transverse momenta, , i.e., it is Galilean invariant. This follows since and can be approximated by in all places, except in the energy denominator when the intermediate particles are also in the lowest longitudinal mode. In that case the pole at makes the -dependence important. However, due to momentum conservation, we have in the low-energy approximation
where is the relative momentum. The expression shows that can be absorbed in the integration variable .
Thus, the T-matrix has the following momentum dependence
where and are the sum of momenta for the outgoing and incoming particles and and are the relative momenta. For pure s-wave scattering (delta function interaction) the reduced T-matrix element only depends on the magnitude of the relative momentum,
and is related to the s-wave scattering amplitude through
Since we are interested in the behaviour of the scattering amplitude for small transverse momenta, in the first order expression we simply put them equal to zero. With all photons in the same helicity state, the first order contribution is
in accordance with the expression for the low-energy Lagrangian (27). To second order there are three diagrams shown in fig. 2. Diagram 2a corresponds to two particles in the intermediate states, while diagrams 2b and 2c correspond to four and six particles in the intermediate states. Thus, the contributions from diagrams 2b and 2c are suppressed by the energy denominator, since the energy difference between the two initial and the four or six intermediate particles necessarily has to be large on the scale set by the transverse momentum. For this reason we shall only consider contributions from diagram 2a. Note that there are two types of particles in the intermediate state, characterized by energies and . As an important point also note that the transverse momenta of the intermediate particle states cannot be assumed to be small, and also excitations to higher longitudinal momenta have to be considered. We now discuss how to calculate the leading contributions to diagram 2a.
Figure 2: Second order contributions to the scattering amplitude. Only contributions from diagram (a) are included in this paper, since the contributions from (b) and (c) are suppressed by a factor .
At low momenta, there is a potentially large contribution to diagram 2a, when the energy denominator vanishes, and as expected this will give rise to the logarithmically infrared divergent term in (32). This term is dominant in the limit of asymptotically small momenta where the approximations leading to (16) become exact.
The importance of the high momentum contribution comes from the fact that does not increase with momentum, but rather approaches the resonance value , c.f. Fig. 1. Thus, even if intermediate states with frequencies close to are considered as highly excited relative to the low energy photons, the large number of dipoles may make contributions from these modes important. In fact, if the dipoles are treated as a continuum, the integral over intermediate momenta will diverge. In reality we know that there is a physical cutoff related to the discreteness of the system of dipoles. We introduce this simply as a cutoff in momentum at a value corresponding to the (average) distance between the dipoles.
With a clear separation of the scales in the momentum integrals the leading contributions from high and low momenta can be estimated separately. In order to see how this works we examine the following toy problem. Consider the integral
which can be evaluated exactly, to give
However, assuming
the integral can be estimated by the following approximation,
where the low and high momentum contributions are treated separately. This expression reproduces the exact result up to . Below we will examine the leading contributions to the second order scattering amplitude in this way, by evaluating separately the contributions from low and high momenta. In this calculation the detuning parameter and the photon mass will play the role of and in the toy problem. We refer to Appendix B for a more complete treatment.
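The paper's own toy integral is lost in the extraction; the sketch below checks the same scale-separation idea on an analogous two-scale integral of our own choosing, \(I=\int_0^{\Lambda} k\,dk/[(k^{2}+a^{2})(k^{2}+b^{2})]\) with \(a \ll b \ll \Lambda\), estimating the low-momentum (\(k \lesssim b\)) and high-momentum (\(k \gtrsim b\)) regions separately and comparing with the exact value.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative two-scale integral (a stand-in; the paper's own toy integral is not
# recoverable from the text): I = int_0^Lambda k dk / [(k^2+a^2)(k^2+b^2)], a << b << Lambda.
a, b, Lam = 1e-3, 1.0, 1e3

exact, _ = quad(lambda k: k / ((k**2 + a**2) * (k**2 + b**2)), 0, Lam)

# Low-momentum region (k <~ b): drop k^2 against b^2 in the second factor.
low, _ = quad(lambda k: k / ((k**2 + a**2) * b**2), 0, b)
# High-momentum region (k >~ b): drop a^2 against k^2 in the first factor.
high, _ = quad(lambda k: 1.0 / (k * (k**2 + b**2)), b, Lam)

print(f"exact       = {exact:.4f}")
print(f"low + high  = {low + high:.4f}")
print(f"leading log = {np.log(b / a) / b**2:.4f}   # ln(b/a)/b^2")
# The split reproduces the leading logarithm; the residual mismatch is an O(1/b^2) constant.
```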
V.1 The high-momentum contribution
For large momenta the important contribution comes from the term with two dipole excitations in the intermediate state. For these excitations we have and the momentum integral is divergent without the cutoff. The only effect of the coupling between the photons and the oscillators appears in the denominators of the form
where it prevents the expression from diverging when .
We can neglect contributions from the transverse momenta of the scattered (external) photons, since these are smaller by relative to the leading term. This means that the contribution gives rise to a -independent renormalization of the interaction strength. We also neglect terms that are higher order in the coupling strength . With for the external photon states and for the intermediate states the energy denominator is approximated by
and the matrix elements of the interaction get a simple form
For high momenta the summation over the polarization vectors gives simply a factor for each intermediate particle (as discussed in Appendix B) and the momentum integral and sum therefore become trivial, with a momentum-independent matrix element. Integrating over transverse momenta and summing over longitudinal momenta gives
where is a new dimensionless parameter. Since the dimensional reduction is not effective at high momenta (we have to sum also over the longitudinal momenta) this parameter is a characteristic of the full three-dimensional theory, and is a measure of the importance of renormalization of the nonlinear effects for intermediate momenta, i.e., for .
In order to obtain the finite result (48) we have introduced a cutoff, , in the momentum integration, where is the distance between the oscillators, and a cutoff in the discrete longitudinal momentum variable at . This means that we have , with as the 3D oscillator density.
V.2 The low-momentum contribution
The important low momentum contribution comes from the term with two photons in the intermediate state. The energy denominator then vanishes when the momenta of the intermediate photons match the ones of the external photons. Since all photons then are low momentum photons, for the leading contribution we can replace by the low energy expression (and by ) to get the energy denominator on the form
Here and are the transverse photon momenta of the intermediate state. In this case only the lowest longitudinal mode has to be included in the intermediate state. In the same way as for high momenta there are corrections, but they are suppressed by factors or , and we shall neglect them.
We note that an integration of the intermediate state momentum of this term alone gives rise to a logarithmic ultraviolet divergence. Eventually this divergence is of course cut off by the interparticle distance, but before that the integrand is suppressed by the factor
which will introduce an effective cutoff in the momentum integral at and thus provide a scale for the logarithm. If this is introduced as an explicit cutoff, the momentum integral, after the angular integration, gets the form
There will also be a constant ( independent) contribution, but as shown explicitly in Appendix B this is generally small compared to the leading high momentum contribution (48). With the relevant constants and symmetry factors included, the logarithmic contribution to is, for red and blue detuning,
There is also a term corresponding to one photon and one dipole in the intermediate state, but the real part of this is subleading relative to the terms already included and can therefore be omitted. However, for blue detuning it has an imaginary part
Although small compared to the leading contribution, this is the dominant imaginary part corresponding to the decay process discussed earlier. As a check on our calculations, we have verified that this imaginary part of the scattering amplitude is correctly related to the total inelastic cross section in (29).
V.3 The scattering amplitude
Combining (40), (48) and (52) we get the following approximation for the (one loop) scattering amplitude corresponding to red detuning
The expression is consistent with the expression for the scattering amplitude of a renormalized delta function interaction, when expanded to second order in the coupling strength. Resumming the diagram 2a as a geometrical series in fact gives the full scattering amplitude (32), if we set and define the renormalization scale by
The same expression is valid for blue detuning if we neglect the effect of scattering . Note, however, that the sign of is different in the two cases. The value of the exponent is
and we note that this (in absolute value) will be much larger than with the assumptions about parameter values which we have made.
The choice of renormalization scale in (55) is, however, misleading in that the logarithm becomes large. The definition amounts to a more natural choice, since the interpretation of the photons as massive non-relativistic particles only makes sense for .
and relative to the bare (first order) coupling constant , the change in remains small as long as the logarithm is small and . But depending on the parameter values, may in reality become large and give rise to a significant renormalization effect. For large we have
and we note that in this limit the effective interaction is independent both of momentum and of the anharmonicity parameter of the oscillator spectrum. The detuning parameter (and not the anharmonicity parameter ) now determines the sign of the effective coupling, with repulsive interaction for red detuning and attractive interaction for blue detuning.
V.4 Comparison with the Kerr nonlinear susceptibility coefficient for two-level atoms
Since the renormalization parameter is proportional to , it easily gets large for small detuning, , as shown explicitly for the case of photons interacting with Rydberg atoms in the section below. The renormalized coupling constant (which then is much smaller than the bare coupling constant ) should then be interpreted as the physical interaction parameter. One should, however, note that the expression we have found for is not based on a systematic expansion in , but rather on resumming “dangerous” terms in the expansion. There will be other contributions to the renormalized coupling, but these are parametrically small, i.e., suppressed by powers of small ratios like or . Without resumming other parts of the perturbation series, we cannot determine in which parameter range these terms can be neglected. For the following estimates we shall simply assume that we are in that range.
The interpretation of as the physical interaction parameter is reinforced by the fact that the expression we have found (58), depends on the detuning parameter , the effective plasma frequency , and the atomic number density in exactly the same way as the non-linear Kerr susceptibility coefficient for a dilute gas of two-level atoms, as obtained by Grischkowsky Grischkowsky ,
with as the dipole matrix element (for one component of the dipole vector) connecting the two states of the two-level atom. Rewritten in our notation,
which gives
To compare the expressions, we write the 3D susceptibility, extracted from our effective action (16), in terms of the dimensionless bare coupling constant ,
If we assume that the 3D susceptibility renormalizes in the same way as the 2D dimensionless (the renormalization comes from high where the dimensional reduction from 3D to 2D is not relevant), then
and since , our expression for is very close to Grischkowsky’s.
That the factor is very close to one is of no significance, since our coefficient depends on the details of the ultraviolet cutoff. What is relevant, however, is that the two quite different approaches give essentially the same result for the susceptibility, and also that our result is independent of the bare non-linear coupling.
VI. Scales, Bose-Einstein condensation and two-photon bound states
Based on the discussion of the effective photon-photon interaction in the previous sections, we will now consider some of the physical aspects of the formation of a two-dimensional photon fluid. We first summarize the important parameters that characterize the photon system, and then discuss the conditions under which two particularly interesting phenomena could occur: the formation of a two dimensional Bose-Einstein condensate, and the formation of two-photon bound states. We should stress that both these effects are essentially quantum mechanical, and cannot be described by classical non-linear optics.
VI.1 Important scales
The strength of the mixing between the photons and the oscillators is given by the effective plasma frequency defined by
and mixing becomes important when . From now on we shall restore factors of and in the formulas.
The (unrenormalized) interaction strength is given by the dimensionless coupling constant ,
The importance of the non-linear loop corrections to the interaction strength is, for momenta |
43fa99da407a9fb0 | The Quantum Harmonic Oscillator with Time-Dependent Boundary Condition in the Causal Interpretation
One of the challenging aspects of quantum mechanics is to derive the classical world from the quantum world. In this Demonstration, we discuss the dynamics of quantum particles bouncing between two oscillating, infinitely high walls. The shape of the bottom of the well is described by a harmonic oscillator potential with a time-dependent frequency . In the causal approach, the quantum potential governs the dynamics of the quantum particle; the link between classical Newtonian mechanics and the quantum world is the quantum potential. The total effective potential is the sum of the potential and the quantum potential, which leads to a time-dependent quantum force. In a single nonstationary state, the quantum particles in the ensemble therefore behave classically, in spite of having an associated wave function that satisfies the Schrödinger equation.
The graphics show the squared wavefunction and the trajectories in - space on the right, and on the left the position of the particles, the squared wavefunction (blue), the total effective potential (red), and the harmonic potential (cyan). The potentials are appropriately scaled to fit.
Contributed by: Klaus von Bloh (March 2011)
Open content licensed under CC BY-NC-SA
Exact nontrivial analytic solutions of the trajectory equation in the Bohm approach can be found only for a very limited number of cases. Fortunately, for a single nonstationary state, an analytical solution exists for the trajectories in both the classical and quantum treatments. In this special case, the classical and the quantum trajectories are the same (up to a constant factor), because the quantum potential has a space-independent amplitude. The wavefunction obeys the Schrödinger equation with the potential term . The auxiliary function is defined by , from which the analytical wavefunction is given by , with , where is the mass and , and so on. The function is the time-dependent length of the moving walls, , , , with the initial width , the amplitude , and the frequency . In the quantum case, the analytical solution for the quantum trajectories is derived from the phase of the wavefunction in the eikonal form: . Therefore, the equation for the quantum trajectory with is given by .
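As an illustration of how causal trajectories are computed from a known wavefunction, the sketch below integrates the guidance equation \( \dot{x} = (\hbar/m)\,\mathrm{Im}(\partial_x\psi/\psi) \) for a freely spreading Gaussian packet. This is a generic example with assumed parameters, not the Demonstration's oscillator-with-moving-walls solution.

```python
import numpy as np

hbar, m = 1.0, 1.0            # natural units (illustrative choice)
sigma0, k0 = 1.0, 2.0         # initial packet width and mean wave number (assumed values)

def psi(x, t):
    """Freely spreading Gaussian packet; the x-independent normalization prefactor
    is omitted since it cancels in Im(psi'/psi)."""
    st = sigma0 * (1.0 + 1j * hbar * t / (2.0 * m * sigma0**2))
    return np.exp(-(x - hbar * k0 * t / m)**2 / (4.0 * sigma0 * st)
                  + 1j * k0 * x - 1j * hbar * k0**2 * t / (2.0 * m))

def bohm_velocity(x, t, dx=1e-5):
    """Guidance velocity v = (hbar/m) Im(d_x psi / psi), via a central difference."""
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2.0 * dx)
    return hbar / m * np.imag(dpsi / psi(x, t))

# Integrate an ensemble of trajectories with a simple Euler step.
x = np.linspace(-2.0, 2.0, 9)     # initial positions spread across the packet
dt, nsteps = 1e-3, 5000
for n in range(nsteps):
    x = x + dt * bohm_velocity(x, n * dt)

print("positions at t = 5:", np.round(x, 3))
```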
[1] A. J. Makowski and P. Peplowski, "On the Behaviour of Quantum Systems with Time-Dependent Boundary Conditions," Physics Letters A 163(3), 1992 pp. 143–151.
|
bae4b5b2f1cf15ec | [[Image(picture_11.png, 700px, class=align-center)]]
This nanoHUB "topic page" provides an easy access to selected nanoHUB educational material on computational electronics that is openly accessible.
* [http://www.nanohub.org/contribute/ Contribute content] by uploading it to the nanoHUB. (See "Contribute Content" on the nanoHUB main page.)
* Let us know when things do not work by filing a ticket through the nanoHUB "Help" feature on every page.
* Finally, let us know what you are doing and share [http://www.nanohub.org/feedback/suggestions/ your suggestions] for improving the nanoHUB by using the "Feedback" section, which you can find under "[http://www.nanohub.org/support/ Support]"
Thank you for using the nanoHUB, and be sure to [http://www.nanohub.org/feedback/success_story/ share your nanoHUB success stories] with us. We like to hear from you, and our sponsors need to know that the nanoHUB is having an impact.
[[Image(intro1.png, 250px, class=align-right)]]
[[Image(intro2.png, 250px, class=align-right)]]
[[Image(intro3.png, 250px, class=align-left)]]
== Energy Bands and Effective Masses ==
=== [/tools/acute/ Piecewise Constant Potential Barrier Tool in ACUTE]– Open Systems ===
[[Image(pcpbt.png, 200px, class=align-left)]]
* [[Resource(4831)]]
* [[Resource(4833)]]
* [[Resource(4853)]]
* [[Resource(4873)]]
* [[Resource(5319)]]
* [[Resource(4849)]]
* [[Resource(5102)]]
* [[Resource(5130)]]
[[Image(ppl.png, 250px, class=align-left)]]
* [[Resource(4851)]]
=== [/tools/acute/ Band Structure Lab in ACUTE] ===
[[Image(bsl.png, 250px, class=align-left)]]
Band Structure Lab uses the ''sp3d5s*'' tight-binding method to compute the dispersion (E-k) of bulk, planar, and nanowire semiconductors. In advanced applications, users can apply tensile and compressive strain and observe the variation in the band structure, bandgaps, and effective masses. Advanced users can also study band-structure effects in ultra-scaled (thin-body) quantum wells and nanowires of different cross sections.
* [[Resource(5201)]]
* [[Resource(5031)]]
* [[Resource(4890)]]
* [[Resource(4880)]]
==Drift-Diffusion and Energy Balance Simulations==
=== [/tools/acute/ PADRE Tool in ACUTE]—Modeling of silicon-based devices===
[[Image(padre.png, 250px, class=align-left)]]
[/tools/acute/ PADRE Tool in ACUTE] is a two-dimensional/three-dimensional simulator for electronic devices, such as MOSFET transistors.
Listed below are tools, exercises, and sets of problems that utilize the [/tools/acute/ PADRE Tool in ACUTE]:
* [[Resource(229)]]
* [[Resource(4894)]]
* [[Resource(4896)]]
* [[Resource(452)]]
* [[Resource(4906)]]
* [[Resource(3984)]]
* [[Resource(5051)]]
Supplemental documentation:
* [[Resource(1516)]]
* [[Resource(980)]]
===SILVACO Simulator—Modeling of Silicon-Based and III-V Devices===
In preparation.
== Particle-Based Simulators ==
[[Image(scattering.png, 250px class=align-left)]]
[[Image(mc.png, 250px, class=align-right)]]
The [/tools/acute/ Bulk Monte Carlo Lab in ACUTE] calculates the bulk values of the electron drift velocity, electron average energy, and electron mobility for electric fields applied in an arbitrary crystallographic direction in both column IV (silicon and germanium) and III-V (gallium arsenide, silicon carbide and gallium nitride) materials. All relevant scattering mechanisms for the materials being considered have been included in the model.
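As a schematic illustration of the ensemble Monte Carlo idea behind such a simulator (free flight in the applied field, interrupted by randomly sampled scattering events), here is a minimal single-valley sketch in reduced units with one constant-rate, isotropic, elastic scattering mechanism. This is an assumed toy model for illustration, not the Bulk Monte Carlo Lab's actual physics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reduced units (m* = 1, e = 1), parabolic single valley, one isotropic elastic
# scattering mechanism with constant rate GAMMA -- an assumed toy model.
E_FIELD, GAMMA = 0.5, 1.0
N_PART, N_STEPS, DT = 2000, 4000, 0.01

k = np.zeros((N_PART, 3))                            # momenta (= velocities for m* = 1)
t_free = -np.log(rng.random(N_PART)) / GAMMA         # exponential free-flight times

v_drift = []
for step in range(N_STEPS):
    k[:, 0] += E_FIELD * DT                          # acceleration during free flight
    t_free -= DT
    hit = t_free <= 0.0
    n = int(hit.sum())
    if n:
        kmag = np.linalg.norm(k[hit], axis=1)        # elastic: keep |k|, randomize direction
        cos_t = 2.0 * rng.random(n) - 1.0
        phi = 2.0 * np.pi * rng.random(n)
        sin_t = np.sqrt(1.0 - cos_t**2)
        k[hit] = kmag[:, None] * np.column_stack(
            (sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t))
        t_free[hit] = -np.log(rng.random(n)) / GAMMA
    v_drift.append(k[:, 0].mean())

print("steady-state drift velocity ~", round(float(np.mean(v_drift[N_STEPS // 2:])), 3))
```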
An A/V presentation is also available:
* [[Resource(5047)]]
* [[Resource(5277)]]
* [[Resource(5275)]]
* [[Resource(5321)]]
* [[Resource(5323)]]
[[Image(quamc2d1.png, 250px, class=align-left)]] [[Image(quamc2d2.png, 250px, class=align-left)]]
[/tools/acute/ Quamc2D Lab in ACUTE]
QuaMC 2D (pronounced "quam-see") is a quasi three-dimensional quantum-corrected semi-classical Monte-Carlo transport simulator for conventional and non-conventional MOSFET devices.
* [[Resource(4520)]]
* [[Resource(4543)]]
* [[Resource(4443)]]
* [[Resource(4439)]]
* [[Resource(5127)]]
===Thermal Particle-Based Device Simulator===
In preparation.
Exercises and Other Resources:
* [[Resource(5350)]]
==Inclusion of Quantum Corrections in Semiclassical Simulation Tools==
=== [/tools/acute/ Schred in ACUTE] ===
[[Image(schred.png, 250px, class=align-left)]]
[/tools/acute/ Schred in ACUTE] calculates the envelope wavefunctions and the corresponding bound-state energies in a typical MOS (Metal-Oxide-Semiconductor) or SOS (Semiconductor-Oxide-Semiconductor) structure and a typical SOI structure by self-consistently solving the one-dimensional Poisson and Schrödinger equations.
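The general structure of such a self-consistent calculation can be sketched as follows (a minimal dimensionless toy loop with an assumed parabolic confining potential and a single occupied subband; it is not Schred's actual model, discretization, or units):

```python
import numpy as np

# Minimal 1D Schrodinger-Poisson self-consistency loop in dimensionless units.
N, L = 200, 20.0
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
n_sheet = 0.05                                  # sheet density in the lowest subband (assumed)
V_ext = 0.5 * ((x - L / 2.0) / 5.0) ** 2        # assumed parabolic confining potential

V_H = np.zeros(N)                               # Hartree potential, updated self-consistently
for it in range(300):
    # Schrodinger step: finite-difference Hamiltonian (hbar = m = 1, hard walls).
    H = (np.diag(1.0 / dx**2 + V_ext + V_H)
         + np.diag(-0.5 / dx**2 * np.ones(N - 1), 1)
         + np.diag(-0.5 / dx**2 * np.ones(N - 1), -1))
    E, psi = np.linalg.eigh(H)
    phi0 = psi[:, 0] / np.sqrt(dx)              # normalized ground-state envelope
    rho = n_sheet * phi0**2                     # density from the lowest subband only

    # Poisson step: V'' = -rho with V(0) = V(L) = 0 (repulsive Hartree term).
    A = (np.diag(-2.0 * np.ones(N - 2)) + np.diag(np.ones(N - 3), 1)
         + np.diag(np.ones(N - 3), -1)) / dx**2
    V_new = np.zeros(N)
    V_new[1:-1] = np.linalg.solve(A, -rho[1:-1])

    resid = np.max(np.abs(V_new - V_H))
    V_H = 0.9 * V_H + 0.1 * V_new               # linear mixing for stability
    if resid < 1e-6:
        break

print(f"iterations: {it + 1}, residual: {resid:.2e}, lowest subband energy: {E[0]:.4f}")
```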
To better understand the operation of [/tools/acute/ Schred in ACUTE] and the physics of MOS capacitors please refer to:
* [[Resource(4794)]]
* [[Resource(4796)]]
* [[Resource(5087)]]
* [[Resource(5127)]]
* [[Resource(4900)]]
* [[Resource(4902)]]
* [[Resource(4904)]]
[[Image(1dhet1.png, 180px, class=align-left)]] [[Image(1dhet2.png, 180px, class=align-left)]]
The [/tools/acute/ 1D Heterostructure Tool in ACUTE] simulates the confined states in one-dimensional heterostructures by self-consistently calculating their charge based on a quantum-mechanical description of the one-dimensional device. Increased interest in high electron mobility transistors (HEMTs) is due to the eventual limitations reached by scaling conventional transistors. The 1D Heterostructure Tool in ACUTE is a valuable aid in the design of HEMTs because the user can adjust parameters such as the position and magnitude of the delta-doped layer, the thickness of the barrier, and the spacer layer so as to maximize the number of free carriers in the channel, which in turn leads to a larger drive current.
* [[Resource(5231)]]
* [[Resource(5233)]]
The most commonly used semiconductor devices for applications in the GHz range now are gallium arsenide based MESFETs, HEMTs and HBTs. Although MESFETs are the cheapest devices because they can be realized with bulk material, i.e. without epitaxially grown layers, HEMTs and HBTs are promising devices for the near future. The advantage of HEMTs and HBTs compared to MESFETs is a higher power density (by a factor of two to three), which leads to a significantly smaller chip size.
HEMTs are field-effect transistors wherein the flow of the current between two ohmic contacts, known as the source and the drain, is controlled by a third contact, the gate. Such gates are usually Schottky contacts. In contrast to ion-implanted MESFETs, HEMTs are based on epitaxial layers with different band gaps.
==Quantum Transport==
In preparation.
[[Image(nanomos.png, 250px, class=align-left)]]
[/tools/acute/ nanoMOS in ACUTE] is a two-dimensional simulator for thin body (less than 5 nm), fully depleted, double-gated n-MOSFETs. Five transport models are available (drift-diffusion, classical ballistic, energy transport, quantum ballistic, and quantum diffusive). The transport models treat quantum effects in the confinement direction exactly, and the names indicate the technique used to account for carrier transport along the channel. Each of these transport models is solved self-consistently with Poisson's equation. Several internal quantities such as subband profiles, subband areal electron densities, potential profiles, and current-voltage (I/V) information can be obtained from the source code.
* [[Resource(2845)]]
* [[Resource(1533)]]
In preparation.
==Atomistic Modeling==
[[Image(modeling_agenda5.gif, 250px, class=align-left)]] [[Image(qdot.png, 250px, class=align-left)]]
[/tools/acute/ NEMO3D in ACUTE] calculates eigenstates in (almost) arbitrarily shaped semiconductor structures in the typical column IV and III-V materials. Atoms are represented by the empirical tight binding model using ''s'', ''sp3s*'', or ''sp3d5s*'' models with or without spin. Strain is computed using the classical valence force field (VFF) with various Keating-like potentials.
Users of [/tools/acute/ NEMO3D in ACUTE] can analyze quantum dots, alloyed quantum dots, long-range strain effects on quantum dots, the effects of wetting layers, piezo-electric effects in quantum dots, quantum-dot nuclear-spin interaction, quantum-dot phonon spectra, coupled quantum-dot systems, miscut silicon quantum wells with silicon-germanium alloy buffers, core-shell nanowires, alloyed nanowires, phosphorus impurities in silicon (P:Si qubits), and bulk alloys.
Boundary conditions to treat the effects of surface states have been developed. Direct and exchange interactions and interactions with electromagnetic fields can be computed in a post-processing approach based on the NEMO 3D single particle states.
* [[Resource(450)]]
* [[Resource(2925)]]
== Collection of tools that comprise ACUTE == |
12ecd7881a405c72 | digplanet beta 1: Athena
Share digplanet:
Applied sciences
Schematic representation of evanescent waves propagating along a metal-dielectric interface. The charge density oscillations, when associated with electromagnetic fields, are called surface plasmon-polariton waves. The exponential dependence of the electromagnetic field intensity on the distance away from the interface is shown on the right. These waves can be excited very efficiently with light in the visible range of the electromagnetic spectrum.
An evanescent wave is a near-field wave with an intensity that exhibits exponential decay without absorption as a function of the distance from the boundary at which the wave was formed. Evanescent waves are solutions of wave-equations, and can in principle occur in any context to which a wave-equation applies. They are formed at the boundary between two media with different wave motion properties, and are most intense within one third of a wavelength from the surface of formation. In particular, evanescent waves can occur in the contexts of optics and other forms of electromagnetic radiation, acoustics, quantum mechanics, and "waves on strings".[1][2]
Evanescent wave applications
In optics and acoustics, evanescent waves are formed when waves traveling in a medium undergo total internal reflection at its boundary because they strike it at an angle greater than the so-called critical angle.[1][2] The physical explanation for the existence of the evanescent wave is that the electric and magnetic fields (or pressure gradients, in the case of acoustical waves) cannot be discontinuous at a boundary, as would be the case if there was no evanescent wave field. In quantum mechanics, the physical explanation is exactly analogous—the Schrödinger wave-function representing particle motion normal to the boundary cannot be discontinuous at the boundary.
Electromagnetic evanescent waves have been used to exert optical radiation pressure on small particles to trap them for experimentation, or to cool them to very low temperatures, and to illuminate very small objects such as biological cells or single protein and DNA molecules for microscopy (as in the total internal reflection fluorescence microscope). The evanescent wave from an optical fiber can be used in a gas sensor, and evanescent waves figure in the infrared spectroscopy technique known as attenuated total reflectance.
In electrical engineering, evanescent waves are found in the near-field region within one third of a wavelength of any radio antenna. During normal operation, an antenna emits electromagnetic fields into the surrounding nearfield region, and a portion of the field energy is reabsorbed, while the remainder is radiated as EM waves.
Recently, a graphene-based Bragg grating (one-dimensional photonic crystal) has been fabricated and demonstrated its competence for excitation of surface electromagnetic waves in the periodic structure using a prism coupling technique.[3]
In quantum mechanics, the evanescent-wave solutions of the Schrödinger equation give rise to the phenomenon of wave-mechanical tunneling.
In microscopy, systems that capture the information contained in evanescent waves can be used to create super-resolution images. Matter radiates both propagating and evanescent electromagnetic waves. Conventional optical systems capture only the information in the propagating waves and hence are subject to the diffraction limit. Systems that capture the information contained in evanescent waves, such as the superlens and near field scanning optical microscopy, can overcome the diffraction limit; however these systems are then limited by the system's ability to accurately capture the evanescent waves.[4] The limitation on their resolution is given by
\[
k \propto \frac{1}{d} \ln\frac{1}{\delta},
\]
where \(k\) is the maximum wave vector that can be resolved, \(d\) is the distance between the object and the sensor, and \(\delta\) is a measure of the quality of the sensor.
More generally, practical applications of evanescent waves can be classified in the following way:
1. Those in which the energy associated with the wave is used to excite some other phenomenon within the region of space where the original traveling wave becomes evanescent (for example, as in the total internal reflection fluorescence microscope)
2. Those in which the evanescent wave couples two media in which traveling waves are allowed, and hence permits the transfer of energy or a particle between the media (depending on the wave equation in use), even though no traveling-wave solutions are allowed in the region of space between the two media. An example of this is so-called wave-mechanical tunnelling, and is known generally as evanescent wave coupling.
Total internal reflection of light
Top to bottom: representation of a refracted incident wave and an evanescent wave at an interface.
For example, consider total internal reflection in two dimensions, with the interface between the media lying on the x axis, the normal along y, and the polarization along z. One might naively expect that for angles leading to total internal reflection, the solution would consist of an incident wave and a reflected wave, with no transmitted wave at all, but there is no such solution that obeys Maxwell's equations. Maxwell's equations in a dielectric medium impose a boundary condition of continuity for the components of the fields E||, H||, Dy, and By. For the polarization considered in this example, the conditions on E|| and By are satisfied if the reflected wave has the same amplitude as the incident one, because these components of the incident and reflected waves superimpose destructively. Their Hx components, however, superimpose constructively, so there can be no solution without a non-vanishing transmitted wave. The transmitted wave cannot, however, be a sinusoidal wave, since it would then transport energy away from the boundary, but since the incident and reflected waves have equal energy, this would violate conservation of energy. We therefore conclude that the transmitted wave must be a non-vanishing solution to Maxwell's equations that is not a traveling wave, and the only such solutions in a dielectric are those that decay exponentially: evanescent waves.
Mathematically, evanescent waves can be characterized by a wave vector where one or more of the vector's components has an imaginary value. Because the vector has imaginary components, it may have a magnitude that is less than its real components. If the angle of incidence exceeds the critical angle, then the wave vector of the transmitted wave has the form
\[
\mathbf{k} = k_y \hat{\mathbf{y}} + k_x \hat{\mathbf{x}} = i \alpha\, \hat{\mathbf{y}} + \beta\, \hat{\mathbf{x}},
\]
which represents an evanescent wave because the y component is imaginary. (Here \(\alpha\) and \(\beta\) are real and \(i\) represents the imaginary unit.)
For example, if the polarization is perpendicular to the plane of incidence, then the electric field of any of the waves (incident, reflected, or transmitted) can be expressed as
\[
\mathbf{E}(\mathbf{r},t) = \mathrm{Re}\left\{ E(\mathbf{r})\, e^{i \omega t} \right\} \hat{\mathbf{z}},
\]
where \(\hat{\mathbf{z}}\) is the unit vector in the z direction.
Substituting the evanescent form of the wave vector k (as given above), we find for the transmitted wave:
\[
E(\mathbf{r}) = E_o\, e^{-i ( i \alpha y + \beta x ) } = E_o\, e^{\alpha y - i \beta x },
\]
where \(\alpha\) is the attenuation constant and \(\beta\) is the propagation constant.
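As a small numerical illustration of this decay, the attenuation constant for total internal reflection is \(\alpha = k_0\sqrt{n_1^{2}\sin^{2}\theta - n_2^{2}}\), giving a field \(1/e\) penetration depth of \(1/\alpha\). The snippet below evaluates it for an assumed glass-air interface and wavelength.

```python
import numpy as np

# 1/e penetration depth of the evanescent field for total internal reflection
# at a glass-air interface; n1, n2 and the wavelength are assumed example values.
n1, n2 = 1.5, 1.0
wavelength = 500e-9                        # vacuum wavelength in metres
k0 = 2.0 * np.pi / wavelength

theta_c = np.degrees(np.arcsin(n2 / n1))   # critical angle, about 41.8 degrees
print(f"critical angle = {theta_c:.1f} deg")
for theta_deg in (45, 50, 60, 75):
    theta = np.radians(theta_deg)
    alpha = k0 * np.sqrt((n1 * np.sin(theta))**2 - n2**2)   # attenuation constant
    print(f"theta = {theta_deg:2d} deg -> field penetration depth = {1e9 / alpha:6.1f} nm")
```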
Evanescent-wave coupling
In optics, evanescent-wave coupling is a process by which electromagnetic waves are transmitted from one medium to another by means of the evanescent, exponentially decaying electromagnetic field.[5]
plot of 1/e-penetration depth of the evanescent wave against angle of incidence in units of wavelength for different refraction indices
Coupling is usually accomplished by placing two or more electromagnetic elements such as optical waveguides close together so that the evanescent field generated by one element does not decay much before it reaches the other element. With waveguides, if the receiving waveguide can support modes of the appropriate frequency, the evanescent field gives rise to propagating-wave modes, thereby connecting (or coupling) the wave from one waveguide to the next.
Evanescent-wave coupling is fundamentally identical to near field interaction in electromagnetic field theory. Depending on the impedance of the radiating source element, the evanescent wave is either predominantly electric (capacitive) or magnetic (inductive), unlike in the far field where these components of the wave eventually reach the ratio of the impedance of free space and the wave propagates radiatively. The evanescent wave coupling takes place in the non-radiative field near each medium and as such is always associated with matter; i.e., with the induced currents and charges within a partially reflecting surface. This coupling is directly analogous to the coupling between the primary and secondary coils of a transformer, or between the two plates of a capacitor. Mathematically, the process is the same as that of quantum tunneling, except with electromagnetic waves instead of quantum-mechanical wavefunctions.
References
1. Tineke Thio (2006). "A Bright Future for Subwavelength Light Sources". American Scientist 94 (1): 40–47. doi:10.1511/2006.1.40.
2. Marston, Philip L.; Matula, T. J. (May 2002). "Scattering of acoustic evanescent waves...". Journal of the Acoustical Society of America 111 (5): 2378. Bibcode:2002ASAJ..111.2378M. doi:10.1121/1.4778056.
3. Sreekanth, Kandammathe Valiyaveedu; Zeng, Shuwen; Shang, Jingzhi; Yong, Ken-Tye; Yu, Ting (2012). "Excitation of surface electromagnetic waves in a graphene-based Bragg grating". Scientific Reports 2. Bibcode:2012NatSR...2E.737S. doi:10.1038/srep00737. PMC 3471096. PMID 23071901.
4. Neice, A. (July 2010). "Methods and Limitations of Subwavelength Imaging". Advances in Imaging and Electron Physics, Vol. 163.
5. Zeng, Shuwen; Yu, Xia; Law, Wing-Cheung; Zhang, Yating; Hu, Rui; Dinh, Xuan-Quyen; Ho, Ho-Pui; Yong, Ken-Tye (2013). "Size dependence of Au NP-enhanced surface plasmon resonance based on differential phase measurement". Sensors and Actuators B: Chemical 176: 1128. doi:10.1016/j.snb.2012.09.073.
6. Fan, Zhiyuan; Zhan, Li; Hu, Xiao; Xia, Yuxing (2008). "Critical process of extraordinary optical transmission through periodic subwavelength hole array: Hole-assisted evanescent-field coupling". Optics Communications 281 (21): 5467. Bibcode:2008OptCo.281.5467F. doi:10.1016/j.optcom.2008.07.077.
7. Karalis, Aristeidis; Joannopoulos, J. D.; Soljačić, Marin (February 2007). "Efficient wireless non-radiative mid-range energy transfer". Annals of Physics 323: 34. arXiv:physics/0611063v2. Bibcode:2008AnPhy.323...34K. doi:10.1016/j.aop.2007.04.017.
8. Biever, Celeste (15 November 2006). "'Evanescent coupling' could power gadgets wirelessly". NewScientist.com.
9. "Wireless energy could power consumer, industrial electronics". MIT press release.
10. Axelrod, D. (1 April 1981). "Cell-substrate contacts illuminated by total internal reflection fluorescence". The Journal of Cell Biology 89 (1): 141–145. doi:10.1083/jcb.89.1.141. PMC 2111781. PMID 7014571.
Original courtesy of Wikipedia: http://en.wikipedia.org/wiki/Evanescent_wave
|
ec1ad531e315ed2f | Quantum Gravity and String Theory
1307 Submissions
[11] viXra:1307.0171 [pdf] replaced on 2014-12-23 08:12:32
Toward a Unified Treatment of Space and Time
Authors: Patrick S. Walters
Comments: 24 Pages. 5 Figures. Major revision from previous version.
A brief background to the problem of relativistic compatibility is given. A concept for unified relativistic treatment of space, time, and gravity is put forward. A set of transformation matrices are proposed to handle the relativistic Special and General Theories simultaneously, with consequences for space-time, fundamental particles, and fundamental forces following from the mathematical structure of the solution.
Category: Quantum Gravity and String Theory
[10] viXra:1307.0166 [pdf] submitted on 2013-07-30 09:17:48
Cosmological Observations as a Hidden Key to Quantum Gravity
Authors: Michael A. Ivanov
Comments: 5 Pages.
Some important consequences of the author's model of low-energy quantum gravity are described, which make it possible to re-interpret such cosmological observations as redshifts of remote objects and the dimming of Supernovae Ia without any expansion of the Universe and without dark energy, but as manifestations of quantum gravity.
Category: Quantum Gravity and String Theory
[9] viXra:1307.0157 [pdf] replaced on 2013-07-28 23:27:24
“Refining Black Hole Physics to Obtain Planck’s Constant from Information Shared from Cosmological Cycle to Cycle (Avoiding Super-Radiance)”
Authors: A. Beckwith
Comments: 8 Pages. fixed 3 small typos , which are to adhere to the flow of logic of the paper
Padmanabhan [1] elucidated the concept of super-radiance in black hole physics, which would lead to loss of mass of a black hole, and loss of angular momentum, due to infall of material into the black hole. As Padmanabhan explained it, to avoid super-radiance, and probable breakdown of black holes from infall, one would need the frequency of the infalling material, divided by the mass of the particles undergoing infall into the black hole, to be greater than the angular velocity of the black hole event horizon in question. We should keep in mind that we bring this model up to improve the chance that Penrose’s conformal cyclic cosmology will allow for retention of enough information for preservation of Planck’s constant from cycle to cycle, as a counterpart to what we view as unacceptable reliance upon the LQG quantum bounce and its tetrad structure to preserve memory. In addition we are presuming that at redshift z=20 there would be roughly the same order of magnitude of entropy as the number of operations in the electroweak era, and that the number of operations in the z=20 case is close to the entropy at redshift z=0.
Category: Quantum Gravity and String Theory
[8] viXra:1307.0086 [pdf] submitted on 2013-07-17 19:25:18
A Crazy It From A Misleading Bit: How A Zero-Referenced Fundamental Theorem of Calculus Loses Information And May Be Misleading Mathematical Physics
Authors: J.P. Baugher
Comments: 6 Pages. Essay submitted to http://www.fqxi.org/community/forum/topic/1900
Imagine for a moment an endless diamond, completely solid and pristine. No flaws or gaps in the structure. Suppose at some point in the density of the material something strange occurs in that a deformation appears out of nowhere which splits into two waves. Should these two waves interact again with each other, the deformation disappears. However, should they separate enough then one of the waves, which we shall call the baryon wave, is stable by itself. This wave has the strange property that it is a traveling decrease in density of the diamond. There is no "other" material, only the decrease in something we shall have to think of as the vacuum (energy) density. The wave apparently has the ability to pass through or combine into structures with other baryon waves but has no ability to disappear back into a pristine diamond if there is no second deformation wave present, which we do not cover in this essay. Suppose that a certain combination of these first waves were to become sentient. Would they be able to detect that they are a moving wave, or would their perceptions lead them to a misunderstanding of how these waves affect the very substance that they are traveling within, and so not realize there exists another class of solutions for tensors (scalars, vectors and so on)? Is there a way to determine that the actual density is a fairly good model for their universe? If so, what mathematics would be required in order to describe it much better than other models which the sentient waves have based on their physical perceptions? It is due to this question that we present our proposed answer of a modification of calculus. In order to fully describe the baryon wave, the dimensions they create with their presence, the limited radius distortions they cause and even the substance itself, we must re-evaluate our understanding of calculus in order to model them as the derivatives of finite Action area integrals. We propose that in order to understand how the universe stores information, we must have a foundational basis for these areas, and in order to understand how it processes information we must ensure that we have within the literature all classes of its derivatives (directional derivatives, divergence, etc.). We do not go into details in this essay, but our proposed future path is to accomplish this via a modification of Gunnar Nordstroem's gravitational theory, an early competing model to General Relativity worked on by Nordstroem, Einstein and Fokker (see [1] for a recent review). This model was discarded by Einstein and others since it did not predict gravitational lensing, a problem which our modification would seem to have the possibility of remedying (see final assertions). Therefore in this essay we introduce our different viewpoint of calculus, named "Area" Calculus in order to distinguish the concept from the mainstream variety, which we will refer to as "Single Function" Calculus.
Category: Quantum Gravity and String Theory
[7] viXra:1307.0085 [pdf] replaced on 2014-11-03 12:02:08
The Source of the Gravitational Constant at the Low Energy Scale
Authors: Gene H Barbee
Comments: 22 Pages. contact genebarbee@msn.com
In general relativity, gravity is attributed to the geometry of space-time. The literature states that the gravitational constant (G) originates at the Planck scale. The Compton wavelength (Planck length) L = (ħG/c^3)^0.5 is 1.61e-35 meters, and this is associated with the Planck energy 1.2e22 MeV. This energy is far greater than the energy of a proton, and the space surrounding each proton is far greater than the Compton wavelength. It is generally accepted that the Compton wavelength is nature's response to geometry and mass at the quantum scale. In this paper, the author discusses the hierarchy of interactions with a focus on gravity, proposes a low energy scale source of the gravitational constant, and identifies a more fundamental coupling constant with the value 1/exp(90). A unique cellular approach is used to model expansion. A cell is the space associated with a proton mass and has cosmological properties that allow it to represent the universe geometrically. Each cell has an initial radius of 7.22e-14 meters and, if it expands according to the concordance model with WMAP parameters, its current value is 0.54 meters. WMAP data allows one to estimate the number of protons in the universe. By using this approach, it is possible to compare the kinetic energy that expands cells with potential energy. Implications for the fractions of dark energy, baryons and cold dark matter are discussed. Several examples involving the use of the value 1/exp(90) are presented that demonstrate how cellular values predict large scale observations. Key Words: gravitational constant, cellular approach, cosmology, WMAP.
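A quick numerical check of the Planck-scale figures quoted in the abstract (illustrative only, not taken from the paper):

```python
# Recompute the Planck length and Planck energy from standard constants
# to check the ~1.61e-35 m and ~1.2e22 MeV figures quoted above.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # Newton's constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s
eV = 1.602176634e-19     # joules per electron-volt

l_planck = (hbar * G / c**3) ** 0.5        # ~1.616e-35 m
E_planck = (hbar * c**5 / G) ** 0.5        # ~1.96e9 J
print(f"Planck length : {l_planck:.3e} m")
print(f"Planck energy : {E_planck / eV / 1e6:.3e} MeV")   # ~1.22e22 MeV
```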
Category: Quantum Gravity and String Theory
[6] viXra:1307.0065 [pdf] replaced on 2013-07-17 08:01:21
The Gravitational Force
Authors: George Rajna
Comments: 5 Pages.
Category: Quantum Gravity and String Theory
[5] viXra:1307.0058 [pdf] submitted on 2013-07-11 20:34:49
Digital String Theory Deletes Quark Stars Explains Black Holes
Authors: Rodney Bartlett
Comments: 5 Pages.
"Eminent Princeton physicist Ed Witten famously conjectured that the true ground state of matter (in the sense of the lowest energy per particle) consists of a mixture of roughly equal numbers of up, down, and strange quarks, with enough electrons thrown in to ensure that this soup is electrically neutral." ("Is it possible for a Quark Star to exist?" by Victoria Kaspi - Astronomy magazine, June 2013) Scientists have never demonstrated this conjecture to be true, and don't have evidence that stars made of such matter ("quark stars") exist. This reminds me of something Stephen Hawking and Leonard Mlodinow wrote on p.49 of “The Grand Design” (Bantam Press, 2010) – “It is certainly possible that some alien beings with seventeen arms, infrared eyes and a habit of blowing clotted cream out their ears would make the same experimental observations that we do (regarding the existence of quarks), but describe them without quarks.” In a similar way, we non-aliens with two arms, eyes that respond to visible light and no clotted cream in our ears could conceive of a quark-electron mixture forming Quark Stars only to find that it actually describes a different mixture (of binary digits) forming black holes instead of quark stars. E=mc^2 is referred to in the description of the conversion from gravitational energy to the "coherent, organized energy" that is matter; and also in the description of gravitational energy producing mass in black holes. We should remember that E=mc^2 appears to only be partly correct because the highest speed possible is Lightspeed. Physically speaking, it cannot be multiplied. Einstein himself proved this. The equation E=mc^2 can be considered a degenerate form of the mass-energy-momentum relation for vanishing momentum. Einstein was very well aware of this, and in later papers repetitively stressed that his mass-energy equation is strictly limited to observers co-moving with the object under study. The version of the equation applicable here may be E=m/c^2*c^2. Referring to the paragraph which states “hidden variables called binary digits could ... allow time travel into the past by warping a 5D hyperspace” - With a single extra dimension of astronomical size, gravity is expected to cause the solar system to collapse (“The hierarchy problem and new dimensions at a millimetre” by N. Arkani-Hamed, S. Dimopoulos, G. Dvali - Physics Letters B - Volume 429, Issues 3–4, 18 June 1998, Pages 263–272, and “Gravity in large extra dimensions” by U.S. Department of Energy - http://www.eurekalert.org/features/doe/2001-10/dbnl-gil053102.php However, collapse never occurs if gravity accounts for repulsion as well as attraction. It does this not only on astronomical scales but on the subatomic, too. It accounts for dark energy and familiar concepts of gravity, as well as repelling aspects of the electroweak force [such as placing two like magnetic poles together] and attracting electroweak aspects like the strong force. “Electroweak” and “strong” force can be united in that sentence because, as we’ll see, gravitation and space-time are united with both the weak and strong nuclear forces.
Category: Quantum Gravity and String Theory
[4] viXra:1307.0037 [pdf] submitted on 2013-07-07 10:40:05
How to Solve Dark Energy, the Cosmological Constant Problems and Unify General Relativity With Quantum Field Theory in 7 Steps
Authors: J.P. Baugher
Comments: 1 Page.
Category: Quantum Gravity and String Theory
[3] viXra:1307.0026 [pdf] submitted on 2013-07-05 06:48:25
M Theory and the Unique Role Played by Prime Numbers in the Universe the Place of Zero
Authors: Michael Muteru
Comments: 1 Page.
In this paper I examine a possible mathematical conjecture that we have always missed or ignored. The physical universe, too, is deeply grounded in mathematics. Numbers are the purest of ideas, to quote the Pythagoreans. Prime numbers form the basis of the complex structure called mathematics; through these I believe we can decode the entire quantum information that the universe holds and, in essence, its true nature and reality.
Category: Quantum Gravity and String Theory
[2] viXra:1307.0014 [pdf] submitted on 2013-07-03 06:22:57
The Spine Model :Relativistic Generalization to Any N-Dimensional Spacetime
Authors: Shreyak Chakraborty
Comments: 7 Pages. Published in the New Age Journal of Physics (ID-EJ151)
We propose a semi-classical approach to string theoretic techniques in N-dimensional spacetimes. We name this 'The Spine Model'. This approach enables us to flexibly model particle dynamics in any given spacetime. We conjecture that fundamental strings (as described in standard string theories) are produced by more fundamental objects called "fibers". We also formulate the dynamics of these "fibers" using relativistic notation. We revisit and refine the overall structure and the concepts discussed in [1] and redefine the basic assumptions of the model. Finally, we calculate the total energy of a single fiber, show that under certain conditions these fibers can produce particles/strings, and, using a relativistic formulation, derive the Lorentz transformations for the fiber equations in 4-manifolds.
Category: Quantum Gravity and String Theory
[1] viXra:1307.0004 [pdf] submitted on 2013-07-01 09:37:52
Galactic Classification
Authors: James G. Gilson
Comments: 27 Pages.
There are two types of fundamental quantum gravitational mass amplitude states that are denoted by the subscripts D and P. The D amplitudes lead to Einstein's usual general relativity mass density functions. The P amplitudes lead to Einstein's additional pressure mass densities, 3P/c². Both of these densities appear in the stress-energy-momentum tensor of general relativity. Here they appear as solutions to a non-linear Schrödinger equation and carry three quantising parameters (lD,m) and (lP,m). The lD, lP values are subsets of the usual electronic quantum variable l, which is here denoted by l' to avoid confusion. The m parameter is exactly the same as the electronic quantum theory m, there the z-component of angular momentum. In this paper, these parametric relations are briefly displayed, followed by an account of the connection to the spherical harmonic functions symmetry system that is necessarily involved. Taken together, the two types of mass density can be integrated over configuration space to give quantised general relativity galactic masses in the form of cosmological mass spectra, as was shown in previous papers. Here this aspect has been extended to ensure that every galaxy component of the spectra has a quantised black hole core with a consequent quantised surface area. This is achieved by replacing the original free core radius parameter rε with the appropriate Schwarzschild radius associated with the core mass. Explanations are given for the choices of two further, originally free, parameters, tb and θ0. The main result from this paper is a quantum classification scheme for galaxies determined by the form of their dark matter spherical geometry.
Category: Quantum Gravity and String Theory |
303f67b3b048eebb | , Volume 43, Issue 10, pp 1233-1251
Date: 14 Sep 2013
The Pauli Exclusion Principle. Can It Be Proved?
The modern state of Pauli exclusion principle studies is discussed. The Pauli exclusion principle can be considered from two viewpoints. On the one hand, it asserts that particles with half-integer spin (fermions) are described by antisymmetric wave functions, and particles with integer spin (bosons) are described by symmetric wave functions. This is the so-called spin-statistics connection. The reasons why the spin-statistics connection exists are still unknown; see the discussion in the text. On the other hand, according to the Pauli exclusion principle, the permutation symmetry of the total wave functions can be only of two types, symmetric or antisymmetric; all other types of permutation symmetry are forbidden, although the solutions of the Schrödinger equation may belong to any representation of the permutation group, including the multi-dimensional ones. It is demonstrated that the proofs of the Pauli exclusion principle in some textbooks on quantum mechanics are incorrect and that, in general, the indistinguishability principle is insensitive to the permutation symmetry of the wave function and cannot be used as a criterion for the verification of the Pauli exclusion principle. Heuristic arguments are given that the existence in nature of only the one-dimensional permutation representations (symmetric and antisymmetric) is not accidental. As follows from the analysis of possible scenarios, permitting multi-dimensional representations of the permutation group leads to contradictions with the concept of particle identity and independence. Thus, the prohibition of the degenerate permutation states by the Pauli exclusion principle follows from the general physical assumptions underlying quantum theory. |
e51716a88baa0a92 | Viewpoint: A not-so-steady state
• Carsten A. Ullrich, Department of Physics and Astronomy, University of Missouri, Columbia, MO 65211, USA
Physics 3, 47
Theory that takes into account the time-dependent nature of electronic transport through a quantum dot reveals fluctuations that could affect rapid switching of nanoscale electronic devices.
Illustration: Alan Stonebraker
Figure 1: Steady-state picture of Coulomb blockade in a nanoscale tunneling junction in which a quantum dot is weakly connected to two leads. Electrons can tunnel only if the bias voltage V is large enough to line up the chemical potential of the left lead (μL) and an empty level. Otherwise, access is blocked due to the Coulomb repulsion caused by the filled N-electron level.
When roads are clear, traffic proceeds as a continuous flow of cars; if the road is blocked, everything comes to a standstill. However, anyone stuck in the office rush knows life is more complex—traffic tends to move in waves, often slowing down to an annoying stop-and-go. Traffic congestion also makes life difficult for electrons in nanodevices. We tend to think of charge carriers as experiencing a controlled flow, reaching steady state when the gates are opened and current is on. Now, in a study published in Physical Review Letters, a team of scientists from four European countries point to the subtle but significant deviations from steady-state behavior that appear if one looks at the time dependence of electrons traversing a nanoscale junction. Stefan Kurth at Universidad del País Vasco in San Sebastián, Spain, and colleagues in Italy, Sweden, and Germany predict that, on a femtosecond time scale, the current in a quantum dot junction is not in a steady state, as often assumed, but rather oscillates [1]. The amplitude of this oscillation depends on how fast the bias voltage across the dot is switched on, suggesting the importance of initial conditions in determining how a single-electron device will perform. Their paper recasts nanoscale transport as an intrinsically dynamic phenomenon, which has important practical implications for understanding and designing ultrafast nanoelectronic devices.
The charge transport through a quantum dot under weak bias and weakly coupled (by tunneling barriers) to a left and right lead (Fig. 1) is dominated by Coulomb blockade, where an electron already present on the dot prevents further electrons from tunneling in, unless the bias is significantly increased to supply the necessary charging energy. The central region of this system, consisting of the dot and the tunneling barriers, has a capacitance C. The electrostatic energy of a charge Q sitting on the dot is given by Q2/2C. To bring in an extra electron, the Coulomb repulsion due to the charge already present needs to be overcome, and for this to happen, the bias voltage (V) must increase by e/2C. The chemical potential of the left lead, μL, and the empty (N+1) level then line up and an electron can tunnel in. For all bias voltages below this limit, no current flows and the access to the dot is blocked—an extremely non-Ohmic nanoscale electronic device.
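For a rough sense of scale (an illustration, not taken from the article), the blockade energetics follow directly from the Q2/2C electrostatics quoted above; the capacitance value below is an arbitrary assumption:

```python
# Charging energy and bias step for a Coulomb-blockaded dot, using the Q^2/2C
# electrostatics described above. The 1 aF capacitance is an assumed example value.
e = 1.602176634e-19    # elementary charge, C
k_B = 1.380649e-23     # Boltzmann constant, J/K

C_dot = 1e-18                      # assumed capacitance of the central region, farads
E_charging = e**2 / (2 * C_dot)    # energy cost of adding one electron
dV = e / (2 * C_dot)               # bias increase needed to unblock tunneling

print(f"charging energy     : {E_charging / e * 1e3:.0f} meV")
print(f"bias step e/2C      : {dV * 1e3:.0f} mV")
print(f"thermal washout near: ~{E_charging / k_B:.0f} K")  # blockade smears out at this scale
```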
In this simple and well-established scenario of nanoscale transport [2], the Coulomb blockade regime is a steady nonequilibrium state of the system where the current is zero. Theorists who want to go beyond this simple model face several challenges: Coulomb blockade is intrinsically a many-body phenomenon in which a full treatment of electron-electron interaction effects is vital. Furthermore, the charges sitting on the central dot are not in equilibrium with the rest of the system and have to be forced to remain there by balancing the external bias and the internal Coulomb repulsion.
In recent years, a “standard model” has emerged for calculating transport characteristics of quantum dots, molecular junctions, and other nanodevices [3]. One first determines the electronic level structure in the central region (the “dot” in Fig. 1) using density-functional theory (DFT) and then finds the total current through the device with the so-called Landauer formula, which integrates over the electron distribution in the leads weighted by their tunneling transmission probabilities. In their work, Kurth et al. critically reexamine a key assumption of this Landauer-DFT approach, namely that the transport through the device can be treated as a steady state. By adopting a time-resolved view of how electrons interact on the dot, they show that this assumption may often not be correct.
Systems of interacting electrons—atoms, molecules, or solids—are characterized by their quantum mechanical wave functions, which are formally obtained from the many-body Schrödinger equation. The core tenet of DFT is that all physical observables in a system of interacting electrons can be expressed by the electronic ground-state density (the number of electrons per volume). To obtain this density, one uses a trick known as the Kohn-Sham approach: rather than dealing with N interacting electrons in some external potential, one gets the same density from a noninteracting system in an effective local single-particle potential that is determined self-consistently [4].
All complicated many-body effects are hidden in a part of this effective potential known as the “exchange-correlation potential.” There isn’t an exact expression for the exchange-correlation potential, but we know many of its properties and have a pretty good idea how to approximate it to obtain accurate electronic structures.
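To make the Kohn-Sham construction above concrete, here is a deliberately simplified self-consistency loop in one dimension. It is a caricature under stated assumptions (a harmonic external potential, a soft-Coulomb Hartree term, and a crude exchange-only guess for the exchange-correlation potential), not the model actually used by Kurth et al.:

```python
# Schematic Kohn-Sham self-consistency loop on a 1D grid (a caricature, not the
# calculation described in the article).
import numpy as np

N_GRID, L, N_ELEC = 200, 10.0, 2
x = np.linspace(-L / 2, L / 2, N_GRID)
dx = x[1] - x[0]
v_ext = 0.5 * x**2                       # assumed harmonic external potential

def hartree(n):
    # soft-Coulomb Hartree potential evaluated on the grid
    return np.array([np.sum(n * dx / np.sqrt((x - xi)**2 + 1.0)) for xi in x])

def v_xc(n):
    # crude exchange-only, LDA-like guess; a real calculation needs a proper functional
    return -(3.0 / np.pi * np.maximum(n, 0.0)) ** (1.0 / 3.0)

def solve_ks(v_eff):
    # finite-difference single-particle Hamiltonian: -1/2 d^2/dx^2 + v_eff
    T = (np.diag(np.full(N_GRID, 1.0 / dx**2))
         - np.diag(np.full(N_GRID - 1, 0.5 / dx**2), 1)
         - np.diag(np.full(N_GRID - 1, 0.5 / dx**2), -1))
    eps, phi = np.linalg.eigh(T + np.diag(v_eff))
    return eps, phi / np.sqrt(dx)        # orbitals normalized on the grid

n = np.full(N_GRID, N_ELEC / L)          # initial guess for the density
for it in range(50):
    eps, phi = solve_ks(v_ext + hartree(n) + v_xc(n))
    n_new = 2.0 * np.sum(phi[:, :N_ELEC // 2]**2, axis=1)   # doubly occupied orbitals
    if np.sum(np.abs(n_new - n)) * dx < 1e-6:
        break
    n = 0.7 * n + 0.3 * n_new            # linear mixing for stability
print("iterations run:", it + 1, "| lowest KS eigenvalue:", round(float(eps[0]), 4))
```

The structure, solve a single-particle problem in an effective potential, rebuild the density, update the potential, and repeat until self-consistent, is the same in any real DFT code; only the ingredients are more sophisticated.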
Illustration: Alan Stonebraker
Figure 2: Dynamical picture of Coulomb blockade. If the bias voltage allows tunneling, charge accumulates continuously (a) until an integer value is reached and the effective potential jumps up (b), so that the level is out of alignment with the chemical potential of the leads. Charge flows out again (c) and the potential jumps back down (d). The process repeats, leading to periodic potential jumps and current oscillations, as shown in the inset.
To describe Coulomb blockade in nanoscale transport it is particularly important to ensure the principle of charge quantization: in a steady state, the central dot should be occupied only by integer multiples of the electron charge. To guarantee this in a DFT calculation, the whole burden rests on a subtle correlation effect known as the “derivative discontinuity.” This means that the effective potential an electron experiences on the dot jumps by a constant when the number of electrons passes through an integer value [5]. Few existing approximations for the exchange-correlation potential have this property.
However, there is another dimension to this story. Nanoscale transport is a nonequilibrium phenomenon—but DFT is only a ground-state theory! This means that the proper formal framework in which to describe Coulomb blockade and other transport properties is the time-dependent version of DFT (known as TDDFT) [6], which, in principle, yields the exact time evolution of any interacting system. Time-dependent DFT has proven successful for calculating molecular excitation spectra. In contrast, the theory has been applied to only a few problems in transport [3].
Kurth et al.’s work is groundbreaking because it is the first time-dependent DFT study of transport through a nanoscale junction that incorporates charge quantization (the derivative discontinuity) into the effective interaction between the electrons on the quantum dot. This opens the possibility to study Coulomb blockade in real time and observe how electrons hop on and off the central dot, and block or grant access to additional charges.
Kurth et al. consider a simple one-dimensional model consisting of a single-level quantum dot coupled to two semi-infinite leads and assume that electronic interactions are only present in the central region. This interacting electronic system is then mapped onto a one-dimensional time-dependent Kohn-Sham system featuring an exchange-correlation potential that has the required derivative discontinuity property [7]. At the initial time, a finite bias voltage is suddenly switched on, and the time evolution of the system is calculated.
And this is where the surprise occurs: after switching on the bias, the system—unlike what more restricted models predict—does not evolve towards a steady state (Fig. 2)! It turns out that a steady Coulomb blockade state only exists if the voltage is switched on adiabatically; in that case, the Landauer-DFT results are recovered. In all other cases, one observes a transient phase followed by persistent and self-sustained current oscillations, which indicate a periodic charging and discharging of the dot, with an average occupation corresponding to the Coulomb blockade state.
The discontinuity of the exchange-correlation potential is responsible for this, as shown in Fig. 2. Driven by the bias, charge density is continuously accumulating on the dot until it reaches an integer value, at which point the potential jumps up, causing some of the charge to flow back out again, and the process repeats. The resulting current oscillations are more pronounced the faster the bias is initially switched on. Coulomb blockade thus emerges as an intrinsically time-dependent phenomenon, and this behavior cannot be captured with the Landauer-DFT “standard model.”
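The mechanism in Fig. 2 can be caricatured as a relaxation oscillator. The sketch below is my own toy model, not the TDDFT calculation of the paper; in particular, the hysteresis width w is an invented parameter standing in for the back-flow of charge after the potential jump:

```python
# Toy relaxation-oscillator caricature of dynamical Coulomb blockade: charge builds up,
# the effective potential jumps when the occupation hits an integer, charge flows back
# out, the potential drops again, and the cycle repeats.
import numpy as np

dt, t_max = 1e-3, 10.0
gamma, bias, delta = 1.0, 0.6, 1.0   # tunneling rate, bias, size of the potential jump
w = 0.2                              # assumed hysteresis width of the jump

n, blocked, trace = 0.4, False, []
for _ in range(int(t_max / dt)):
    if not blocked and n >= 1.0:
        blocked = True               # occupation reached an integer: potential jumps up
    elif blocked and n <= 1.0 - w:
        blocked = False              # enough charge flowed out: potential jumps back down
    v_eff = delta if blocked else 0.0
    n += gamma * (bias - v_eff) * dt # charge flows in while bias > v_eff, out otherwise
    trace.append(n)

trace = np.array(trace)
half = len(trace) // 2
print("average occupation   :", round(float(trace[half:].mean()), 3))
print("oscillation amplitude:", round(float(trace[half:].ptp()), 3))
```

Even this crude model reproduces the qualitative picture: the occupation saws back and forth around an integer value instead of settling into a steady state.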
The real-time time-dependent DFT approach for transport opens up many new research avenues. On the theoretical side, one obvious task is to go beyond simple model systems and simulate transport using more realistic descriptions of nanoscale junctions—say, putting a real molecule or nanocrystal between actual metallic leads. The development and application of exchange-correlation potentials with discontinuities remains a key issue [8]; it will also be of interest to see how the dynamical Coulomb blockade is affected by the presence of dissipation, e.g., as caused by memory-dependent exchange-correlation potentials [9, 10].
The potential practical applications span a wide range of systems such as single-electron transistors and other nanoscale logic or memory devices. Whenever these devices are switched rapidly (on timescales shorter than a picosecond), transient and non-steady-state effects such as the ones considered here will become relevant. Thanks to the work by Kurth et al. we now have the theoretical and computational tools to simulate from first principles what these processes will look like.
2. M. Di Ventra, Electrical Transport in Nanoscale Systems (Cambridge University Press, Cambridge, 2008)[Amazon][WorldCat]
3. M. Koentopp, C. Chang, K. Burke, and R. Car, J. Phys. Condens. Matter 20, 083203 (2008)
4. W. Kohn, Rev. Mod. Phys. 71, 1253 (1999)
5. C. Toher, A. Filippetti, S. Sanvito, and K. Burke, Phys. Rev. Lett. 95, 146402 (2005)
6. Time-dependent density functional theory, Lecture Notes in Physics Vol 706, edited by M. A. L. Marques, C. A. Ullrich, F. Nogueira, A. Rubio, K. Burke, and E. K. U. Gross (Springer, Berlin, 2006)[Amazon][WorldCat]
7. N. A. Lima, M. F. Silva, L. N. Oliveira, and K. Capelle, Phys. Rev. Lett. 90, 146402 (2003)
8. D. Vieira, K. Capelle, and C. A. Ullrich, Phys. Chem. Chem. Phys. 11, 4647 (2009)
9. N. Sai, M. Zwolak, G. Vignale, and M. Di Ventra, Phys. Rev. Lett. 94, 186810 (2005)
10. H. O. Wijewardane and C. A. Ullrich, Phys. Rev. Lett. 95, 086401 (2005)
About the Author
Carsten A. Ullrich is an Associate Professor of Physics at the University of Missouri-Columbia. He obtained his Ph.D. in theoretical physics in 1995 from the University of Würzburg, Germany. His main research interests are the fundamental and practical aspects of time-dependent density-functional theory, with an emphasis on transport, collective excitations, and excitonic effects in bulk and nanostructured semiconductor materials. In 2009 he was recognized as an Outstanding Referee by the American Physical Society.
|
cb6e5710ab3506a3 |
I am reading Arthur Jaffe's Introduction to Quantum Field Theory. (You can find it here.) There is an interesting question posed in Exercise 2.5.1:
Solutions to the Klein-Gordon equation propagate with finite speed. But $f^t$ instantly spreads from its localization at the origin [...] Does this fact not contradict the laws of special relativity?
$f^t$ is defined in equation (2.30) and it is explicitly a solution to the KG equation, see (2.32).
Is the correct answer that $f^t$ is not a propagating solution? I.e., the group velocity isn't well-defined so nothing can be said to propagate/violate finite speed information flow? Or is there something more subtle here?
I don't think you're really answering Arthur's question. Would you agree that according to relativity, if the signals/perturbations are confined to a small region at time $t$, they should be confined to a region at most $dt\cdot c$ further at time $t+dt$? Right or wrong? Is it supposed to be true for $f$ being nonzero in a region? Hasn't Arthur found a way to violate this condition of relativity? If he has, does it contradict the relativistic invariance of the full KG theory? The first-quantized one? The second-quantized one? Can you use it to instantly affect remote places in QFT? Class. FT? – Luboš Motl Apr 11 '13 at 13:18
Your "solution" just tries to overlook Arthur's way of violating relativity by claiming that it doesn't agree with your way how relativity could be violated. But that doesn't matter. Your way isn't necessarily the only way how relativity could be violated. In particular, it's wrong if you claim that the group velocity must be well-defined in any situation that violates relativity. Arthur has presented you with something that could be interpreted as (ultimately wrong) proof that in KG equations, one may send signals to spacelike separated regions, after all. What's wrong with this proof? – Luboš Motl Apr 11 '13 at 13:24
And no, the fact that Arthur's construction doesn't agree with your ad hoc excuse that "disproofs of relativity should have well-defined group velocities" clearly can't be a valid reason why Arthur's proof of relativity is wrong. You must know very well that your solution isn't a solution, it's just an unjustifiable excuse to "eliminate" Arthur's proof although you don't see anything wrong with it. Why are you talking about more subtle things if you clearly don't understand even the non-subtle ones? – Luboš Motl Apr 11 '13 at 13:26
1 Answer
Please, if this is a homework from Arthur to you and if you will use any information in the answer below, indicate it clearly in your answer (and send my best regards to Arthur).
Your comments about the group velocity have nothing to do with the right resolution to Arthur's "would-be violation of relativity". The right solution is that ${\mathfrak f}(\vec x,t)$ solves the first-order (in time) Schrödinger equation (2.5). As sketched in (2.34) etc., the solution immediately becomes nonzero at time $t+dt$ everywhere in space even though ${\mathfrak f}=0$ held everywhere except for a small region at time $t$.
However, because (2.5) is a first-order equation that is nonlocal in space, it follows that even $\partial {\mathfrak f}/\partial t$ is nonzero (almost) everywhere in space at time $t+dt$. This first time derivative is proportional to the action of the nonlocal Hamiltonian $H$ on the Gothic $f$.
This fact prevents us from saying that the original "impulse" was confined to the small region. To claim so in the context of solutions to the actual second-order (in time) Klein-Gordon equation, we need all the initial conditions, i.e. both ${\mathfrak f}$ and $\partial {\mathfrak f}/\partial t$ (canonical coordinates and canonical momenta), to vanish everywhere outside the small region. The latter doesn't vanish at the initial time $t$, so it's not true that the initial "impulse" was confined to the small region, and it's therefore allowed for the responses to appear everywhere in space at $t+dt$, too.
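For concreteness (this summary is mine, not part of the original exchange): the positive-frequency solution evolves under a first-order equation with a square-root Hamiltonian that is nonlocal in space,
\[ i\hbar\,\frac{\partial}{\partial t}{\mathfrak f}(\vec x,t) = \sqrt{-\hbar^2 c^2 \nabla^2 + m^2 c^4}\;{\mathfrak f}(\vec x,t), \]
whereas finite-speed propagation for the second-order Klein-Gordon equation is a statement about Cauchy data: only if both \({\mathfrak f}\) and \(\partial {\mathfrak f}/\partial t\) vanish outside a region at time \(t\) is the solution guaranteed to vanish outside the correspondingly enlarged light cone at later times.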
If one could construct any solutions that vanish and whose first time derivative vanish everywhere outside a small region at time $t$, but that are nonzero arbitrarily far at the time $t+dt$, it would prove that the theory violates relativity, regardless of any additional excuses about ill-defined group velocities or anything else.
Thanks for this answer. I'm just a non-specialist trying to understand these notes. From your answer I understand that positive frequency solutions, f^t, can't be prepared locally. Any local disturbance in the field must then but a superposition of positive and negative energy solutions. Do you know if the same true for field detection? If I measure a field locally, must I be measuring some mixed +/- superposition solution? I'm confused because in quantum optics (the Glauber theory) the positive frequency part of the field operator is used to annihilate a particle at the spacetime point (x,t). – Jase Uknow Apr 11 '13 at 14:13
Hi! Good if it's no homework or exam, but the downside is that I can't send greetings to Arthur in this way. ;-) Indeed, not only in quantum optics but also in QFT, the negative/positive frequency modes are really split by producing creation/annihilation operators from the two groups, respectively. However, this doesn't imply that you can't measure certain observables. You may measure whatever (Hermitian) operator you want, anywhere. These statements about what you can prepare and what you may measure aren't analogous in any sense. – Luboš Motl Apr 11 '13 at 16:52
The first claim, here, says that a localized packet inevitably contains both positive and negative energy modes as contributions. But that's a statement about which states may exist and what are their "localization" properties in space. This has nothing to do with the question which obserables may be measured. – Luboš Motl Apr 11 '13 at 16:54
|
2aa08488849bec81 |
Hydrogen, 1H
[Image: hydrogen discharge tube, showing the purple glow of its plasma state]
[Image: spectral lines of hydrogen]
General properties
Name, symbol Hydrogen, H
Pronunciation /ˈhaɪdrədʒən/[1]
Appearance colorless gas
Atomic number 1
Standard atomic weight 1.008(1)
Element category diatomic nonmetal, could be considered metalloid
Group, block group 1, s-block
Period period 1
Electron configuration 1s1
per shell 1
Physical properties
Color colorless
Phase gas
Density at stp (0 °C and 101.325 kPa) 0.08988 g·L−1
liquid, at m.p. 0.07 g·cm−3 (solid: 0.0763 g·cm−3)[2]
liquid, at b.p. 0.07099 g·cm−3
Triple point 13.8033 K, 7.041 kPa
Critical point 32.938 K, 1.2858 MPa
Heat of fusion (H2) 0.117 kJ·mol−1
Heat of vaporization (H2) 0.904 kJ·mol−1
Atomic properties
Oxidation states 1, −1 (an amphoteric oxide)
Electronegativity Pauling scale: 2.20
Ionization energies 1st: 1312.0 kJ·mol−1
Covalent radius 31±5 pm
Van der Waals radius 120 pm
Crystal structure hexagonal
Speed of sound 1310 m·s−1 (gas, 27 °C)
Thermal conductivity 0.1805 W·m−1·K−1
Magnetic ordering diamagnetic[3]
CAS number 1333-74-0
Discovery Henry Cavendish[4][5] (1766)
Named by Antoine Lavoisier[6] (1783)
Most stable isotopes
Main article: Isotopes of Hydrogen
isotope · natural abundance · half-life · decay mode · decay energy (MeV) · decay product
1H 99.9885% 1H is stable with 0 neutrons
2H 0.0115% 2H is stable with 1 neutron
3H trace 12.32 y β− 0.01861 3He
Hydrogen is a chemical element with chemical symbol H and atomic number 1. With an atomic weight of 1.00794 u, hydrogen is the lightest element on the periodic table. Its monatomic form (H) is the most abundant chemical substance in the universe, constituting roughly 75% of all baryonic mass.[7][note 1] Non-remnant stars are mainly composed of hydrogen in its plasma state. The most common isotope of hydrogen, termed protium (name rarely used, symbol 1H), has a single proton and zero neutrons.
The universal emergence of atomic hydrogen first occurred during the recombination epoch. At standard temperature and pressure, hydrogen is a colorless, odorless, tasteless, non-toxic, nonmetallic, highly combustible diatomic gas with the molecular formula H2. Since hydrogen readily forms covalent compounds with most non-metallic elements, most of the hydrogen on Earth exists in molecular forms such as in the form of water or organic compounds. Hydrogen plays a particularly important role in acid–base reactions as many acid-base reactions involve the exchange of protons between soluble molecules. In ionic compounds, hydrogen can take the form of a negative charge (i.e., anion) known as a hydride, or as a positively charged (i.e., cation) species denoted by the symbol H+. The hydrogen cation is written as though composed of a bare proton, but in reality, hydrogen cations in ionic compounds are always more complex species than that would suggest. As the only neutral atom for which the Schrödinger equation can be solved analytically, study of the energetics and bonding of the hydrogen atom has played a key role in the development of quantum mechanics.
Hydrogen gas was first artificially produced in the early 16th century, via the mixing of metals with acids. In 1766–81, Henry Cavendish was the first to recognize that hydrogen gas was a discrete substance,[8] and that it produces water when burned, a property which later gave it its name: in Greek, hydrogen means "water-former".
Industrial production is mainly from the steam reforming of natural gas, and less often from more energy-intensive hydrogen production methods like the electrolysis of water.[9] Most hydrogen is employed near its production site, with the two largest uses being fossil fuel processing (e.g., hydrocracking) and ammonia production, mostly for the fertilizer market.
Hydrogen is a concern in metallurgy as it can embrittle many metals,[10] complicating the design of pipelines and storage tanks.[11]
Hydrogen gas (dihydrogen or molecular hydrogen)[12] is highly flammable and will burn in air at a very wide range of concentrations between 4% and 75% by volume.[13] The enthalpy of combustion for hydrogen is −286 kJ/mol:[14]
2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (286 kJ/mol)
Hydrogen gas forms explosive mixtures with air if it is 4–74% concentrated and with chlorine if it is 5–95% concentrated. The mixtures may be ignited by spark, heat or sunlight. The hydrogen autoignition temperature, the temperature of spontaneous ignition in air, is 500 °C (932 °F).[15] Pure hydrogen-oxygen flames emit ultraviolet light and with high oxygen mix are nearly invisible to the naked eye, as illustrated by the faint plume of the Space Shuttle Main Engine compared to the highly visible plume of a Space Shuttle Solid Rocket Booster. The detection of a burning hydrogen leak may require a flame detector; such leaks can be very dangerous. Hydrogen flames in other conditions are blue, resembling blue natural gas flames.[16] The destruction of the Hindenburg airship was an infamous example of hydrogen combustion; the cause is debated, but the visible orange flames were the result of a rich mixture of hydrogen to oxygen combined with carbon compounds from the airship skin.
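A back-of-envelope consequence of the −286 kJ/mol figure (an illustrative calculation, not from the article text):

```python
# Specific energy of hydrogen combustion implied by the -286 kJ/mol enthalpy above
# (higher heating value, i.e. liquid water as the product).
dH = 286e3          # J released per mole of H2 burned
M_H2 = 2.016e-3     # kg per mole of H2
print(f"~{dH / M_H2 / 1e6:.0f} MJ/kg")   # ~142 MJ/kg, roughly three times gasoline per unit mass
```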
Electron energy levels
Main article: Hydrogen atom
A more accurate description of the hydrogen atom comes from a purely quantum mechanical treatment that uses the Schrödinger equation, Dirac equation or even the Feynman path integral formulation to calculate the probability density of the electron around the proton.[20] The most complicated treatments allow for the small effects of special relativity and vacuum polarization. In the quantum mechanical treatment, the electron in a ground state hydrogen atom has no angular momentum at all—an illustration of how the "planetary orbit" conception of electron motion differs from reality.
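As a concrete illustration of why the hydrogen atom is so central, the analytic solution of the Schrödinger equation for the Coulomb potential gives the familiar bound-state energies
\[ E_n = -\frac{m_e e^4}{8\varepsilon_0^2 h^2}\,\frac{1}{n^2} \approx -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1,2,3,\ldots \]
which reproduce the observed spectral series; the Balmer lines, for instance, correspond to transitions ending on \(n=2\).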
Elemental molecular forms
First tracks observed in liquid hydrogen bubble chamber at the Bevatron
The uncatalyzed interconversion between para and ortho H2 increases with increasing temperature; thus rapidly condensed H2 contains large quantities of the high-energy ortho form that converts to the para form very slowly.[25] The ortho/para ratio in condensed H2 is an important consideration in the preparation and storage of liquid hydrogen: the conversion from ortho to para is exothermic and produces enough heat to evaporate some of the hydrogen liquid, leading to loss of liquefied material. Catalysts for the ortho-para interconversion, such as ferric oxide, activated carbon, platinized asbestos, rare earth metals, uranium compounds, chromic oxide, or some nickel[26] compounds, are used during hydrogen cooling.[27]
Further information: Hydrogen compounds
Covalent and organic compounds
While H2 is not very reactive under standard conditions, it does form compounds with most elements. Hydrogen can form compounds with elements that are more electronegative, such as halogens (e.g., F, Cl, Br, I), or oxygen; in these compounds hydrogen takes on a partial positive charge.[28] When bonded to fluorine, oxygen, or nitrogen, hydrogen can participate in a form of medium-strength noncovalent bonding called hydrogen bonding, which is critical to the stability of many biological molecules.[29][30] Hydrogen also forms compounds with less electronegative elements, such as the metals and metalloids, in which it takes on a partial negative charge. These compounds are often known as hydrides.[31]
Hydrogen forms a vast array of compounds with carbon called the hydrocarbons, and an even vaster array with heteroatoms that, because of their general association with living things, are called organic compounds.[32] The study of their properties is known as organic chemistry[33] and their study in the context of living organisms is known as biochemistry.[34] By some definitions, "organic" compounds are only required to contain carbon. However, most of them also contain hydrogen, and because it is the carbon-hydrogen bond which gives this class of compounds most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions of the word "organic" in chemistry.[32] Millions of hydrocarbons are known, and they are usually formed by complicated synthetic pathways, which seldom involve elementary hydrogen.
An exception is beryllium hydride (BeH2), which is polymeric. In lithium aluminium hydride, the AlH4− anion carries hydridic centers firmly attached to the aluminium.
Although hydrides can be formed with almost all main-group elements, the number and combination of possible compounds varies widely; for example, there are over 100 binary borane hydrides known, but only one binary aluminium hydride.[36] Binary indium hydride has not yet been identified, although larger complexes exist.[37]
Protons and acids
Further information: Acid–base reaction
A bare proton, H+, cannot exist in solution or in ionic crystals because of its strong attraction to other atoms or molecules with electrons; in aqueous solution it attaches to a water molecule, forming the oxonium ion, H3O+.[39] Other oxonium ions are found when water is in acidic solution with other solvents.[40] Although exotic on Earth, one of the most common ions in the universe is the H3+ ion, known as protonated molecular hydrogen or the trihydrogen cation.[41]
Main article: Isotopes of hydrogen
[Image: hydrogen discharge (spectrum) tube]
[Image: deuterium discharge (spectrum) tube]
Hydrogen has three naturally occurring isotopes, denoted 1H, 2H and 3H. Other, highly unstable nuclei (4H to 7H) have been synthesized in the laboratory but not observed in nature.[42][43]
• 1H is the most common hydrogen isotope, with an abundance of more than 99.98%. Because its nucleus consists of a single proton, it is given the descriptive but rarely used formal name protium.
• 2H, the other stable hydrogen isotope, is known as deuterium and contains one proton and one neutron in its nucleus. Deuterated solvents are used in 1H-NMR spectroscopy.[45] Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion.[46]
• 3H is known as tritium and contains one proton and two neutrons in its nucleus. It is radioactive, decaying into helium-3 through beta decay with a half-life of 12.32 years.[38] It is so radioactive that it can be used in luminous paint, making it useful in such things as watches. The glass prevents the small amount of radiation from getting out.[47] Small amounts of tritium occur naturally because of the interaction of cosmic rays with atmospheric gases; tritium has also been released during nuclear weapons tests.[48] It is used in nuclear fusion reactions,[49] as a tracer in isotope geochemistry,[50] and in specialized self-powered lighting devices.[51] Tritium has also been used in chemical and biological labeling experiments as a radiolabel.[52]
The symbols D and T (instead of 2H and 3H) are sometimes used for deuterium and tritium, but the corresponding symbol for protium, P, is already in use for phosphorus and thus is not available for protium.[53] In its nomenclatural guidelines, the International Union of Pure and Applied Chemistry allows any of D, T, 2H, and 3H to be used, although 2H and 3H are preferred.[54]
Discovery and use
In 1671, Robert Boyle discovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas.[55][56] In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, by naming the gas from a metal-acid reaction "flammable air". He speculated that "flammable air" was in fact identical to the hypothetical substance called "phlogiston"[57][58] and further finding in 1781 that the gas produces water when burned. He is usually given credit for its discovery as an element.[4][5] In 1783, Antoine Lavoisier gave the element the name hydrogen (from the Greek ὑδρο- hydro meaning "water" and -γενής genes meaning "creator")[6] when he and Laplace reproduced Cavendish's finding that water is produced when hydrogen is burned.[5]
Antoine-Laurent de Lavoisier
Fe + H2O → FeO + H2
2 Fe + 3 H2O → Fe2O3 + 3 H2
3 Fe + 4 H2O → Fe3O4 + 4 H2
Hydrogen was liquefied for the first time by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask.[5] He produced solid hydrogen the next year.[5] Deuterium was discovered in December 1931 by Harold Urey, and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck.[4] Heavy water, which consists of deuterium in the place of regular hydrogen, was discovered by Urey's group in 1932.[5] François Isaac de Rivaz built the first de Rivaz engine, an internal combustion engine powered by a mixture of hydrogen and oxygen in 1806. Edward Daniel Clarke invented the hydrogen gas blowpipe in 1819. The Döbereiner's lamp and limelight were invented in 1823.[5]
The first hydrogen-filled balloon was invented by Jacques Charles in 1783.[5] Hydrogen provided the lift for the first reliable form of air-travel following the 1852 invention of the first hydrogen-lifted airship by Henri Giffard.[5] German count Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen that later were called Zeppelins; the first of which had its maiden flight in 1900.[5] Regularly scheduled flights started in 1910 and by the outbreak of World War I in August 1914, they had carried 35,000 passengers without a serious incident. Hydrogen-lifted airships were used as observation platforms and bombers during the war.
The first hydrogen-cooled turbogenerator went into service in 1937 at Dayton, Ohio, operated by the Dayton Power & Light Co., with gaseous hydrogen as a coolant in the rotor and the stator;[59] because of the high thermal conductivity of hydrogen gas, this is the most common type in its field today.
The nickel hydrogen battery was used for the first time in 1977 aboard the U.S. Navy's Navigation technology satellite-2 (NTS-2).[60] For example, the ISS,[61] Mars Odyssey[62] and the Mars Global Surveyor[63] are equipped with nickel-hydrogen batteries. In the dark part of its orbit, the Hubble Space Telescope is also powered by nickel-hydrogen batteries, which were finally replaced in May 2009,[64] more than 19 years after launch, and 13 years over their design life.[65]
Role in quantum theory
Because of its relatively simple atomic structure, consisting only of a proton and an electron, the hydrogen atom, together with the spectrum of light produced from it or absorbed by it, has been central to the development of the theory of atomic structure.[66] Furthermore, study of the corresponding simplicity of the hydrogen molecule and the corresponding cation H2+ brought understanding of the nature of the chemical bond, which followed shortly after the quantum mechanical treatment of the hydrogen atom had been developed in the mid-1920s.
Natural occurrence
Hydrogen, as atomic H, is the most abundant chemical element in the universe, making up 75% of normal matter by mass and over 90% by number of atoms (most of the mass of the universe, however, is not in the form of chemical-element type matter, but rather is postulated to occur as yet-undetected forms of mass such as dark matter and dark energy).[68] This element is found in great abundance in stars and gas giant planets. Molecular clouds of H2 are associated with star formation. Hydrogen plays a vital role in powering stars through the proton-proton reaction and the CNO cycle nuclear fusion.[69]
Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas, H2. However, hydrogen gas is very rare in the Earth's atmosphere (1 ppm by volume) because of its light weight, which enables it to escape from Earth's gravity more easily than heavier gases. However, hydrogen is the third most abundant element on the Earth's surface,[71] mostly in the form of chemical compounds such as hydrocarbons and water.[38] Hydrogen gas is produced by some bacteria and algae and is a natural component of flatus, as is methane, itself a hydrogen source of increasing importance.[72]
A molecular form called protonated molecular hydrogen (H3+) is one of the most abundant ions in the Universe, and it plays a notable role in the chemistry of the interstellar medium.[73] Neutral triatomic hydrogen H3 can only exist in an excited form and is unstable.[74] By contrast, the positive hydrogen molecular ion (H2+) is a rare molecule in the universe.
Main article: Hydrogen production
In the laboratory, H2 is usually prepared by the reaction of dilute non-oxidizing acids on some reactive metals such as zinc:
Zn + 2 H+ → Zn2+ + H2
Aluminium can also produce H2 upon treatment with bases:
2 Al + 6 H2O + 2 OH− → 2 Al(OH)4− + 3 H2
Electrolysis of water is a simple method of producing hydrogen:
2 H2O(l) → 2 H2(g) + O2(g)
Steam reforming
Hydrogen can be prepared in several different ways, but economically the most important processes involve removal of hydrogen from hydrocarbons. Commercial bulk hydrogen is usually produced by the steam reforming of natural gas.[77] At high temperatures (1000–1400 K, 700–1100 °C or 1300–2000 °F), steam (water vapor) reacts with methane to yield carbon monoxide and H2:
CH4 + H2O → CO + 3 H2
The process is prone to an unwanted side reaction that deposits coke:
CH4 → C + 2 H2
Consequently, steam reforming typically employs an excess of H2O. Additional hydrogen can be recovered from the carbon monoxide through the water-gas shift reaction:
CO + H2O → CO2 + H2
Other important methods for H2 production include partial oxidation of hydrocarbons:[78]
2 CH4 + O2 → 2 CO + 4 H2
and the coal reaction, which can serve as a prelude to the shift reaction above:[77]
C + H2O → CO + H2
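Combining the reforming and shift steps above gives the overall stoichiometry CH4 + 2 H2O → CO2 + 4 H2. A trivial mass-balance check of the ideal hydrogen yield (illustrative only):

```python
# Ideal H2 yield of steam reforming plus water-gas shift: CH4 + 2 H2O -> CO2 + 4 H2.
M_CH4, M_H2 = 16.04, 2.016            # molar masses, g/mol
mol_CH4 = 1000.0 / M_CH4              # moles of CH4 in one kilogram of feed
mass_H2 = 4 * mol_CH4 * M_H2 / 1000   # kg of H2 per kg of CH4, assuming complete conversion
print(f"{mass_H2:.2f} kg H2 per kg CH4")   # ~0.50 kg
```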
Hydrogen is sometimes produced and consumed in the same industrial process, without being separated. In the Haber process for the production of ammonia, hydrogen is generated from natural gas.[79] Electrolysis of brine to yield chlorine also produces hydrogen as a co-product.[80]
Anaerobic corrosion
Under anaerobic conditions, iron is slowly oxidized by the protons of water, releasing molecular hydrogen:
Fe + 2 H2O → Fe(OH)2 + H2
The ferrous hydroxide formed in this way can in turn be converted into magnetite, with release of further hydrogen (the Schikorr reaction):
3 Fe(OH)2 → Fe3O4 + 2 H2O + H2
ferrous hydroxide → magnetite + water + hydrogen
The well-crystallized magnetite (Fe3O4) is thermodynamically more stable than the ferrous hydroxide.
Geological occurrence: the serpentinization reaction
In the absence of atmospheric oxygen (O2), hydrogen is produced deep underground during the process of serpentinization. The reaction leading to the formation of magnetite (Fe3O4), quartz (SiO2) and hydrogen (H2) is the following:
3 Fe2SiO4 + 2 H2O → 2 Fe3O4 + 3 SiO2 + 3 H2
fayalite + water → magnetite + quartz + hydrogen
Formation in transformers
Consumption in processes
Large quantities of H2 are needed in the petroleum and chemical industries, and H2 has several other important uses; for example, it is used as a reducing agent of metallic ores.[86]
Hydrogen is highly soluble in many rare earth and transition metals[87] and is soluble in both nanocrystalline and amorphous metals.[88] Hydrogen solubility in metals is influenced by local distortions or impurities in the crystal lattice.[89] These properties may be useful when hydrogen is purified by passage through hot palladium disks, but the gas's high solubility is a metallurgical problem, contributing to the embrittlement of many metals,[10] complicating the design of pipelines and storage tanks.[11]
Apart from its use as a reactant, H2 has wide applications in physics and engineering. It is used as a shielding gas in welding methods such as atomic hydrogen welding.[90][91] H2 is used as the rotor coolant in electrical generators at power stations, because it has the highest thermal conductivity of any gas. Liquid H2 is used in cryogenic research, including superconductivity studies.[92] Because H2 is lighter than air, it was also once widely used as a lifting gas in balloons and airships.
In more recent applications, hydrogen is used pure or mixed with nitrogen (sometimes called forming gas) as a tracer gas for minute leak detection. Applications can be found in the automotive, chemical, power generation, aerospace, and telecommunications industries.[94] Hydrogen is an authorized food additive (E 949) that allows food package leak testing among other anti-oxidizing properties.[95]
Hydrogen's rarer isotopes also each have specific applications. Deuterium (hydrogen-2) is used in nuclear fission applications as a moderator to slow neutrons, and in nuclear fusion reactions.[5] Deuterium compounds have applications in chemistry and biology in studies of reaction isotope effects.[96] Tritium (hydrogen-3), produced in nuclear reactors, is used in the production of hydrogen bombs,[97] as an isotopic label in the biosciences,[52] and as a radiation source in luminous paints.[98]
Energy carrier
Hydrogen is not an energy resource,[100] except in the hypothetical context of commercial nuclear fusion power plants using deuterium or tritium, a technology presently far from development.[101] The Sun's energy comes from nuclear fusion of hydrogen, but this process is difficult to achieve controllably on Earth.[102] Elemental hydrogen from solar, biological, or electrical sources require more energy to make it than is obtained by burning it, so in these cases hydrogen functions as an energy carrier, like a battery. Hydrogen may be obtained from fossil sources (such as methane), but these sources are unsustainable.[100]
The energy density per unit volume of both liquid hydrogen and compressed hydrogen gas at any practicable pressure is significantly less than that of traditional fuel sources, although the energy density per unit fuel mass is higher.[100] Nevertheless, elemental hydrogen has been widely discussed in the context of energy, as a possible future carrier of energy on an economy-wide scale.[103] For example, CO2 sequestration followed by carbon capture and storage could be conducted at the point of H2 production from fossil fuels.[104] Hydrogen used in transportation would burn relatively cleanly, with some NOx emissions,[105] but without carbon emissions.[104] However, the infrastructure costs associated with full conversion to a hydrogen economy would be substantial.[106]
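The volumetric-versus-gravimetric trade-off mentioned above can be made quantitative with rough, commonly quoted figures (the values below are approximate and chosen for illustration, not taken from this article):

```python
# Approximate energy densities: hydrogen wins per kilogram but loses per litre.
# Figures are rounded, commonly quoted lower-heating-value estimates.
fuels = {
    "liquid H2": (120.0, 70.8),     # (MJ/kg, kg/m^3), liquid hydrogen at ~20 K
    "gasoline":  (44.0, 745.0),     # typical gasoline
}
for name, (lhv, rho) in fuels.items():
    print(f"{name:9s}: {lhv:6.1f} MJ/kg, {lhv * rho / 1000:5.1f} GJ/m^3")
```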
Semiconductor industry
Hydrogen is employed to saturate broken ("dangling") bonds of amorphous silicon and amorphous carbon that helps stabilizing material properties.[107] It is also a potential electron donor in various oxide materials, including ZnO,[108][109] SnO2, CdO, MgO,[110] ZrO2, HfO2, La2O3, Y2O3, TiO2, SrTiO3, LaAlO3, SiO2, Al2O3, ZrSiO4, HfSiO4, and SrZrO3.[111]
Biological reactions
Water splitting, in which water is decomposed into its component protons, electrons, and oxygen, occurs in the light reactions in all photosynthetic organisms. Some such organisms, including the alga Chlamydomonas reinhardtii and cyanobacteria, have evolved a second step in the dark reactions in which protons and electrons are reduced to form H2 gas by specialized hydrogenases in the chloroplast.[113] Efforts have been undertaken to genetically modify cyanobacterial hydrogenases to efficiently synthesize H2 gas even in the presence of oxygen.[114] Efforts have also been undertaken with genetically modified alga in a bioreactor.[115]
Safety and precautions
Main article: Hydrogen safety
Hydrogen poses a number of hazards to human safety, from potential detonations and fires when mixed with air to being an asphyxiant in its pure, oxygen-free form.[116] In addition, liquid hydrogen is a cryogen and presents dangers (such as frostbite) associated with very cold liquids.[117] Hydrogen dissolves in many metals, and, in addition to leaking out, may have adverse effects on them, such as hydrogen embrittlement,[118] leading to cracks and explosions.[119] Hydrogen gas leaking into external air may spontaneously ignite. Moreover, hydrogen fire, while being extremely hot, is almost invisible, and thus can lead to accidental burns.[120]
See also
1. ^ Simpson, J.A.; Weiner, E.S.C. (1989). "Hydrogen". Oxford English Dictionary 7 (2nd ed.). Clarendon Press. ISBN 0-19-861219-2.
8. ^ Presenter: Professor Jim Al-Khalili (21 January 2010). "Discovering the Elements". Chemistry: A Volatile History. 25:40 minutes in. BBC. BBC Four. http://www.bbc.co.uk/programmes/b00q2mk5.
11. ^ a b Christensen, C.H.; Nørskov, J.K.; Johannessen, T. (9 July 2005). "Making society independent of fossil fuels — Danish researchers reveal new technology". Technical University of Denmark. Retrieved 28 March 2008. [dead link]
16. ^ hydrogen flame visibility
18. ^ Millar, Tom (10 December 2003). "Lecture 7, Emission Lines — Examples". PH-3009 (P507/P706/M324) Interstellar Physics. University of Manchester. Retrieved 5 February 2008. [dead link]
20. ^ Stern, David P. (13 February 2005). "Wave Mechanics". NASA Goddard Space Flight Center. Retrieved 16 April 2008.
26. ^ "Ortho-Para conversion. Pag. 13" (PDF). [dead link]
29. ^ Kimball, John W. (7 August 2003). "Hydrogen". Kimball's Biology Pages. Retrieved 4 March 2008.
30. ^ IUPAC Compendium of Chemical Terminology, Electronic version, Hydrogen Bond
31. ^ Sandrock, Gary (2 May 2002). "Metal-Hydrogen Systems". Sandia National Laboratories. Retrieved 23 March 2008.
33. ^ "Organic Chemistry". Dictionary.com. Lexico Publishing Group. 2008. Retrieved 23 March 2008.
34. ^ "Biochemistry". Dictionary.com. Lexico Publishing Group. 2008. Retrieved 23 March 2008.
36. ^ Downs, Anthony J.; Pulham, Colin R. (1994). "The hydrides of aluminium, gallium, indium, and thallium: a re-evaluation". Chemical Society Reviews 23 (3): 175–184. doi:10.1039/CS9942300175.
46. ^ Broad, William J. (11 November 1991). "Breakthrough in Nuclear Fusion Offers Hope for Power of Future". The New York Times. Retrieved 12 February 2008.
47. ^ The Elements, Theodore Gray, Black Dog & Leventhal Publishers Inc., 2009
50. ^ Kendall, Carol; Caldwell, Eric (1998). "Fundamentals of Isotope Geochemistry". US Geological Survey. Retrieved 8 March 2008.
53. ^ van der Krogt, Peter (5 May 2005). "Hydrogen". Elementymology & Elements Multidict. Retrieved 20 December 2010.
55. ^ Boyle, Robert "Tracts written by the Honourable Robert Boyle containing new experiments, touching the relation betwixt flame and air..." (London, England: 1672).
57. ^ "Why did oxygen supplant phlogiston? Research programmes in the Chemical Revolution – Cambridge Books Online – Cambridge University Press". Retrieved 22 October 2011.
58. ^ Just the Facts—Inventions & Discoveries, School Specialty Publishing, 2005
59. ^ "A chronological history of electrical development from 600 B.C". Archive.org. Retrieved 6 April 2009.
60. ^ "NTS-2 Nickel-Hydrogen Battery Performance 31". Aiaa.org. Retrieved 6 April 2009.
61. ^ Jannette, A.G.; Hojnicki, J.S.; McKissock, D.B.; Fincannon, J.; Kerslake, T.W.; Rodriguez, C.D. (July 2002). "Validation of international space station electrical performance model via on-orbit telemetry". IECEC '02. 2002 37th Intersociety Energy Conversion Engineering Conference, 2002: 45–50. doi:10.1109/IECEC.2002.1391972. ISBN 0-7803-7296-4. Retrieved 11 November 2011.
62. ^ Anderson, P.M.; Coyne, J.W. (2002). "A lightweight high reliability single battery power system for interplanetary spacecraft". Aerospace Conference Proceedings 5: 5–2433. doi:10.1109/AERO.2002.1035418. ISBN 0-7803-7231-X.
63. ^ "Mars Global Surveyor". Astronautix.com. Retrieved 6 April 2009.
64. ^ Hubble servicing mission 4 essentials
65. ^ Extending Hubble's mission life with new batteries
66. ^ Crepeau, Bob (1 January 2006). Niels Bohr: The Atomic Model. Great Scientific Minds (Great Neck Publishing). ISBN 1-4298-0723-7.
68. ^ Gagnon, Steve. "Hydrogen". Jefferson Lab. Retrieved 5 February 2008.
69. ^ Haubold, Hans; Mathai, A. M. (15 November 2007). "Solar Thermonuclear Energy Generation". Columbia University. Retrieved 12 February 2008.
70. ^ Storrie-Lombardi, Lisa J.; Wolfe, Arthur M. (2000). "Surveys for z > 3 Damped Lyman-alpha Absorption Systems: the Evolution of Neutral Gas". Astrophysical Journal 543 (2): 552–576. arXiv:astro-ph/0006044. Bibcode:2000ApJ...543..552S. doi:10.1086/317138.
72. ^ Berger, Wolfgang H. (15 November 2007). "The Future of Methane". University of California, San Diego. Retrieved 12 February 2008.
73. ^ McCall Group, Oka Group (22 April 2005). "H3+ Resource Center". Universities of Illinois and Chicago. Retrieved 5 February 2008.
74. ^ Helm, H. et al.. "Coupling of Bound States to Continuum States in Neutral Triatomic Hydrogen". Department of Molecular and Optical Physics, University of Freiburg, Germany. Retrieved 25 November 2009.
76. ^ Venere, Emil (15 May 2007). "New process generates hydrogen from aluminum alloy to run engines, fuel cells". Purdue University. Retrieved 5 February 2008.
82. ^ Perret, Robert. "Development of Solar-Powered Thermochemical Production of Hydrogen from Water, DOE Hydrogen Program, 2007" (PDF). Retrieved 17 May 2008.
84. ^ "Virginia Tech team develops process for high-yield production of hydrogen from xylose under mild conditions". Green Car Congress. 3 April 2013. doi:10.1002/anie.201300766. Retrieved 22 January 2014.
88. ^ Kirchheim, R.; Mutschele, T.; Kieninger, W.; Gleiter, H; Birringer, R; Koble, T (1988). "Hydrogen in amorphous and nanocrystalline metals". Materials Science and Engineering 99: 457–462. doi:10.1016/0025-5416(88)90377-1.
93. ^ Barnes, Matthew (2004). "LZ-129, Hindenburg". The Great Zeppelins. Retrieved 18 March 2008.
94. ^ Block, Matthias (3 September 2004). "Hydrogen as Tracer Gas for Leak Detection". 16th WCNDT 2004 (Montreal, Canada: Sensistor Technologies). Retrieved 25 March 2008.
96. ^ Reinsch, J; Katz, A; Wean, J; Aprahamian, G; MacFarland, JT (1980). "The deuterium isotope effect upon the reaction of fatty acyl-CoA dehydrogenase and butyryl-CoA". J. Biol. Chem. 255 (19): 9093–97. PMID 7410413.
99. ^ "International Temperature Scale of 1990" (PDF). Procès-Verbaux du Comité International des Poids et Mesures. 1989: T23–T42. Retrieved 25 March 2008.
100. ^ a b c McCarthy, John (31 December 1995). "Hydrogen". Stanford University. Retrieved 14 March 2008.
109. ^ Janotti, Anderson; Van De Walle, CG (2007). "Hydrogen multicentre bonds". Nature Materials 6 (1): 44–47. Bibcode:2007NatMa...6...44J. doi:10.1038/nmat1795. PMID 17143265.
110. ^ Kilic, Cetin; Zunger, Alex (2002). "n-type doping of oxides by hydrogen". Applied Physics Letters 81 (1): 73–75. Bibcode:2002ApPhL..81...73K. doi:10.1063/1.1482783.
115. ^ Williams, Chris (24 February 2006). "Pond life: the future of energy". Science (The Register). Retrieved 24 March 2008.
116. ^ a b Brown, W. J. et al. (1997). "Safety Standard for Hydrogen and Hydrogen Systems" (PDF). NASA. Retrieved 5 February 2008.
118. ^ "'Bugs' and hydrogen embrittlement". Science News (Washington, D.C.) 128 (3): 41. 20 July 1985. doi:10.2307/3970088. JSTOR 3970088.
120. ^ "Hydrogen Safety". Humboldt State University. Retrieved 14 April 2010. [dead link]
Further reading
• Ferreira-Aparicio, P; Benito, M. J.; Sanz, J. L. (2005). "New Trends in Reforming Technologies: from Hydrogen Industrial Plants to Multifuel Microreformers". Catalysis Reviews 47 (4): 491–588. doi:10.1080/01614940500364958.
• Scerri, Eric (2007). The Periodic System, Its Story and Its Significance,. New York: Oxford University Press. ISBN 0-19-530573-6.
Fokker–Planck equation
A solution to the one-dimensional Fokker–Planck equation, with both the drift and the diffusion term. In this case the initial condition is a Dirac delta function centered away from zero velocity. Over time the distribution widens due to random impulses.
In statistical mechanics, the Fokker–Planck equation is a partial differential equation that describes the time evolution of the probability density function of the velocity of a particle under the influence of drag forces and random forces, as in Brownian motion. The equation can be generalized to other observables as well.[1] It is named after Adriaan Fokker[2] and Max Planck[3] and is also known as the Kolmogorov forward equation, after Andrey Kolmogorov, who independently discovered the concept in 1931.[4] When applied to particle position distributions, it is better known as the Smoluchowski equation (after Marian Smoluchowski). The case with zero diffusion is known in statistical mechanics as the Liouville equation.
The first consistent microscopic derivation of the Fokker–Planck equation within a single scheme encompassing both classical and quantum mechanics was given[5] by Nikolay Bogoliubov and Nikolay Krylov.[6]
The Smoluchowski equation is the Fokker–Planck equation for the probability density function of the particle positions of Brownian particles.[7]
One dimension
In one spatial dimension x, for an Itō process driven by the standard Wiener process \(W_t\) and described by the stochastic differential equation (SDE)
\[ dX_t = \mu(X_t, t)\,dt + \sigma(X_t, t)\,dW_t, \]
with drift \(\mu(X_t, t)\) and diffusion coefficient \(D(X_t, t) = \sigma^2(X_t, t)\), the Fokker–Planck equation for the probability density \(p(x, t)\) of the random variable \(X_t\) is
\[ \frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}\left[\mu(x,t)\,p(x,t)\right] + \frac{1}{2}\frac{\partial^2}{\partial x^2}\left[\sigma^2(x,t)\,p(x,t)\right]. \]
While the Fokker–Planck equation is used for problems where the initial distribution is known and the distribution at later times is sought, the Feynman–Kac formula, which is a consequence of the Kolmogorov backward equation, can be used when the distribution at earlier times is required.
The stochastic process defined above in the Itō sense can be rewritten within the Stratonovich convention as a Stratonovich SDE:
\[ dX_t = \left[\mu(X_t,t) - \frac{1}{2}\,\sigma(X_t,t)\,\frac{\partial \sigma(X_t,t)}{\partial x}\right]dt + \sigma(X_t,t)\circ dW_t. \]
It includes an added noise-induced drift term due to diffusion gradient effects if the noise is state-dependent. This convention is more often used in physical applications. Indeed, any solution of the Stratonovich SDE is a solution of the corresponding Itō SDE with the drift adjusted accordingly.
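The noise-induced drift correction can be made concrete with a small symbolic sketch (not part of the original article; the chosen drift and noise functions are purely illustrative):

```python
# A minimal sketch of the Ito-to-Stratonovich drift conversion used above,
# mu_Strat = mu_Ito - (1/2) * sigma * d(sigma)/dx, for an assumed Ito drift
# mu(x) = -x and state-dependent noise sigma(x) = x.
import sympy as sp

x = sp.symbols('x', real=True)
mu_ito = -x          # assumed Ito drift
sigma = x            # assumed state-dependent noise amplitude

mu_strat = sp.simplify(mu_ito - sp.Rational(1, 2) * sigma * sp.diff(sigma, x))
print(mu_strat)      # -3*x/2: the extra -x/2 is the noise-induced drift term
```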
The zero-drift equation with constant diffusion can be considered as a model of classical Brownian motion:
\[ \frac{\partial p(x,t)}{\partial t} = \frac{\sigma^2}{2}\,\frac{\partial^2 p(x,t)}{\partial x^2}. \]
This model has a discrete spectrum of solutions if fixed boundary conditions are imposed on a finite interval \(0 \le x \le L\).
It has been shown[9] that in this case the analytical spectrum of solutions allows a local uncertainty relation for the coordinate–velocity phase volume to be derived.
Here the minimal value of the corresponding diffusion spectrum sets the scale of the relation, while the remaining quantities represent the uncertainties of the coordinate and velocity definition.
Many dimensions
More generally, if
\[ d\mathbf{X}_t = \boldsymbol{\mu}(\mathbf{X}_t, t)\,dt + \boldsymbol{\sigma}(\mathbf{X}_t, t)\,d\mathbf{W}_t, \]
where \(\mathbf{X}_t\) and \(\boldsymbol{\mu}(\mathbf{X}_t, t)\) are N-dimensional random vectors, \(\boldsymbol{\sigma}(\mathbf{X}_t, t)\) is an N×M matrix and \(\mathbf{W}_t\) is an M-dimensional standard Wiener process, the probability density \(p(\mathbf{x}, t)\) for \(\mathbf{X}_t\) satisfies the Fokker–Planck equation
\[ \frac{\partial p(\mathbf{x},t)}{\partial t} = -\sum_{i=1}^{N}\frac{\partial}{\partial x_i}\left[\mu_i(\mathbf{x},t)\,p(\mathbf{x},t)\right] + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\partial^2}{\partial x_i\,\partial x_j}\left[D_{ij}(\mathbf{x},t)\,p(\mathbf{x},t)\right], \]
with drift vector \(\boldsymbol{\mu} = (\mu_1, \ldots, \mu_N)\) and diffusion tensor \(D_{ij}(\mathbf{x},t) = \sum_{k=1}^{M}\sigma_{ik}(\mathbf{x},t)\,\sigma_{jk}(\mathbf{x},t)\).
If instead of an Itō SDE, a Stratonovich SDE is considered,
\[ d\mathbf{X}_t = \boldsymbol{\mu}(\mathbf{X}_t, t)\,dt + \boldsymbol{\sigma}(\mathbf{X}_t, t)\circ d\mathbf{W}_t, \]
the Fokker–Planck equation will read ([8] p. 129):
\[ \frac{\partial p(\mathbf{x},t)}{\partial t} = -\sum_{i}\frac{\partial}{\partial x_i}\left[\mu_i(\mathbf{x},t)\,p(\mathbf{x},t)\right] + \frac{1}{2}\sum_{k}\sum_{i}\frac{\partial}{\partial x_i}\left\{\sigma_{ik}(\mathbf{x},t)\sum_{j}\frac{\partial}{\partial x_j}\left[\sigma_{jk}(\mathbf{x},t)\,p(\mathbf{x},t)\right]\right\}. \]
Wiener process
A standard scalar Wiener process is generated by the stochastic differential equation
\[ dX_t = dW_t. \]
Here the drift term is zero and the diffusion coefficient is 1. Thus the corresponding Fokker–Planck equation is
\[ \frac{\partial p(x,t)}{\partial t} = \frac{1}{2}\frac{\partial^2 p(x,t)}{\partial x^2}, \]
which is the simplest form of a diffusion equation. If the initial condition is \(p(x,0) = \delta(x)\), the solution is
\[ p(x,t) = \frac{1}{\sqrt{2\pi t}}\,e^{-x^2/(2t)}. \]
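As a quick consistency check (not part of the original article), the quoted Gaussian solution can be verified symbolically:

```python
# A minimal sketch verifying that p(x,t) = exp(-x**2/(2t)) / sqrt(2 pi t)
# satisfies the diffusion form of the Fokker-Planck equation,
# dp/dt = (1/2) d^2 p / dx^2.
import sympy as sp

x = sp.symbols('x', real=True)
t = sp.symbols('t', positive=True)
p = sp.exp(-x**2 / (2 * t)) / sp.sqrt(2 * sp.pi * t)

residual = sp.simplify(sp.diff(p, t) - sp.Rational(1, 2) * sp.diff(p, x, 2))
print(residual)  # 0
```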
Ornstein–Uhlenbeck process
The Ornstein–Uhlenbeck process is a process defined as
\[ dX_t = -\theta X_t\,dt + \sigma\,dW_t \]
with \(\theta > 0\). The corresponding Fokker–Planck equation is
\[ \frac{\partial p(x,t)}{\partial t} = \theta\frac{\partial}{\partial x}\left[x\,p(x,t)\right] + \frac{\sigma^2}{2}\frac{\partial^2 p(x,t)}{\partial x^2}. \]
The stationary solution (\(\partial p/\partial t = 0\)) is
\[ p_{\mathrm{ss}}(x) = \sqrt{\frac{\theta}{\pi\sigma^2}}\,e^{-\theta x^2/\sigma^2}. \]
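A short simulation sketch (not part of the original article; the parameter values, time step and ensemble size are arbitrary illustrative choices) shows how the stationary Fokker–Planck solution emerges from direct integration of the SDE by the Euler–Maruyama method:

```python
# A minimal sketch: Euler-Maruyama simulation of dX = -theta*X dt + sigma dW,
# checking that the long-time sample variance approaches the stationary value
# sigma**2 / (2*theta) implied by the stationary density above.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 1.0, 0.5
dt, n_steps, n_paths = 1e-3, 20000, 5000

x = np.zeros(n_paths)
for _ in range(n_steps):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

print(x.var())                  # empirical stationary variance
print(sigma**2 / (2 * theta))   # analytic value: 0.125
```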
Plasma physics
In plasma physics, the distribution function for a particle species \(s\), \(f_s(\mathbf{x}, \mathbf{v}, t)\), takes the place of the probability density function. The corresponding Boltzmann equation is given by
\[ \frac{\partial f_s}{\partial t} + \mathbf{v}\cdot\nabla f_s + \frac{Z_s e}{m_s}\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\frac{\partial f_s}{\partial \mathbf{v}} = -\frac{\partial}{\partial v_i}\left(f_s\,\langle\Delta v_i\rangle\right) + \frac{1}{2}\frac{\partial^2}{\partial v_i\,\partial v_j}\left(f_s\,\langle\Delta v_i\,\Delta v_j\rangle\right), \]
where the third term on the left includes the particle acceleration due to the Lorentz force and the Fokker–Planck term on the right-hand side represents the effects of particle collisions. The quantities \(\langle\Delta v_i\rangle\) and \(\langle\Delta v_i\,\Delta v_j\rangle\) are the average change in velocity a particle of type \(s\) experiences due to collisions with all other particle species in unit time. Expressions for these quantities are given elsewhere.[10] If collisions are ignored, the Boltzmann equation reduces to the Vlasov equation.
Computational considerations
Brownian motion follows the Langevin equation, which can be solved for many different stochastic forcings with results being averaged (the Monte Carlo method, canonical ensemble in molecular dynamics). However, instead of this computationally intensive approach, one can use the Fokker–Planck equation and consider the probability \(p(v,t)\,dv\) of the particle having a velocity in the interval \((v, v+dv)\) when it starts its motion with velocity \(v_0\) at time 0.
Being a partial differential equation, the Fokker–Planck equation can be solved analytically only in special cases. A formal analogy of the Fokker–Planck equation with the Schrödinger equation allows the use of advanced operator techniques known from quantum mechanics for its solution in a number of cases. In many applications, one is only interested in the steady-state probability distribution \(p_{\mathrm{ss}}(x)\), which can be found from \(\partial p(x,t)/\partial t = 0\). The computation of mean first passage times and splitting probabilities can be reduced to the solution of an ordinary differential equation which is intimately related to the Fokker–Planck equation.
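When no analytic solution is available, the equation can also be integrated numerically. The following sketch (not part of the original article; the drift, noise strength, grid and time step are illustrative assumptions) advances a one-dimensional Fokker–Planck equation with an explicit finite-difference scheme and compares the long-time result with the known stationary density:

```python
# A minimal sketch: explicit finite-difference time stepping of
#   dp/dt = -d/dx [mu(x) p] + (1/2) sigma^2 d^2 p / dx^2
# for the Ornstein-Uhlenbeck drift mu(x) = -theta*x and constant sigma.
import numpy as np

theta, sigma = 1.0, 0.5
Dhalf = 0.5 * sigma**2             # the (1/2) sigma^2 factor in the equation
x = np.linspace(-4, 4, 401)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / Dhalf           # well inside the explicit-scheme stability limit

p = np.exp(-(x - 1.5)**2 / 0.02)   # narrow initial bump away from the origin
p /= np.trapz(p, x)

mu = -theta * x
for _ in range(int(5.0 / dt)):     # evolve to t = 5, several relaxation times 1/theta
    drift_term = -np.gradient(mu * p, dx)
    diff_term = Dhalf * np.gradient(np.gradient(p, dx), dx)
    p = p + dt * (drift_term + diff_term)
    p /= np.trapz(p, x)            # re-normalize against slow loss of total mass

p_stat = np.exp(-theta * x**2 / sigma**2)   # analytic stationary density (unnormalized)
p_stat /= np.trapz(p_stat, x)
print(np.max(np.abs(p - p_stat)))  # should be small
```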
Particular cases with known solution and inversion
In mathematical finance for volatility smile modeling of options via local volatility, one has the problem of deriving a diffusion coefficient consistent with a probability density obtained from market option quotes. The problem is therefore an inversion of the Fokker–Planck equation: Given the density f(x,t) of the option underlying X deduced from the option market, one aims at finding the local volatility consistent with f. This is an inverse problem that has been solved in general by Dupire (1994, 1997) with a non-parametric solution. Brigo and Mercurio (2002, 2003) propose a solution in parametric form via a particular local volatility consistent with a solution of the Fokker–Planck equation given by a mixture model. More information is available also in Fengler (2008), Gatheral (2008) and Musiela and Rutkowski (2008).
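The inversion idea can be illustrated with a small numerical sketch (not part of the original article, and simplified by assuming zero interest rates and dividends): Dupire's formula \(\sigma_{\mathrm{loc}}^2(K,T) = 2\,\partial_T C / (K^2\,\partial_{KK} C)\) is evaluated by finite differences on a grid of call prices; because the prices here are generated from the Black–Scholes model with a constant volatility, the recovered local volatility should come out approximately flat:

```python
# A minimal sketch of local-volatility extraction from call prices C(K, T)
# via Dupire's zero-rate formula, using synthetic Black-Scholes prices.
import numpy as np
from math import erf, sqrt, log

def bs_call(S, K, T, vol):
    # Black-Scholes call price with zero rates and dividends
    d1 = (log(S / K) + 0.5 * vol**2 * T) / (vol * sqrt(T))
    d2 = d1 - vol * sqrt(T)
    N = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return S * N(d1) - K * N(d2)

S0, true_vol = 100.0, 0.2          # illustrative spot and volatility
K = np.linspace(80, 120, 81)
T = np.array([0.9, 1.0, 1.1])
C = np.array([[bs_call(S0, k, t, true_vol) for k in K] for t in T])

dC_dT = (C[2] - C[0]) / (T[2] - T[0])             # central difference in maturity
dK = K[1] - K[0]
d2C_dK2 = np.gradient(np.gradient(C[1], dK), dK)  # second strike derivative

local_var = 2.0 * dC_dT / (K**2 * d2C_dK2)
print(np.sqrt(local_var[10:-10]))                 # approximately 0.2 at every strike
```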
Fokker–Planck equation and path integral
Every Fokker–Planck equation is equivalent to a path integral. The path integral formulation is an excellent starting point for the application of field theory methods.[11] This is used, for instance, in critical dynamics.
A derivation of the path integral is possible in much the same way as in quantum mechanics, because the Fokker–Planck equation is formally equivalent to the Schrödinger equation. Here are the steps for a Fokker–Planck equation with one variable x. Write the FP equation as an integral operator acting on the density, in which the x-derivatives act only on a \(\delta\)-function and not on \(p(x,t)\). Integrate over a short time interval \(\varepsilon\) and insert the Fourier representation
\[ \delta(x'-x) = \int_{-\infty}^{\infty} \frac{d\tilde{x}}{2\pi}\, e^{i\tilde{x}(x'-x)} \]
for the \(\delta\)-function. The resulting equation expresses \(p(x', t+\varepsilon)\) as a functional of \(p(x,t)\). Iterating \(N\) times with \(\varepsilon = t/N\) and performing the limit \(N \to \infty\) gives a path integral over the variables \(x\) and \(\tilde{x}\), with a corresponding Lagrangian.
The variables \(\tilde{x}\) conjugate to \(x\) are called "response variables".[12]
Although formally equivalent, different problems may be solved more easily in the Fokker–Planck equation or the path integral formulation. The equilibrium distribution for instance may be obtained more directly from the Fokker–Planck equation.
See also
Notes and references
1. ^ Leo P. Kadanoff (2000). Statistical Physics: statics, dynamics and renormalization. World Scientific. ISBN 981-02-3764-2.
2. ^ Fokker, A. D. (1914). "Die mittlere Energie rotierender elektrischer Dipole im Strahlungsfeld". Ann. Phys. 348 (4. Folge 43): 810–820. doi:10.1002/andp.19143480507.
3. ^ Planck, M. (1917). "Über einen Satz der statistischen Dynamik und seine Erweiterung in der Quantentheorie". Sitzungsber. Preuss. Akad. Wiss. 24.
4. ^ Kolmogorov, Andrei (1931). "Über die analytischen Methoden in der Wahrscheinlichkeitstheorie" [On Analytical Methods in the Theory of Probability]. Mathematische Annalen (in German). 104 (1): 415–458 [pp. 448–451]. doi:10.1007/BF01457949.
5. ^ N. N. Bogolyubov Jr. and D. P. Sankovich (1994). "N. N. Bogolyubov and statistical mechanics". Russian Math. Surveys 49(5): 19—49. doi:10.1070/RM1994v049n05ABEH002419
6. ^ N. N. Bogoliubov and N. M. Krylov (1939). Fokker–Planck equations generated in perturbation theory by a method based on the spectral properties of a perturbed Hamiltonian. Zapiski Kafedry Fiziki Akademii Nauk Ukrainian SSR 4: 81–157 (in Ukrainian).
7. ^ Dhont, J. K. G. (1996). An Introduction to Dynamics of Colloids. Elsevier. p. 183. ISBN 0-08-053507-0.
8. ^ a b Öttinger, Hans Christian (1996). Stochastic Processes in Polymeric Fluids. Berlin-Heidelberg: Springer-Verlag. p. 75. ISBN 978-3-540-58353-0.
9. ^ Kamenshchikov, S. (2014). "Clustering and Uncertainty in Perfect Chaos Systems". Journal of Chaos. doi:10.1155/2014/292096.
10. ^ Rosenbluth, M. N. (1957). "Fokker-Planck Equation for an Inverse-Square Force". Physical Review. 107 (1): 1–6. Bibcode:1957PhRv..107....1R. doi:10.1103/physrev.107.1.
11. ^ Zinn-Justin, Jean (1996). Quantum field theory and critical phenomena. Oxford: Clarendon Press. ISBN 0-19-851882-X.
12. ^ Janssen, H. K. (1976). "On a Lagrangean for Classical Field Dynamics and Renormalization Group Calculation of Dynamical Critical Properties". Z. Phys. B23 (4): 377–380. Bibcode:1976ZPhyB..23..377J. doi:10.1007/BF01316547.
Further reading
• Bruno Dupire (1994) Pricing with a Smile. Risk Magazine, January, 18–20.
• Bruno Dupire (1997) Pricing and Hedging with Smiles. Mathematics of Derivative Securities. Edited by M.A.H. Dempster and S.R. Pliska, Cambridge University Press, Cambridge, 103–111. ISBN 0-521-58424-8.
• Brigo, D.; Mercurio, Fabio (2002). "Lognormal-Mixture Dynamics and Calibration to Market Volatility Smiles". International Journal of Theoretical and Applied Finance. 5 (4): 427–446. doi:10.1142/S0219024902001511.
• Brigo, D.; Mercurio, F.; Sartorelli, G. (2003). "Alternative asset-price dynamics and volatility smile". Quantitative Finance. 3: 173. doi:10.1088/1469-7688/3/3/303.
• Fengler, M. R. (2008). Semiparametric Modeling of Implied Volatility, 2005, Springer Verlag, ISBN 978-3-540-26234-3
• Crispin Gardiner (2009), "Stochastic Methods", 4th edition, Springer, ISBN 978-3-540-70712-7.
• Jim Gatheral (2008). The Volatility Surface. Wiley and Sons, ISBN 978-0-471-79251-2.
• Marek Musiela, Marek Rutkowski. Martingale Methods in Financial Modelling, 2008, 2nd Edition, Springer-Verlag, ISBN 978-3-540-20966-9.
• Hannes Risken, "The Fokker–Planck Equation: Methods of Solutions and Applications", 2nd edition, Springer Series in Synergetics, Springer, ISBN 3-540-61530-X.
• Giorgio Orfino, "Simulazione dell'equazione di Fokker-Planck in Ottica Quantistica" ("Simulation of the Fokker–Planck equation in quantum optics"), Università degli Studi di Pavia, A.a. 1994/95.
Wave–particle duality
Wave–particle duality is the concept that every elementary particle or quantic entity may be partly described in terms not only of particles, but also of waves. It expresses the inability of the classical concepts "particle" or "wave" to fully describe the behavior of quantum-scale objects. As Albert Einstein wrote: "It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality; separately neither of them fully explains the phenomena of light, but together they do".[1]
Through the work of Max Planck, Einstein, Louis de Broglie, Arthur Compton, Niels Bohr and many others, current scientific theory holds that all particles also have a wave nature (and vice versa).[2] This phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and even molecules. For macroscopic particles, because of their extremely short wavelengths, wave properties usually cannot be detected.[3]
Although the wave-particle duality has worked well in physics, its meaning or interpretation has not been satisfactorily resolved; see Interpretations of quantum mechanics.
Niels Bohr regarded the "duality paradox" as a fundamental or metaphysical fact of nature. A given kind of quantum object will exhibit sometimes wave, sometimes particle, character, in respectively different physical settings. He saw such duality as one aspect of the concept of complementarity.[4] Bohr regarded renunciation of the cause-effect relation, or complementarity, of the space-time picture, as essential to the quantum mechanical account.[5]
Werner Heisenberg considered the question further. He saw the duality as present for all quantic entities, but not quite in the usual quantum mechanical account considered by Bohr. He saw it in what is called second quantization, which generates an entirely new concept of fields which exist in ordinary space-time, causality still being visualizable. Classical field values (e.g. the electric and magnetic field strengths of Maxwell) are replaced by an entirely new kind of field value, as considered in quantum field theory. Turning the reasoning around, ordinary quantum mechanics can be deduced as a specialized consequence of quantum field theory.[6][7]
Brief history of wave and particle viewpoints
Democritus—the original atomist—argued that all things in the universe, including light, are composed of indivisible sub-components (light being some form of solar atom).[8] At the beginning of the 11th Century, the Arabic scientist Alhazen wrote the first comprehensive treatise on optics; describing refraction, reflection, and the operation of a pinhole lens via rays of light traveling from the point of emission to the eye. He asserted that these rays were composed of particles of light. In 1630, René Descartes popularized and accredited the opposing wave description in his treatise on light, showing that the behavior of light could be re-created by modeling wave-like disturbances in a universal medium ("plenum"). Beginning in 1670 and progressing over three decades, Isaac Newton developed and championed his corpuscular hypothesis, arguing that the perfectly straight lines of reflection demonstrated light's particle nature; only particles could travel in such straight lines. He explained refraction by positing that particles of light accelerated laterally upon entering a denser medium. Around the same time, Newton's contemporaries Robert Hooke and Christiaan Huygens—and later Augustin-Jean Fresnel—mathematically refined the wave viewpoint, showing that if light traveled at different speeds in different media (such as water and air), refraction could be easily explained as the medium-dependent propagation of light waves. The resulting Huygens–Fresnel principle was extremely successful at reproducing light's behavior and was subsequently supported by Thomas Young's 1803 discovery of double-slit interference.[9][10] The wave view did not immediately displace the ray and particle view, but began to dominate scientific thinking about light in the mid 19th century, since it could explain polarization phenomena that the alternatives could not.[11]
Thomas Young's sketch of two-slit diffraction of waves, 1803
James Clerk Maxwell discovered that he could apply his equations for electromagnetism, which had been previously discovered, along with a slight modification to describe self-propagating waves of oscillating electric and magnetic fields. When the propagation speed of these electromagnetic waves was calculated, the speed of light fell out. It quickly became apparent that visible light, ultraviolet light, and infrared light (phenomena thought previously to be unrelated) were all electromagnetic waves of differing frequency. The wave theory had prevailed—or at least it seemed to.
While the 19th century had seen the success of the wave theory at describing light, it had also witnessed the rise of the atomic theory at describing matter. Antoine Lavoisier deduced the law of conservation of mass and categorized many new chemical elements and compounds; and Joseph Louis Proust advanced chemistry towards the atom by showing that elements combined in definite proportions. This led John Dalton to propose that elements were made of indivisible subcomponents; Amedeo Avogadro discovered diatomic gases and completed the basic atomic theory, allowing the correct molecular formulae of most known compounds—as well as the correct weights of atoms—to be deduced and categorized in a consistent manner. Dmitri Mendeleev saw an order in recurring chemical properties, and created a table presenting the elements in unprecedented order and symmetry.
Animation captions: the wave-particle duality in a double-slit experiment and the effect of an observer; particle impacts make visible the interference pattern of waves; a quantum particle is represented by a wave packet; interference of a quantum particle with itself.
Turn of the 20th century and the paradigm shift
Particles of electricity
At the close of the 19th century, the reductionism of atomic theory began to advance into the atom itself; determining, through physics, the nature of the atom and the operation of chemical reactions. Electricity, first thought to be a fluid, was now understood to consist of particles called electrons. This was first demonstrated by J. J. Thomson in 1897 when, using a cathode ray tube, he found that an electrical charge would travel across a vacuum (which would possess infinite resistance in classical theory). Since the vacuum offered no medium for an electric fluid to travel, this discovery could only be explained via a particle carrying a negative charge and moving through the vacuum. This electron flew in the face of classical electrodynamics, which had successfully treated electricity as a fluid for many years (leading to the invention of batteries, electric motors, dynamos, and arc lamps). More importantly, the intimate relation between electric charge and electromagnetism had been well documented following the discoveries of Michael Faraday and James Clerk Maxwell. Since electromagnetism was known to be a wave generated by a changing electric or magnetic field (a continuous, wave-like entity itself) an atomic/particle description of electricity and charge was a non sequitur. Furthermore, classical electrodynamics was not the only classical theory rendered incomplete.
Radiation quantization
Main article: Planck's law
In 1901, Max Planck published an analysis that succeeded in reproducing the observed spectrum of light emitted by a glowing object. To accomplish this, Planck had to make an ad hoc mathematical assumption of quantized energy of the oscillators (atoms of the black body) that emit radiation. It was Einstein who later proposed that it is the electromagnetic radiation itself that is quantized, and not the energy of radiating atoms.
Black-body radiation, the emission of electromagnetic energy due to an object's heat, could not be explained from classical arguments alone. The equipartition theorem of classical mechanics, the basis of all classical thermodynamic theories, stated that an object's energy is partitioned equally among the object's vibrational modes. But applying the same reasoning to the electromagnetic emission of such a thermal object was not so successful. It had been long known that thermal objects emit light. Since light was known to be waves of electromagnetism, physicists hoped to describe this emission via classical laws. This became known as the black body problem. Since the equipartition theorem worked so well in describing the vibrational modes of the thermal object itself, it was natural to assume that it would perform equally well in describing the radiative emission of such objects. But a problem quickly arose: if each mode received an equal partition of energy, the short wavelength modes would consume all the energy. This became clear when plotting the Rayleigh–Jeans law which, while correctly predicting the intensity of long wavelength emissions, predicted infinite total energy as the intensity diverges to infinity for short wavelengths. This became known as the ultraviolet catastrophe.
In 1900, Max Planck hypothesized that the frequency of light emitted by the black body depended on the frequency of the oscillator that emitted it, and the energy of these oscillators increased linearly with frequency (according to his constant h, where E = hν). This was not an unsound proposal considering that macroscopic oscillators operate similarly: when studying five simple harmonic oscillators of equal amplitude but different frequency, the oscillator with the highest frequency possesses the highest energy (though this relationship is not linear like Planck's). By demanding that high-frequency light must be emitted by an oscillator of equal frequency, and further requiring that this oscillator occupy higher energy than one of a lesser frequency, Planck avoided any catastrophe; giving an equal partition to high-frequency oscillators produced successively fewer oscillators and less emitted light. And, as in the Maxwell–Boltzmann distribution, the high-frequency, high-energy oscillators were exponentially rare, because thermal agitation could only rarely supply the large quantum of energy needed to excite them.
The most revolutionary aspect of Planck's treatment of the black body is that it inherently relies on an integer number of oscillators in thermal equilibrium with the electromagnetic field. These oscillators give their entire energy to the electromagnetic field, creating a quantum of light, as often as they are excited by the electromagnetic field, absorbing a quantum of light and beginning to oscillate at the corresponding frequency. Planck had intentionally created an atomic theory of the black body, but had unintentionally generated an atomic theory of light, where the black body never generates quanta of light at a given frequency with an energy less than \(h\nu\). However, once he realized that he had quantized the electromagnetic field, he denounced particles of light as a limitation of his approximation, not a property of reality.
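The ultraviolet catastrophe can be made concrete with a short numerical comparison (not part of the original article; the temperature and frequency grid are illustrative choices) of the classical Rayleigh–Jeans expression with Planck's law:

```python
# A minimal sketch contrasting the Rayleigh-Jeans and Planck spectral radiance
# formulas at T = 5000 K: the classical expression grows without bound at high
# frequency, while Planck's expression is suppressed.
import numpy as np

h = 6.626e-34      # Planck constant, J s
k = 1.381e-23      # Boltzmann constant, J/K
c = 2.998e8        # speed of light, m/s
T = 5000.0         # illustrative temperature, K

nu = np.logspace(12, 16, 5)                      # frequencies from 1 THz to 10 PHz
rayleigh_jeans = 2 * nu**2 * k * T / c**2
planck = (2 * h * nu**3 / c**2) / (np.exp(h * nu / (k * T)) - 1)

for f, rj, pl in zip(nu, rayleigh_jeans, planck):
    print(f"{f:.1e} Hz  RJ={rj:.3e}  Planck={pl:.3e}")
```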
Photoelectric effect illuminated
While Planck had solved the ultraviolet catastrophe by using atoms and a quantized electromagnetic field, most contemporary physicists agreed that Planck's "light quanta" represented only flaws in his model. A more-complete derivation of black body radiation would yield a fully continuous and 'wave-like' electromagnetic field with no quantization. However, in 1905 Albert Einstein took Planck's black body model to produce his solution to another outstanding problem of the day: the photoelectric effect, wherein electrons are emitted from atoms when they absorb energy from light. Since their discovery eight years previously, electrons had been studied in physics laboratories worldwide.
In 1902 Philipp Lenard discovered that the energy of these ejected electrons did not depend on the intensity of the incoming light, but instead on its frequency. So if one shines a little low-frequency light upon a metal, a few low energy electrons are ejected. If one now shines a very intense beam of low-frequency light upon the same metal, a whole slew of electrons are ejected; however, they possess the same low energy, there are merely more of them. The more light there is, the more electrons are ejected. To obtain high-energy electrons, one must instead illuminate the metal with high-frequency light. Like blackbody radiation, this was at odds with a theory invoking continuous transfer of energy between radiation and matter. However, it can still be explained using a fully classical description of light, as long as matter is quantum mechanical in nature.[12]
If one used Planck's energy quanta, and demanded that electromagnetic radiation at a given frequency could only transfer energy to matter in integer multiples of an energy quantum \(h\nu\), then the photoelectric effect could be explained very simply. Low-frequency light only ejects low-energy electrons because each electron is excited by the absorption of a single photon. Increasing the intensity of the low-frequency light (increasing the number of photons) only increases the number of excited electrons, not their energy, because the energy of each photon remains low. Only by increasing the frequency of the light, and thus increasing the energy of the photons, can one eject electrons with higher energy. Thus, using Planck's constant h to determine the energy of the photons based upon their frequency, the energy of ejected electrons should also increase linearly with frequency; the gradient of the line being Planck's constant. These results were not confirmed until 1915, when Robert Andrews Millikan, who had previously determined the charge of the electron, produced experimental results in perfect accord with Einstein's predictions. While the energy of ejected electrons reflected Planck's constant, the existence of photons was not explicitly proven until the discovery of the photon antibunching effect, of which a modern experiment can be performed in undergraduate-level labs.[13] This phenomenon could only be explained via photons, and not through any semi-classical theory (which could alternatively explain the photoelectric effect). When Einstein received his Nobel Prize in 1921, it was not for his more difficult and mathematically laborious special and general relativity, but for the simple, yet totally revolutionary, suggestion of quantized light. Einstein's "light quanta" would not be called photons until 1925, but even in 1905 they represented the quintessential example of wave-particle duality. Electromagnetic radiation propagates following linear wave equations, but can only be emitted or absorbed as discrete elements, thus acting as a wave and a particle simultaneously.
Einstein's explanation of the photoelectric effect
Main article: Photoelectric effect
In 1905, Albert Einstein provided an explanation of the photoelectric effect, a hitherto troubling experiment that the wave theory of light seemed incapable of explaining. He did so by postulating the existence of photons, quanta of light energy with particulate qualities.
In the photoelectric effect, it was observed that shining a light on certain metals would lead to an electric current in a circuit. Presumably, the light was knocking electrons out of the metal, causing current to flow. However, using the case of potassium as an example, it was also observed that while a dim blue light was enough to cause a current, even the strongest, brightest red light available with the technology of the time caused no current at all. According to the classical theory of light and matter, the strength or amplitude of a light wave was in proportion to its brightness: a bright light should have been easily strong enough to create a large current. Yet, oddly, this was not so.
Einstein explained this conundrum by postulating that the electrons can receive energy from an electromagnetic field only in discrete portions (quanta that were called photons): an amount of energy E that was related to the frequency f of the light by
\[ E = hf, \]
where h is Planck's constant (6.626 × 10⁻³⁴ J·s). Only photons of a high enough frequency (above a certain threshold value) could knock an electron free. For example, photons of blue light had sufficient energy to free an electron from the metal, but photons of red light did not. One photon of light above the threshold frequency could release only one electron; the higher the frequency of a photon, the higher the kinetic energy of the emitted electron, but no amount of light (using technology available at the time) below the threshold frequency could release an electron. To "violate" this law would require extremely high-intensity lasers which had not yet been invented. Intensity-dependent phenomena have now been studied in detail with such lasers.[14]
Einstein was awarded the Nobel Prize in Physics in 1921 for his discovery of the law of the photoelectric effect.
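The threshold behaviour described above can be illustrated with a small numerical sketch (not part of the original article; the work function value is an assumption, chosen to be roughly that of potassium):

```python
# A minimal sketch of the photoelectric relation E_k = h*f - phi: below the
# threshold frequency no electrons are ejected, regardless of intensity.
h = 6.626e-34          # Planck constant, J s
eV = 1.602e-19         # joules per electronvolt
phi = 2.3 * eV         # assumed work function, roughly that of potassium

def photoelectron_energy_eV(frequency_hz):
    """Kinetic energy of an ejected electron in eV, or None below threshold."""
    excess = h * frequency_hz - phi
    return excess / eV if excess > 0 else None

print(photoelectron_energy_eV(4.5e14))   # red light (~667 nm): None
print(photoelectron_energy_eV(6.5e14))   # blue light (~460 nm): ~0.39 eV
```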
De Broglie's wavelength
Main article: Matter wave
In 1924, Louis-Victor de Broglie formulated the de Broglie hypothesis, claiming that all matter,[15][16] not just light, has a wave-like nature; he related wavelength (denoted as λ) and momentum (denoted as p):
\[ \lambda = \frac{h}{p}. \]
This is a generalization of Einstein's equation above, since the momentum of a photon is given by \(p = E/c\) and the wavelength (in a vacuum) by \(\lambda = c/f\), where c is the speed of light in vacuum.
De Broglie's formula was confirmed three years later for electrons (which differ from photons in having a rest mass) with the observation of electron diffraction in two independent experiments. At the University of Aberdeen, George Paget Thomson passed a beam of electrons through a thin metal film and observed the predicted interference patterns. At Bell Labs, Clinton Joseph Davisson and Lester Halbert Germer guided their beam through a crystalline grid.
De Broglie was awarded the Nobel Prize for Physics in 1929 for his hypothesis. Thomson and Davisson shared the Nobel Prize for Physics in 1937 for their experimental work.
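The practical consequences of the de Broglie relation can be illustrated with a short sketch (not part of the original article; the chosen masses and speeds are illustrative):

```python
# A minimal sketch of lambda = h / p, comparing an electron at about 1% of
# the speed of light with a thrown baseball, to illustrate why wave
# properties of macroscopic objects are unobservable in practice.
h = 6.626e-34        # Planck constant, J s

def de_broglie_wavelength(mass_kg, speed_m_s):
    return h / (mass_kg * speed_m_s)

print(de_broglie_wavelength(9.109e-31, 3.0e6))   # electron: ~2.4e-10 m (atomic scale)
print(de_broglie_wavelength(0.145, 40.0))        # baseball: ~1.1e-34 m (undetectable)
```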
Heisenberg's uncertainty principle
In his work on formulating quantum mechanics, Werner Heisenberg postulated his uncertainty principle, which states:
\[ \Delta x \, \Delta p \ge \frac{\hbar}{2}, \]
where \(\Delta\) here indicates the standard deviation, a measure of spread or uncertainty;
x and p are a particle's position and linear momentum respectively;
and \(\hbar\) is the reduced Planck constant (Planck's constant divided by \(2\pi\)).
Heisenberg originally explained this as a consequence of the process of measuring: Measuring position accurately would disturb momentum and vice versa, offering an example (the "gamma-ray microscope") that depended crucially on the de Broglie hypothesis. It is now thought, however, that this only partly explains the phenomenon, but that the uncertainty also exists in the particle itself, even before the measurement is made.
In fact, the modern explanation of the uncertainty principle, extending the Copenhagen interpretation first put forward by Bohr and Heisenberg, depends even more centrally on the wave nature of a particle: Just as it is nonsensical to discuss the precise location of a wave on a string, particles do not have perfectly precise positions; likewise, just as it is nonsensical to discuss the wavelength of a "pulse" wave traveling down a string, particles do not have perfectly precise momenta (which corresponds to the inverse of wavelength). Moreover, when position is relatively well defined, the wave is pulse-like and has a very ill-defined wavelength (and thus momentum). And conversely, when momentum (and thus wavelength) is relatively well defined, the wave looks long and sinusoidal, and therefore it has a very ill-defined position.
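A small numerical illustration (not part of the original article; the confinement length is an arbitrary atomic-scale choice) of the lower bound \(\Delta p \ge \hbar/(2\,\Delta x)\):

```python
# A minimal sketch: the minimum momentum and velocity spread implied by the
# uncertainty principle for an electron confined to about 0.1 nm.
hbar = 1.055e-34        # reduced Planck constant, J s
m_e = 9.109e-31         # electron mass, kg

delta_x = 1e-10                      # metres, illustrative confinement length
delta_p = hbar / (2 * delta_x)       # minimum momentum uncertainty
print(delta_p)                       # ~5.3e-25 kg m/s
print(delta_p / m_e)                 # corresponding velocity spread, ~5.8e5 m/s
```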
de Broglie–Bohm theory
Couder experiments,[17] "materializing" the pilot wave model.
De Broglie himself had proposed a pilot wave construct to explain the observed wave-particle duality. In this view, each particle has a well-defined position and momentum, but is guided by a wave function derived from Schrödinger's equation. The pilot wave theory was initially rejected because it generated non-local effects when applied to systems involving more than one particle. Non-locality, however, soon became established as an integral feature of quantum theory (see EPR paradox), and David Bohm extended de Broglie's model to explicitly include it.
In the resulting representation, also called the de Broglie–Bohm theory or Bohmian mechanics,[18] the wave-particle duality vanishes: the apparent wave behaviour arises because the particle's motion is subject to a guiding equation or quantum potential. "This idea seems to me so natural and simple, to resolve the wave-particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored",[19] wrote J. S. Bell.
The best illustration of the pilot-wave model was given by Couder's 2010 "walking droplets" experiments,[20] demonstrating the pilot-wave behaviour in a macroscopic mechanical analog.[17]
Wave behavior of large objects
Since the demonstrations of wave-like properties in photons and electrons, similar experiments have been conducted with neutrons and protons. Among the most famous experiments are those of Estermann and Otto Stern in 1929.[21] Authors of similar recent experiments with atoms and molecules, described below, claim that these larger particles also act like waves.
A dramatic series of experiments emphasizing the action of gravity in relation to wave–particle duality was conducted in the 1970s using the neutron interferometer.[22] Neutrons, one of the components of the atomic nucleus, provide much of the mass of a nucleus and thus of ordinary matter. In the neutron interferometer, they act as quantum-mechanical waves directly subject to the force of gravity. While the results were not surprising since gravity was known to act on everything, including light (see tests of general relativity and the Pound–Rebka falling photon experiment), the self-interference of the quantum mechanical wave of a massive fermion in a gravitational field had never been experimentally confirmed before.
In 1999, the diffraction of C60 fullerenes by researchers from the University of Vienna was reported.[23] Fullerenes are comparatively large and massive objects, having an atomic mass of about 720 u. The de Broglie wavelength of the incident beam was about 2.5 pm, whereas the diameter of the molecule is about 1 nm, about 400 times larger. In 2012, these far-field diffraction experiments were extended to phthalocyanine molecules and their heavier derivatives, which are composed of 58 and 114 atoms respectively. In these experiments the build-up of such interference patterns was recorded in real time and with single-molecule sensitivity.[24][25]
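As a quick consistency check on the numbers quoted above (not part of the original article; the beam velocity of roughly 220 m/s is an assumption):

```python
# A minimal sketch: de Broglie wavelength of a C60 fullerene of about 720 u
# moving at roughly 220 m/s, which comes out at a few picometres.
h = 6.626e-34          # Planck constant, J s
u = 1.661e-27          # atomic mass unit, kg

m_c60 = 720 * u
v = 220.0              # assumed approximate beam velocity, m/s
print(h / (m_c60 * v)) # ~2.5e-12 m, matching the value quoted above
```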
In 2003, the Vienna group also demonstrated the wave nature of tetraphenylporphyrin[26]—a flat biodye with an extension of about 2 nm and a mass of 614 u. For this demonstration they employed a near-field Talbot Lau interferometer.[27][28] In the same interferometer they also found interference fringes for C60F48, a fluorinated buckyball with a mass of about 1600 u, composed of 108 atoms.[26] Large molecules are already so complex that they give experimental access to some aspects of the quantum-classical interface, i.e., to certain decoherence mechanisms.[29][30] In 2011, the interference of molecules as heavy as 6910 u could be demonstrated in a Kapitza–Dirac–Talbot–Lau interferometer.[31] In 2013, the interference of molecules beyond 10,000 u has been demonstrated.[32]
Whether objects heavier than the Planck mass (about the weight of a large bacterium) have a de Broglie wavelength is theoretically unclear and experimentally unreachable; above the Planck mass a particle's Compton wavelength would be smaller than the Planck length and its own Schwarzschild radius, a scale at which current theories of physics may break down or need to be replaced by more general ones.[33]
Recently Couder, Fort, et al. showed[34] that we can use macroscopic oil droplets on a vibrating surface as a model of wave–particle duality—localized droplet creates periodical waves around and interaction with them leads to quantum-like phenomena: interference in double-slit experiment,[35] unpredictable tunneling[36] (depending in complicated way on practically hidden state of field), orbit quantization[37] (that particle has to 'find a resonance' with field perturbations it creates—after one orbit, its internal phase has to return to the initial state) and Zeeman effect.[38]
Treatment in modern quantum mechanics
Wave–particle duality is deeply embedded into the foundations of quantum mechanics. In the formalism of the theory, all the information about a particle is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. This function evolves according to a differential equation (generically called the Schrödinger equation). For particles with mass this equation has solutions that follow the form of the wave equation. Propagation of such waves leads to wave-like phenomena such as interference and diffraction. Particles without mass, like photons, are not described by the Schrödinger equation; their wave-like behaviour follows from a different wave equation.
The particle-like behavior is most evident due to phenomena associated with measurement in quantum mechanics. Upon measuring the location of the particle, the particle will be forced into a more localized state as given by the uncertainty principle. When viewed through this formalism, the measurement of the wave function will randomly "collapse", or rather "decohere", to a sharply peaked function at some location. For particles with mass the likelihood of detecting the particle at any particular location is equal to the squared amplitude of the wave function there. The measurement will return a well-defined position (subject to uncertainty), a property traditionally associated with particles. It is important to note that a measurement is only a particular type of interaction where some data is recorded and the measured quantity is forced into a particular eigenstate. The act of measurement is therefore not fundamentally different from any other interaction.
Following the development of quantum field theory, the ambiguity disappeared. The field permits solutions that follow the wave equation, which are referred to as the wave functions. The term particle is used to label the irreducible representations of the Lorentz group that are permitted by the field. An interaction as in a Feynman diagram is accepted as a calculationally convenient approximation where the outgoing legs are known to be simplifications of the propagation and the internal lines represent terms at some order in an expansion of the field interaction. Since the field is non-local and quantized, the phenomena which previously were thought of as paradoxes are explained. Within the limits of the wave-particle duality, quantum field theory gives the same results.
There are two ways to visualize the wave-particle behaviour: by the "standard model", described below; and by the de Broglie–Bohm model, where no duality is perceived.
Below is an illustration of wave–particle duality as it relates to De Broglie's hypothesis and Heisenberg's uncertainty principle (above), in terms of the position and momentum space wavefunctions for one spinless particle with mass in one dimension. These wavefunctions are Fourier transforms of each other.
The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized so the possible momentum components the particle could have are more widespread.
Conversely the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread.
Position x and momentum p wavefunctions corresponding to quantum particles. The colour opacity (%) of the particles corresponds to the probability density of finding the particle with position x or momentum component p.
Top: If the wavelength λ is unknown, so are the momentum p, wave-vector k and energy E (de Broglie relations). As the particle becomes more localized in position space, Δx is smaller and Δpx correspondingly larger.
Bottom: If λ is known, so are p, k, and E. As the particle becomes more localized in momentum space, Δp is smaller and Δx correspondingly larger.
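The reciprocal spreading of the position- and momentum-space wavefunctions can be illustrated numerically (not part of the original article; units with \(\hbar = 1\), and the grid and packet widths are illustrative choices):

```python
# A minimal sketch: a position-space Gaussian wave packet and its momentum-space
# counterpart obtained by a discrete Fourier transform. A narrower position
# spread gives a broader momentum spread; the product stays near 1/2.
import numpy as np

x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]

for width in (0.5, 2.0):                         # position-space standard deviations
    psi_x = np.exp(-x**2 / (4 * width**2))       # |psi_x|**2 has standard deviation = width
    psi_x /= np.sqrt(np.sum(np.abs(psi_x)**2) * dx)

    psi_p = np.fft.fftshift(np.fft.fft(psi_x)) * dx / np.sqrt(2 * np.pi)
    p = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi
    dp = p[1] - p[0]

    prob_p = np.abs(psi_p)**2
    prob_p /= np.sum(prob_p) * dp
    delta_p = np.sqrt(np.sum(p**2 * prob_p) * dp)
    print(width, delta_p, width * delta_p)       # product stays near 1/2
```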
Alternative views
Wave–particle duality is an ongoing conundrum in modern physics. Most physicists accept wave-particle duality as the best explanation for a broad range of observed phenomena; however, it is not without controversy. Alternative views are also presented here. These views are not generally accepted by mainstream physics, but serve as a basis for valuable discussion within the community.
Both-particle-and-wave view
The pilot wave model, originally developed by Louis de Broglie and further developed by David Bohm into the hidden variable theory, proposes that there is no duality, but rather a system exhibits both particle properties and wave properties simultaneously, and particles are guided, in a deterministic fashion, by the pilot wave (or its "quantum potential") which will direct them to areas of constructive interference in preference to areas of destructive interference. This idea is held by a significant minority within the physics community.[39]
At least one physicist considers the "wave duality" not to be an incomprehensible mystery; L. E. Ballentine takes this view in Quantum Mechanics: A Modern Development, p. 4.
It has been claimed[citation needed] that the Afshar experiment[40] (2007) shows that it is possible to simultaneously observe both wave and particle properties of photons. This claim is, however, rejected by other scientists.[citation needed]
Wave-only view
At least one scientist proposes that the duality can be replaced by a "wave-only" view. In his book Collective Electrodynamics: Quantum Foundations of Electromagnetism (2000), Carver Mead purports to analyze the behavior of electrons and photons purely in terms of electron wave functions, and attributes the apparent particle-like behavior to quantization effects and eigenstates. According to reviewer David Haddon:[41]
Mead has cut the Gordian knot of quantum complementarity. He claims that atoms, with their neutrons, protons, and electrons, are not particles at all but pure waves of matter. Mead cites as the gross evidence of the exclusively wave nature of both light and matter the discovery between 1933 and 1996 of ten examples of pure wave phenomena, including the ubiquitous laser of CD players, the self-propagating electrical currents of superconductors, and the Bose–Einstein condensate of atoms.
In his search for a Unified Field Theory, Albert Einstein did not accept wave-particle duality.[42]
The many-worlds interpretation (MWI) is sometimes presented as a waves-only theory, including by its originator, Hugh Everett who referred to MWI as "the wave interpretation".[43]
The Three Wave Hypothesis of R. Horodecki relates the particle to the wave.[44][45] The hypothesis implies that a massive particle is an intrinsically spatially, as well as temporally, extended wave phenomenon governed by a nonlinear law.
Particle-only view
Still in the days of the old quantum theory, a pre-quantum-mechanical version of wave–particle duality was pioneered by William Duane,[46] and developed by others including Alfred Landé.[47] Duane explained diffraction of x-rays by a crystal in terms solely of their particle aspect. The deflection of the trajectory of each diffracted photon was explained as due to quantized momentum transfer from the spatially regular structure of the diffracting crystal.[48]
Neither-wave-nor-particle view
It has been argued that there are never exact particles or waves, but only some compromise or intermediate between them. For this reason, in 1928 Arthur Eddington[49] coined the name "wavicle" to describe the objects, although the term is not regularly used today. One consideration is that zero-dimensional mathematical points cannot be observed. Another is that the formal representation of such points, the Dirac delta function, is unphysical, because it cannot be normalized. Parallel arguments apply to pure wave states. Roger Penrose states:[50]
"Such 'position states' are idealized wavefunctions in the opposite sense from the momentum states. Whereas the momentum states are infinitely spread out, the position states are infinitely concentrated. Neither is normalizable [...]."
Relational approach to wave–particle duality
Relational quantum mechanics has been developed as an approach that regards the detection event as establishing a relationship between the quantized field and the detector. The inherent ambiguity associated with applying Heisenberg's uncertainty principle, and thus wave–particle duality, is thereby avoided.[51]
Applications
Although it is difficult to draw a line separating wave–particle duality from the rest of quantum mechanics, it is nevertheless possible to list some applications of this basic idea.
• Wave–particle duality is exploited in electron microscopy, where the small wavelengths associated with the electron can be used to view objects much smaller than what is visible using visible light.
• Similarly, neutron diffraction uses neutrons with a wavelength of about 0.1 nm, the typical spacing of atoms in a solid, to determine the structure of solids.
• Photographs are now able to show this dual nature, which may lead to new ways of examining and recording this behaviour.[52]
See also
Notes and references
2. ^ Walter Greiner (2001). Quantum Mechanics: An Introduction. Springer. ISBN 3-540-67458-6.
3. ^ R. Eisberg & R. Resnick (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (2nd ed.). John Wiley & Sons. pp. 59–60. ISBN 047187373X. For both large and small wavelengths, both matter and radiation have both particle and wave aspects.... But the wave aspects of their motion become more difficult to observe as their wavelengths become shorter.... For ordinary macroscopic particles the mass is so large that the momentum is always sufficiently large to make the de Broglie wavelength small enough to be beyond the range of experimental detection, and classical mechanics reigns supreme.
4. ^ Kumar, Manjit (2011). Quantum: Einstein, Bohr, and the Great Debate about the Nature of Reality (Reprint ed.). W. W. Norton & Company. pp. 242, 375–376. ISBN 978-0393339888.
7. ^ Preparata, G. (2002). An Introduction to a Realistic Quantum Physics, World Scientific, River Edge NJ, ISBN 978-981-238-176-7.
8. ^ Nathaniel Page Stites, M.A./M.S. "Light I: Particle or Wave?," Visionlearning Vol. PHY-1 (3), 2005. http://www.visionlearning.com/library/module_viewer.php?mid=132
10. ^ Thomas Young: The Double Slit Experiment
11. ^ Buchwald, Jed (1989). The Rise of the Wave Theory of Light: Optical Theory and Experiment in the Early Nineteenth Century. Chicago: University of Chicago Press. ISBN 0-226-07886-8. OCLC 18069573.
12. ^ Lamb, Willis E.; Scully, Marlan O. (1968). "The photoelectric effect without photons" (PDF).
13. ^ "Observing the quantum behavior of light in an undergraduate laboratory". American Journal of Physics. 72: 1210. Bibcode:2004AmJPh..72.1210T. doi:10.1119/1.1737397.
14. ^ Zhang, Q (1996). "Intensity dependence of the photoelectric effect induced by a circularly polarized laser beam". Physics Letters A. 216 (1-5): 125–128. Bibcode:1996PhLA..216..125Z. doi:10.1016/0375-9601(96)00259-9.
15. ^ Donald H Menzel, "Fundamental formulas of Physics", volume 1, page 153; Gives the de Broglie wavelengths for composite particles such as protons and neutrons.
16. ^ Brian Greene, The Elegant Universe, page 104 "all matter has a wave-like character"
17. ^ a b See this Science Channel production (Season II, Episode VI "How Does The Universe Work?"), presented by Morgan Freeman, https://www.youtube.com/watch?v=W9yWv5dqSKk
18. ^ Bohmian Mechanics, Stanford Encyclopedia of Philosophy.
19. ^ Bell, J. S., "Speakable and Unspeakable in Quantum Mechanics", Cambridge: Cambridge University Press, 1987.
20. ^ Y. Couder, A. Boudaoud, S. Protière, Julien Moukhtar, E. Fort: Walking droplets: a form of wave-particle duality at macroscopic level? , doi:10.1051/epn/2010101, (PDF)
21. ^ Estermann, I.; Stern O. (1930). "Beugung von Molekularstrahlen". Zeitschrift für Physik. 61 (1-2): 95–125. Bibcode:1930ZPhy...61...95E. doi:10.1007/BF01340293.
22. ^ R. Colella, A. W. Overhauser and S. A. Werner, Observation of Gravitationally Induced Quantum Interference, Phys. Rev. Lett. 34, 1472–1474 (1975).
23. ^ Arndt, Markus; O. Nairz; J. Voss-Andreae, C. Keller, G. van der Zouw, A. Zeilinger (14 October 1999). "Wave–particle duality of C60". Nature. 401 (6754): 680–682. Bibcode:1999Natur.401..680A. doi:10.1038/44348. PMID 18494170.
24. ^ Juffmann, Thomas; et al. (25 March 2012). "Real-time single-molecule imaging of quantum interference". Nature Nanotechnology. Retrieved 27 March 2012.
25. ^ Quantumnanovienna. "Single molecules in a quantum interference movie". Retrieved 2012-04-21.
26. ^ a b Hackermüller, Lucia; Stefan Uttenthaler; Klaus Hornberger; Elisabeth Reiger; Björn Brezger; Anton Zeilinger; Markus Arndt (2003). "The wave nature of biomolecules and fluorofullerenes". Phys. Rev. Lett. 91 (9): 090408. arXiv:quant-ph/0309016. Bibcode:2003PhRvL..91i0408H. doi:10.1103/PhysRevLett.91.090408. PMID 14525169.
27. ^ Clauser, John F.; S. Li (1994). "Talbot von Lau interferometry with cold slow potassium atoms". Phys. Rev. A. 49 (4): R2213–17. Bibcode:1994PhRvA..49.2213C. doi:10.1103/PhysRevA.49.R2213. PMID 9910609.
28. ^ Brezger, Björn; Lucia Hackermüller; Stefan Uttenthaler; Julia Petschinka; Markus Arndt; Anton Zeilinger (2002). "Matter-wave interferometer for large molecules". Phys. Rev. Lett. 88 (10): 100404. arXiv:quant-ph/0202158. Bibcode:2002PhRvL..88j0404B. doi:10.1103/PhysRevLett.88.100404. PMID 11909334.
29. ^ Hornberger, Klaus; Stefan Uttenthaler; Björn Brezger; Lucia Hackermüller; Markus Arndt; Anton Zeilinger (2003). "Observation of Collisional Decoherence in Interferometry". Phys. Rev. Lett. 90 (16): 160401. arXiv:quant-ph/0303093. Bibcode:2003PhRvL..90p0401H. doi:10.1103/PhysRevLett.90.160401. PMID 12731960.
30. ^ Hackermüller, Lucia; Klaus Hornberger; Björn Brezger; Anton Zeilinger; Markus Arndt (2004). "Decoherence of matter waves by thermal emission of radiation". Nature. 427 (6976): 711–714. arXiv:quant-ph/0402146. Bibcode:2004Natur.427..711H. doi:10.1038/nature02276. PMID 14973478.
31. ^ Gerlich, Stefan; et al. (2011). "Quantum interference of large organic molecules". Nature Communications. 2 (263). Bibcode:2011NatCo...2E.263G. doi:10.1038/ncomms1263. PMC 3104521. PMID 21468015.
32. ^ Eibenberger, S.; Gerlich, S.; Arndt, M.; Mayor, M.; Tüxen, J. (2013). "Matter–wave interference of particles selected from a molecular library with masses exceeding 10 000 amu". Physical Chemistry Chemical Physics. 15 (35): 14696–14700. doi:10.1039/c3cp51500a. PMID 23900710.
33. ^ Peter Gabriel Bergmann, The Riddle of Gravitation, Courier Dover Publications, 1993 ISBN 0-486-27378-4 online
34. ^ http://www.youtube.com/watch?v=W9yWv5dqSKk - You Tube video - Yves Couder Explains Wave/Particle Duality via Silicon Droplets
35. ^ Y. Couder, E. Fort, Single-Particle Diffraction and Interference at a Macroscopic Scale, PRL 97, 154101 (2006) online
36. ^ A. Eddi, E. Fort, F. Moisy, Y. Couder, Unpredictable Tunneling of a Classical Wave–Particle Association, PRL 102, 240401 (2009)
37. ^ Fort, E.; Eddi, A.; Boudaoud, A.; Moukhtar, J.; Couder, Y. (2010). "Path-memory induced quantization of classical orbits". PNAS. 107 (41): 17515–17520. doi:10.1073/pnas.1007386107.
38. ^ http://prl.aps.org/abstract/PRL/v108/i26/e264503 - Level Splitting at Macroscopic Scale
39. ^ (Buchanan pp. 29–31)
40. ^ Afshar S.S. et al: Paradox in Wave Particle Duality. Found. Phys. 37, 295 (2007) http://arxiv.org/abs/quant-ph/0702188 arXiv:quant-ph/0702188
41. ^ David Haddon. "Recovering Rational Science". Touchstone. Retrieved 2007-09-12.
42. ^ Paul Arthur Schilpp, ed, Albert Einstein: Philosopher-Scientist, Open Court (1949), ISBN 0-87548-133-7, p 51.
43. ^ See section VI(e) of Everett's thesis: The Theory of the Universal Wave Function, in Bryce Seligman DeWitt, R. Neill Graham, eds, The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X, pp 3–140.
44. ^ Horodecki, R. (1981). "De broglie wave and its dual wave". Phys. Lett. A. 87 (3): 95–97. Bibcode:1981PhLA...87...95H. doi:10.1016/0375-9601(81)90571-5.
45. ^ Horodecki, R. (1983). "Superluminal singular dual wave". Lett. Novo Cimento. 38: 509–511.
49. ^ Eddington, Arthur Stanley (1928). The Nature of the Physical World. Cambridge, UK.: MacMillan. p. 201.
50. ^ Penrose, Roger (2007). The Road to Reality: A Complete Guide to the Laws of the Universe. Vintage. p. 521, §21.10. ISBN 978-0-679-77631-4.
51. ^ http://www.quantum-relativity.org/Quantum-Relativity.pdf. See Q. Zheng and T. Kobayashi, Quantum Optics as a Relativistic Theory of Light; Physics Essays 9 (1996) 447. Annual Report, Department of Physics, School of Science, University of Tokyo (1992) 240.
52. ^ Ecole Polytechnique Federale de Lausanne. "The first ever photograph of light as both a particle and wave". Retrieved 15 July 2016.
|
8cdbd84378777ec2 |
One possible hidden variable theory is that the hidden variable is just the positions of all the particles. In that case for the ground state the probability distribution for position doesn't change. So it is easy to make a hidden variable theory where in the ground state the electron isn't moving. Not moving, no change in hidden variable, no change in probability distribution.
Your reasoning basically goes: classically, charges orbit other charges, radiate, and spiral in rather quickly; in a hidden variable theory particles have positions; so if I assume the hidden variables behave just like classical mechanics, isn't this a problem for hidden variables and quantum mechanics? It would be a problem in thousands of ways if the hidden variable theory were simply not doing quantum mechanics. When you do hidden variables, then in order to reproduce quantum mechanics you have to do things differently than you do in ordinary classical physics. Doing otherwise would be like trying to do special relativity but refusing to do time dilation because you have time from Newtonian physics and don't want to allow a new theory to do anything differently.
One intuitive way to think about a hidden variable is to imagine that there is a state (and hidden variable) based force that pushes the particles around differently than they would if there was just classical mechanics.
Since at every moment you could do a position measurement, there needs to be a probability density $\rho$ for position. And these can sometimes change (but for a ground state they don't have to) so we should have a probability current $\vec J$ that satisfies $\frac{\partial \rho}{\partial t}=-\vec\nabla \cdot \vec J$ to get conservation of probability (just like current allows conservation of charge). So based on where the particle would be according to the hidden variable, we need additional forces to move it around so that the probability flux matches what quantum mechanics says it does. These additional forces could sometimes be totally opposite the classical forces (for instance in a ground state) and in general can do whatever they need to do to make the probability move around how it has to move. And since quantum is different than classical, it will have to move differently at least when quantum and classical disagree.
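To make the flux-matching idea concrete, here is a small illustration (not part of the answer itself, and the specific choice v = J/rho is just the simplest one, essentially the de Broglie-Bohm guidance velocity): for a freely spreading Gaussian wave packet in one dimension, a "position as the hidden variable" can be pushed around by the local probability current, and a few trajectories take only a handful of lines of Python (hbar = m = 1):

import numpy as np

def psi(x, t, sigma0=1.0):
    # exact free-particle evolution of an initial Gaussian packet (hbar = m = 1)
    a = 1.0 + 1j * t / (2.0 * sigma0**2)
    return (2 * np.pi * sigma0**2)**-0.25 / np.sqrt(a) * np.exp(-x**2 / (4.0 * sigma0**2 * a))

def velocity(x, t, eps=1e-6):
    # v = J / rho = Im( d(ln psi)/dx ), evaluated with a central difference
    dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2.0 * eps)
    return np.imag(dpsi / psi(x, t))

x = np.array([-2.0, -1.0, 0.5, 2.0])      # a few initial "hidden" positions
dt, steps = 0.01, 500
for n in range(steps):
    x = x + dt * velocity(x, n * dt)      # simple Euler step along the guiding velocity
print(x)   # the positions have spread out, tracking the spreading of |psi|^2
# For a real, stationary psi (e.g. a ground state) the same formula gives v = 0: no motion,
# which is exactly the "not moving, no change in probability distribution" situation above.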
It turns out it will have to move differently for different states. It turns out it will have to move in a nonlocal way. But since all measurements eventually become a measurement of position (position of the ink on the paper you write up your results if nothing else) having the state and the position as the hidden variable suffices. It'll just have to be weird how the position changes, weird from a classical bias.
Which is the other truck sized loophole. Since quantum is different than classical, your hidden variables were allowed to introduce new forces (state dependent forces, forces that can be large even when things are far apart, etc.) to make the results agree with quantum mechanics. So you can freely change how electromagnetism works and radiation and such too, for the same reason, change whatever you have to to make your predictions agree with quantum mechanics. It's not wrong to do what you need to do to agree with experiment.
We are free to make any theory that agrees with experiment. If you make your hidden variable theory agree with quantum mechanics, you are free to do anything you need to do to make it do that. Is it worth the effort? It depends: it could be easier to remember, easier to take a classical limit or correspondence, easier to teach, or an inspiration for modifications. It could be better or easier to implement computationally, or serve as an alternative implementation just to confirm you did it right; two implementations and calculations can be better than one. But there is no obvious answer as to whether it is worth it for a particular person. There is an obvious danger in taking too seriously a story about what happens before you look. But if you can tell the difference between what you compute that can be observed and what you compute about the internal dynamics between times you look, then you can avoid taking the story too literally. And I'm not sure it is better to pretend that things that have been done can't be done.
If you want a theory where your ground states have no particle motion and don't radiate, it's been done.
Why do atoms not collapse on themselves?
The reason is explained in the answer to this question. In a nutshell, our observations/measurements lead to the quantum mechanical framework for atoms, molecules, and elementary particles, and to a probabilistic theory, quantum mechanics. The positions and energies of the particles are not determined but given by a probability amplitude.
It is a postulate of quantum mechanics that there are no hidden variables, and QM is validated by the data.
Probability distributions are not unique to quantum mechanics and were not applied to it first. Statistical mechanics gives probability distributions too. In statistical mechanics the trajectories of the particles making up the ensemble are classical, but they form an underlying level, with the equations of statistical mechanics sitting at a meta level above it.
People working on hidden variable theories are working at a lower level than the observed and validated quantum mechanical one. The hidden variables are supposed to reproduce the probability distributions observed and explained by quantum mechanics, making quantum mechanics a meta level with respect to the hidden variables level.
An example of a hidden variable theory is the de Broglie-Bohm theory, or pilot wave theory. This reproduces the quantum mechanics of the Schrödinger equation in a much more complicated way, and is often called an interpretation of QM.
The theory results in a measurement formalism, analogous to thermodynamics for classical mechanics, that yields the standard quantum formalism generally associated with the Copenhagen interpretation. The theory's explicit non-locality resolves the "measurement problem", which is conventionally delegated to the topic of interpretations of quantum mechanics in the Copenhagen interpretation. The Born rule in de Broglie–Bohm theory is not a basic law. Rather, in this theory the link between the probability density and the wave function has the status of a hypothesis, called the quantum equilibrium hypothesis, which is additional to the basic principles governing the wave function.
There are various problems, with predictions that do not agree with the data.
Bohmian mechanics, also known as de Broglie-Bohm theory, is the most popular alternative approach to quantum mechanics. Whereas the standard interpretation of quantum mechanics is based on the complementarity principle, Bohmian mechanics assumes that both particle and wave are concrete physical objects. In 1993 Peter Holland wrote an ardent account of the plausibility of the de Broglie-Bohm theory. He proved that it fully reproduces quantum mechanics if the initial particle distribution is consistent with a solution of the Schrödinger equation. This may be among the reasons that Bohmian mechanics has not yet found global acceptance. In this article it will be shown that predicted properties of atoms and molecules are in conflict with experimental findings. Moreover it will be demonstrated that repeatedly published ensembles of trajectories illustrating double slit diffraction processes do not agree with quantum mechanics. The credibility of a theory is undermined when recognizably wrong data presented frequently over years are finally not declared obsolete.
There are people working on hidden variable theories, some of them quite prominent, such as Gerard 't Hooft, who has also discussed his views here on this site some time ago.
|
0ea4d0d16202908f | Sabulski-Reimann conjecture
From Uncyclopedia, the content-free encyclopedia
Throughout history, many people have wondered: "Could Jesus microwave a burrito so hot that he himself could not eat it?" The Sabulski-Riemann conjecture states that while "on one hand he could, there is a slight possibility he did not have the physical and mental ability to do so." The conjecture led to the ancient Chinese proverb: "It takes one to know one". Many organizations have spent centuries studying this paradox.
The Sabulski-Riemann conjecture (also called the xxx zeta hypothesis) was first formulated by Erwin Sabulski in 1859.
Historical Significance and Analysis
A $1,000,000 prize has been offered by the El Monterey Institute for a proof of the conjecture.
Most theologians believe that the Sabulski-Riemann conjecture can be proven, although a number of detractors remain skeptical.
Sabulski
Sabulski's hypothesis is based on quantum superposition, which posits merely that when a burrito is not measured, it exists in all possible configurations. But Sabulski knew better. He knew that the burrito also exists in all impossible configurations. As a corollary we can assert that when the object is measured, it collapses into a single state, which may be possible or impossible.
Sabulski's hypothesis is an extension of the Schrödinger equation in quantum mechanics, generalized as the probability with which an object will collapse into one particular state upon observation.
If Jesus is in fact omnipotent, then Jesus can prevent others from observing it. Thus, Jesus could then both heat a burrito so hot that He can't eat it, and immediately thereafter eat said burrito, and because others could not observe Him doing so, there would be no way to confirm the outcome of events.
Bernhard Riemann
A lovely french female shows off the french bikini.
While on the other hand Riemann's opus follows from the teachings of Descartes' view, that Jesus can do the logically impossible as shown below:
1. Jesus heats a burrito so hot He can't eat it.
2. Jesus eats the burrito
3. Jesus washes it down with a Squishy.
Presumably, such a being could also make the sum 2 + 2 = 5 become mathematically possible, or could create a square circle.
In the words of Harold Bolles, "If an omnipotent being can do what is logically impossible, then he can not only create situations which he cannot handle, but also since he is not bound by the limits of consistency, he can handle situations which he cannot handle."
However, this attempt to resolve the paradox is problematic in that the definition itself forgoes logical consistency. The paradox may be solved, but at the expense of rendering logic futile, unnecessary, or meaningless. Thus it can be shown that defining such a being nears impossibility, as said being transcends logic.
Lao Tzu
The Chinese proverb, "It takes one to know one", which historians attribute with certainty to Lao Tzu, was in fact an early manifestation of the Heisenberg Uncertainty Principle.
"It takes one to know one", refers to the fact that only an omnipotent being could really truly know another omnipotent being, such as Jesus. So logically, it would follow that only an omnipotent being would be able to watch Jesus microwave a burrito AND watch Him eat it.
- Lao Tzu was a skilled water buffalo rider.
- We cannot be certain if Lao Tzu actually existed or not.
- Similarly, he (Lao Tzu) is uncertain as to whether we exist or not.
Conclusion
Bernhard Riemann mentioned the conjecture that became known as the Riemann hypothesis in his 1859 paper Effects of High Powered Microwave Energy on a Frozen Burrito, and Subsequent Edibility Thereof, but as it was not essential to his central purpose in that paper, he did not include a proof. Riemann knew that burritos could be heated to extreme temperatures, and he knew that burritos were indeed edible under most circumstances. The mathematical heart of the Sabulski-Riemann conjecture speculates that, based on the distribution of the zeros of the Riemann Zeta Function ζ(s), the tasty and edible part of any non-trivial microwaved burrito can be estimated as ½.
David Hilbert
In 1900, David Hilbert (you recall Hilbert of course... what, you don't remember Hilbert? Jezus Fucknuts, I swear it's like discoursing with a fucking brick-wall here!) referenced the Sabulski-Riemann conjecture in his famous list of 23 Unresolvenable Problems. In problem #8 he wrote: "The burrito! No other question has ever moved so profoundly the spirit of man. Wir müssen wissen. Wir werden wissen. If I were to awaken after having slept for a thousand years, my first question would be: I'm so hungry, does anyone have a burrito?"
|
0f58a94f9d86a7e6 | The trouble with the aufbau principle
Generations of teachers are misleading their charges by teaching a sloppy version of the aufbau principle, claims Eric Scerri
The use of the aufbau principle to predict electronic configurations of atoms, and therefore explain the layout of the periodic table, is a key point when teaching chemistry. However, the version of this method that has been taught to generations of students is actually deeply flawed. The error is rather subtle and may well have arisen from an attempt to simplify matters.
Illustration of electron orbitals
Starting at the beginning
The aufbau method was initially proposed by the Danish physicist Niels Bohr, who was the first person to use quantum mechanics to study atomic structure. He was also one of the first to fundamentally explain the periodic table in terms of arrangements of electrons (electronic configurations). Bohr proposed that the atoms of the periodic table can be thought of as being progressively built up one electron at a time: starting from the simplest atom of all, hydrogen with just one electron, moving onto helium with two electrons, lithium with three, all the way to uranium – which at that time (1913) was the heaviest known atom – with 92 electrons.
The next ingredient is a knowledge of the atomic orbitals into which the electrons are progressively placed. These orbitals, at least in their simplest form, nowadays come from solving the Schrödinger equation for the hydrogen atom.
The orbitals
The different atomic orbitals come in various kinds that are distinguished by labels such as s, p, d and f. Each shell of electrons can be broken down into various orbitals and as we move away from the nucleus each shell contains a progressively larger number of types of orbital: the first shell only contains a 1s orbital, the second shell 2s and 2p orbitals, the third shell 3s, 3p and 3d orbitals, the fourth shell 4s, 4p, 4d and 4f orbitals and so on.
Next, we need to know how many of these orbitals occur in each shell. The answer is provided by the formula 2l + 1, where l takes different values depending on whether we are speaking of s, p, d or f orbitals. For s orbitals l = 0, for p orbitals l = 1, for d orbitals l = 2 and so on. As a result there is potentially one s orbital, three p orbitals, five d orbitals, seven f orbitals and so on for each shell.
The flaw
The aufbau diagram lies at the heart of the trouble
So far, so good. Now comes the magic ingredient in the sloppy version of this principle that claims to predict the order in which these orbitals fill (and here is where the fallacy lurks): rather than filling the shells around the nucleus in a simple sequence, where each shell must fill completely before moving onto the next shell, we are told that the correct procedure is more complicated. But we are also reassured that there is a nice simple pattern that governs the order of shell and consequently of orbital filling. This is demonstrated using the aufbau diagram, which lies at the heart of the trouble.
The order of filling is said to be obtained by starting at the top of the diagram and following the arrows. This process gives the order of filling of orbitals with electrons according to this sequence:
1s > 2s > 2p > 3s > 3p > 4s > 3d > 4p > 5s > 4d
and so on.
This diagram, when combined with a knowledge of how many electrons can be accommodated in each kind of orbital and the number of such available orbitals in each shell, is now supposed to give us a prediction of the complete electronic configuration of all but about 20 atoms in which further irregularities occur, such as the cases of chromium and copper. But let’s not get sidetracked by these anomalies and instead concentrate on a far deeper problem with this approach.
Some examples
Let’s consider a few examples. The atom of magnesium has a total of 12 electrons. Using the aufbau diagram we obtain an electronic configuration of 1s2, 2s2, 2p6, 3s2 in beautiful agreement with experiments that can examine the configuration directly by looking at the spectra of atoms. Another example is calcium, which has 20 electrons. This method gives a configuration of 1s2, 2s2, 2p6, 3s2, 3p6, 4s2 and once again there is perfect agreement with experiments on the spectrum of calcium atoms.
But now let’s see what happens for the next atom, scandium, with its 21 electrons. According to the aufbau diagram the configuration should be 1s2, 2s2, 2p6, 3s2, 3p6, 4s2, 3d1 and indeed it is. But conventional wisdom claims that the final electron to enter the atom of scandium is a 3d electron, when experiments indicate that the 3d orbital is filled before the 4s orbital.
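For readers who want the recipe spelled out mechanically, here is a short Python sketch (an illustration added here, not something from the article): it fills electrons strictly in the order the diagram's arrows trace out (sort orbitals by n + l, then by n) and reproduces the overall configurations quoted above for magnesium, calcium and scandium, while of course saying nothing correct about the order in which the orbitals actually fill.

def aufbau_configuration(n_electrons, max_n=7):
    letters = "spdfg"
    # orbitals sorted by n + l, then by n: exactly the order the aufbau arrows trace out
    orbitals = sorted(((n, l) for n in range(1, max_n + 1) for l in range(n)),
                      key=lambda nl: (nl[0] + nl[1], nl[0]))
    config, remaining = [], n_electrons
    for n, l in orbitals:
        if remaining == 0:
            break
        capacity = 2 * (2 * l + 1)        # 2 electrons per orbital, 2l + 1 orbitals of this type
        filled = min(capacity, remaining)
        config.append(f"{n}{letters[l]}{filled}")
        remaining -= filled
    return " ".join(config)

for name, electrons in [("Mg", 12), ("Ca", 20), ("Sc", 21)]:
    print(name, aufbau_configuration(electrons))
# Mg 1s2 2s2 2p6 3s2
# Ca 1s2 2s2 2p6 3s2 3p6 4s2
# Sc 1s2 2s2 2p6 3s2 3p6 4s2 3d1   (overall occupancy "right", order of filling wrong)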
Why the mistake occurs
But how can this apparently blatant mistake have occurred and taken root in chemical education circles? The answer lies with the fact that the aufbau diagram gives the overall configuration correctly in all but about 20 cases (see Anomalous electronic configurations). It is only when one questions the order of filling that this approach gives the wrong answer.
Unfortunately, sticking to this way of teaching electronic configurations has led many teachers and textbooks to invent all kinds of contorted schemes to explain why even though the 4s orbital fills preferentially (as it does if you follow the aufbau diagram) it is also the 4s electron that is preferentially ionised to form an ion of Sc+. These explanations are all incorrect, since the 4s orbital actually fills last and consequently it is perfectly natural that it should be the first orbital to lose an electron on forming a positive ion.
Anomalous electronic configurations
Grouped by periods and shown in correct order of orbital filling
Chromium [Ar] 3d5 4s1
Copper [Ar] 3d10 4s1
Niobium [Kr] 4d4 5s1
Molybdenum [Kr] 4d5 5s1
Ruthenium [Kr] 4d7 5s1
Rhodium [Kr] 4d8 5s1
Palladium [Kr] 4d10 5s0
Silver [Kr] 4d10 5s1
Lanthanum [Xe] 5d1 6s2
Cerium [Xe] 4f1 5d1 6s2
Gadolinium [Xe] 4f7 5d1 6s2
Platinum [Xe] 4f14 5d9 6s1
Gold [Xe] 4f14 5d10 6s1
Actinium [Rn] 6d1 7s2
Thorium [Rn] 6d2 7s2
Protactinium [Rn] 5f2 6d1 7s2
Uranium [Rn] 5f3 6d1 7s2
Neptunium [Rn] 5f4 6d1 7s2
Curium [Rn] 5f7 6d1 7s2
Examining the evidence
Atomic orbitals for scandium ions
One source of proof that the sloppy version of the aufbau principle is wrong comes from examining the experimental spectral evidence from the ions of transition metal atoms.2 Using scandium as an example:
• Sc3+ has the configuration 1s2, 2s2, 2p6, 3s2, 3p6, 3d0, 4s0
• Sc2+ is 1s2, 2s2, 2p6, 3s2, 3p6, 3d1, 4s0
• Sc1+ is 1s2, 2s2, 2p6, 3s2, 3p6, 3d1, 4s1
• Sc is 1s2, 2s2, 2p6, 3s2, 3p6, 3d1, 4s2
On moving from the Sc3+ ion to that of Sc2+ it is clear that the additional electron enters a 3d orbital and not a 4s orbital as the aufbau diagram dictates. Similarly on moving from this ion to the Sc1+ ion the additional electron enters a 4s orbital as it does in finally arriving at neutral scandium atom or Sc. Similar patterns and sequences are observed for the subsequent atoms in the periodic table including titanium, vanadium, chromium (with further complications as mentioned before), manganese and so on. Not only is it not possible to predict the configuration in any of the transition metals, but the aufbau diagram also falls down for the lanthanides, and even the p-block elements.
And there’s more
Returning to scandium, and now following the correct version of the aufbau principle where it is accepted that energy levels are filled with electrons in order of decreasing stability, you may have noticed the configurations mentioned before looked rather odd. Because, according to this approach, the 3d orbitals should have a lower energy than 4s. Therefore when predicting the way that the electrons fill in scandium, we might suppose that the final three electrons after the core argon configuration 1s2, 2s2, 2p6, 3s2, 3p6 would all enter into some 3d orbitals to give 1s2, 2s2, 2p6, 3s2, 3p6, 3d3. As noted earlier, the observed configuration of the neutral atom is 1s2, 2s2, 2p6, 3s2, 3p6, 3d1, 4s2.
Palladium metal – [Kr] 4d10 5s0 – a double anomaly?
It is natural to question why one or two electrons are usually pushed into a higher energy orbital. The answer is because 3d orbitals are more compact than 4s, and as a result any electrons entering 3d orbitals will experience greater mutual repulsion. The slightly unsettling feature is that although the relevant s orbital can relieve such additional electron-electron repulsion, different atoms do not always make full use of this form of sheltering because the situation is more complicated than just described. One thing to consider is that nuclear charge increases as we move through the atoms, and there is a complicated set of interactions between the electrons and the nucleus as well as between the electrons themselves. This is what ultimately produces an electronic configuration and, contrary to what some educators may wish for, there is no simple qualitative rule of thumb that can cope with this complicated situation.
For example, it appears that the most stable configuration for atoms of chromium, copper, niobium, molybdenum, ruthenium, rhodium, silver, platinum and gold involves only moving one electron into an s orbital. The case of palladium is even more unexpected because it is the one instance where no electrons are promoted up to the less stable s orbital. Palladium can be said to be doubly anomalous.
Bottom line
In my opinion, there is no reason for chemistry teachers and textbook authors to continue to teach the sloppy version of the aufbau principle. Not only does it give false predictions regarding the order of electron filling in atoms, but it also leads teachers and textbook authors to perpetuate further educational inaccuracies.
It is high time that the teaching of aufbau and electronic configurations were carried out properly in order to reflect the truth of the matter rather than taking a shortcut and compounding it with a further imaginary story. At present very few books and sources give a correct account.3
The simple fact is that the 4s orbital fills last and so quite reasonably also ionises first. Interestingly the truth turns out to be simpler than the textbook fiction and the use of the sloppy version of the aufbau.
Eric Scerri is a lecturer and author in the department of chemistry and biochemistry at the University of California, Los Angeles, US
1. C Moore, Atomic Energy Levels, Vol 1, US Bureau of Standards, 1949
2. S Glasstone, Textbook of physical chemistry, 1946; D W Oxtoby, H P Gillis, A Campion, Principles of modern chemistry, 2007; E R Scerri, A tale of seven elements, 2013; S-G Wang and W H E Schwarz, Angew. Chem. Int. Ed. 2009, 48, 3404 (DOI: 10.1002/ange.200800827) |
eff50fc02d7c66db |
Advances in High Energy Physics
Volume 2011 (2011), Article ID 259025, 62 pages
Review Article
Holography at Work for Nuclear and Hadron Physics
Asia Pacific Center for Theoretical Physics and Department of Physics, Pohang University of Science and Technology, Pohang, Gyeongbuk 790-784, Republic of Korea
Received 1 July 2011; Accepted 31 August 2011
Academic Editor: Mark Mandelkern
Copyright © 2011 Youngman Kim and Deokhyun Yi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The purpose of this review is to provide basic ingredients of holographic QCD to nonexperts in string theory and to summarize its interesting achievements in nuclear and hadron physics. We focus on results from a less stringy bottom-up approach and review a stringy top-down model with some calculational details.
1. Introduction
The approaches based on the Anti de Sitter/conformal field theory (AdS/CFT) correspondence [1–3] open many interesting possibilities for exploring strongly interacting systems. The discovery of D-branes in string theory [4] was a crucial ingredient in putting the correspondence on a firm footing. Typical examples of strongly interacting systems are dense baryonic matter, stable/unstable nuclei, strongly interacting quark gluon plasma, and condensed matter systems. The moral is to introduce an additional space, which roughly corresponds to the energy scale of the 4D boundary field theory, and to try to construct a 5D holographic dual model that captures certain nonperturbative aspects of the strongly coupled field theory, which are highly nontrivial to analyze in conventional quantum field theory based on perturbative techniques. There are in general two different routes to modeling a holographic dual of quantum chromodynamics (QCD). One way is a top-down approach based on stringy D-brane configurations. The other way is a so-called bottom-up approach, in which a 5D holographic dual is constructed from QCD. Despite the fact that this bottom-up approach is somewhat ad hoc, it reflects some important features of the gauge/gravity duality and is rather successful in describing properties of hadrons. However, we should keep in mind that the usual simple, tree-level analysis in a holographic dual model, both top-down and bottom-up, captures only the leading large-\(N_c\) contributions, and we are bound to suffer from subleading corrections.
The goal of this review is twofold. First, we will assemble results, mostly from simple bottom-up models, in nuclear and hadron physics. Surely we cannot cover them all here; we restrict ourselves to selected physical quantities discussed in the bottom-up model. The selection of the topics is based on the authors' personal bias. Second, we present some basic materials that might be useful for understanding some aspects of AdS/CFT and D-brane models. We will focus on the role of the AdS/CFT correspondence in low-energy QCD. Although the correspondence between QCD and a gravity theory is not known, we can obtain much insight into QCD through the gauge/gravity duality.
We organize this review as follows. Section 2 reviews the gauge/gravity duality. Section 3 briefly discusses developments of holographic QCD and demonstrates how to build up a bottom-up model using the AdS/CFT dictionary. After discussing the gauge/gravity duality and modeling in the bottom-up approach, we proceed with selected physical quantities. In each section, we show results mostly from the bottom-up approach and list some from the top-down model. Section 4 deals with vacuum condensates of QCD in holographic QCD. We will mainly discuss the gluon condensate and the quark-gluon mixed condensate. Section 5 collects some results on hadron spectroscopy and form factors from the bottom-up model. Contents are glueballs, light mesons, heavy quarkonium, and hadron form factors. Section 6 is about QCD at finite temperature and density. We consider the QCD phase transition and dense matter. Section 7 is devoted to some general remarks on holographic QCD and to a list of a few topics that are not discussed properly in this article. Due to our limited knowledge, we are not able to cover all interesting works done in holographic QCD. To compensate for this defect partially, we will list some recent review articles on holographic QCD.
In Appendices AF, we look back on some basic materials that might be useful for nonexperts in string theory to work in holographic QCD. In Appendix A: we review the relation between the bulk mass and boundary operator dimension. In Appendix B: we present a D3/D7 model and axial U(1) symmetry in the model. In Appendix C: we discuss non-Abelian chiral symmetry based on D4/D8/D8 model. In Appendices D and E: we describe how to calculate the Hawking temperature of an AdS black hole. In Appendix F: we encapsulate the Hawking-Page transition and sketch how to calculate Polyakov loop expectation value in thermal AdS and AdS black hole.
We close this section with a cautionary remark. Though it is tempting to argue that holographic QCD is dual to real QCD, what we mean by QCD here might be mostly QCD-like, or a cousin of QCD.
2. Introduction to the AdS/CFT Correspondence
The AdS/CFT correspondence, first suggested by Maldacena [1], is a duality between gravity theory in anti de Sitter space (AdS) background and conformal field theory (CFT). The original conjecture states that there is a correspondence between a weakly coupled gravity theory (type IIB string theory) on AdS5×𝑆5 and the strongly coupled 𝒩=4 supersymmetric Yang-Mills theory on the four-dimensional boundary of AdS5. The strings reside in a higher-dimensional curved spacetime and there exists some well-defined mapping between the objects in the gravity side and the dual objects in the four-dimensional gauge theory. Thus, the conjecture allows the use of non-perturbative methods for strongly coupled theory through its gravity dual.
2.1. D𝑝 Brane Dynamics
The duality emerges from a careful consideration of D-brane dynamics. A D\(p\) brane sweeps out a \((p+1)\)-dimensional world-volume in spacetime. Introducing D branes gives open string modes whose endpoints lie on the D branes, and the open string spectrum consists of a finite number of massless modes and an infinite tower of massive modes. The open string end points can move only in the \((p+1)\) directions parallel to the brane, see Figure 1(a), and a D\(p\) brane can be seen as a point along its transverse directions. The dynamics of the D\(p\) brane is described by the Dirac-Born-Infeld (DBI) action [5] plus a Chern-Simons term:
\[
S_{\mathrm{D}p} = -T_p \int d^{p+1}x\, e^{-\phi}\sqrt{-\det\left(P[g]_{ab} + 2\pi\alpha' F_{ab}\right)} + S_{CS}, \qquad (2.1)
\]
with a dilaton factor \(e^{-\phi}\). Here \(g_{ab}\) is the induced metric on the D\(p\) brane, \(P\) denotes the pullback, and \(F_{ab}\) is the world-volume field strength. \(T_p\) is the tension of the brane, which has the following form:
\[
T_p = \frac{1}{(2\pi)^p g_s l_s^{p+1}} = \frac{1}{(2\pi)^p g_s \alpha'^{(p+1)/2}}, \qquad (2.2)
\]
and it is the mass per unit spatial volume. Here \(g_s\) is the string coupling and \(l_s\) is the string length. \(\alpha'\) is the Regge slope parameter, related to the string length scale by \(l_s = \sqrt{\alpha'}\). In general, states in the closed string spectrum contain a finite number of massless modes and an infinite tower of massive modes with masses of order \(m_s = l_s^{-1} = \alpha'^{-1/2}\). Thus, at low energies \(E\ll m_s\), the higher-order corrections come in powers of \(\alpha' E^2\) from integrating out the massive string modes. If there is a stack of \(N_c\) D branes, the open strings stretching between different branes give a non-Abelian \(U(N_c)\) gauge group; see Figure 1(b). In the low-energy limit, we can integrate out the massive modes to obtain a non-Abelian gauge theory of the massless fields.
Figure 1: The configurations of 𝑁𝑐 stack of D3 branes in 10d spacetime. (a) D3 branes sweep (3+1) dimensions in (9+1) space time. (b) 𝑁𝑐 = 3 stack of D3 branes and all the possible classes of open strings.
Now, we take \(p=3\) and consider \(N_c\) D3 brane stacks in type IIB theory. The low-energy effective action of this configuration gives a non-Abelian gauge theory with \(U(N_c)\) gauge group. In addition, this gauge group can be factorized into \(U(N_c) = SU(N_c)\times U(1)\), and the \(U(1)\) part, which describes the center of mass motion of the D3 branes, can be decoupled by the global translational invariance. The remaining subgroup \(SU(N_c)\) describes the dynamics of the branes relative to each other. Therefore we see that in the low-energy limit, the massless open string modes on \(N_c\) stacks of D3-branes constitute \(\mathcal{N}=4\) \(SU(N_c)\) Yang-Mills theory [6] with 16 supercharges in (3+1) spacetime. From (2.1) for \(p=3\), we obtain the effective Lagrangian at low energies up to the two-derivative order:
\[
\mathcal{L} = \frac{1}{4\pi g_s}\,\mathrm{Tr}\left( -\frac{1}{4}F_{\mu\nu}F^{\mu\nu} + \frac{1}{2}D_\mu\phi^i D^\mu\phi^i - \frac{1}{4}\left[\phi^i,\phi^j\right]^2 + \frac{i}{2}\bar\Psi^I\Gamma^\mu D_\mu\Psi^I - \frac{i}{2}\bar\Psi^I\Gamma^i\left[\phi^i,\Psi^I\right]\right), \qquad (2.3)
\]
with a gauge field \(A_\mu\), six scalar fields \(\phi^i\), and four Weyl fermions \(\Psi^I\).
In fact, the original system also contains closed string states. The higher-order derivative corrections for the Lagrangian (2.3) come both in powers of 𝛼𝐸2 from the massive modes and powers of the string coupling 𝑔𝑠𝑁𝑐 for loop corrections. It is known that the string coupling constant 𝑔𝑠 is related by the 10-dimensional gravity constant as 𝐺(10)𝑔2𝑠𝑙8𝑠 and thus the dimensionless string coupling is of order 𝐺(10)𝐸8, which is negligible in the low-energy limit. Therefore, at low energies closed strings are decoupled from open strings and the physics on the 𝑁𝑐 D3 branes is described by the massless 𝒩=4 super Yang-Mills theory with gauge group 𝑆𝑈(𝑁𝑐).
2.2. AdS5×𝑆5 Geometry
Now we view the same system from a different angle. Since D branes are massive and carry energy and Ramond-Ramond (RR) charge, \(N_c\) D3 branes deform the spacetime around them to make a curved geometry. Note that the total mass of a D3 brane is infinite because its world-volume is infinite in extent, but the tension, or the mass per unit three-volume of the D3 brane,
\[
T_3 = \frac{1}{(2\pi)^3 g_s l_s^4}, \qquad (2.4)
\]
is finite.
In the flat spacetime, the circumference of the circle surrounding an origin at a distance 𝑟 is 2𝜋𝑟, and it simply shrinks to zero if one approaches the origin. But if there is a stack of D3 branes, it deforms the spacetime and makes throat geometry along its transverse directions. Thus, near the D3 branes, the radius of a circle around the stack approaches a constant 𝑅, an asymptotical infinite cylinder structure, or AdS5×𝑆5; see Figure 2(b). The 𝑁𝑐 D3 brane stack is located at the infinite end of the throat and this infinite end is called the “horizon”. In the near horizon geometry, a D3 brane is surrounded by a five-dimensional sphere 𝑆5.
Figure 2: The two descriptions of the 𝑁𝑐 D3 configuration. (a) Flat spacetime for 𝑟𝑅. 𝑁𝑐 D3 branes deform the spacetime.
To be more specific, let us start with type IIB string theory for \(p=3\). We find a black-hole-type solution which carries charge with respect to the RR four-form potential. The theory contains D3 branes, which are electrically (and, by self-duality, magnetically) charged under the potential \(A_4\); the five-form field strength \(F_5 = dA_4\) is self-dual, \(F_5 = \star F_5\). The low-energy effective action is
\[
S = \frac{1}{(2\pi)^7 l_s^8}\int d^{10}x\,\sqrt{-g}\left[e^{-2\phi}\left(R + 4(\partial\phi)^2\right) - \frac{2}{5!}F_5^2\right]. \qquad (2.5)
\]
We assume that the metric is spherically symmetric in seven dimensions with the RR source at the origin; then the \(N_c\) parameter appears in terms of the five-form RR field strength on the five-sphere as
\[
\int_{S^5} F_5 = N_c, \qquad (2.6)
\]
where \(S^5\) is the five-sphere surrounding the source for the four-form field \(C_4\). Now by using the Euclidean symmetry we get the curved metric solution [7–9] for the D3 brane:
\[
ds^2 = f(r)^{-1/2}\,\eta_{\mu\nu}dx^\mu dx^\nu + f(r)^{1/2}\left(dr^2 + r^2 d\Omega_5^2\right), \qquad (2.7)
\]
where
\[
f(r) = 1 + \frac{R^4}{r^4} \qquad (2.8)
\]
with the radius of the horizon \(R\):
\[
R^2 = \sqrt{4\pi g_s N_c}\,\alpha' = \sqrt{4\pi g_s N_c}\,l_s^2. \qquad (2.9)
\]
\(d\Omega_5\) is the five-sphere metric. For \(r\gg R\) we have \(f(r)\approx 1\) and the spacetime becomes flat with a small correction \(R^4/r^4 = 4\pi g_s N_c l_s^4/r^4\). This factor can be interpreted as a gravitational potential since \(G^{(10)}\sim g_s^2 l_s^8\) and \(M_{\mathrm{D3}}\sim N_c T_3\sim N_c/(g_s l_s^4)\), and thus \(R^4/r^4\sim G^{(10)}M_{\mathrm{D3}}/r^4\). In the near horizon limit, this gravitational effect becomes strong and the metric changes into
\[
ds^2 = \frac{r^2}{R^2}\,\eta_{\mu\nu}dx^\mu dx^\nu + \frac{R^2}{r^2}dr^2 + R^2 d\Omega_5^2. \qquad (2.10)
\]
This is AdS\(_5\times S^5\).
The geometry by the D3 branes is sketched in Figure 2(b). Far away from the D3 brane stacks, the spacetime is flat (9+1)-dimensional Minkowski spacetime and the only modes which survive in the low-energy limit are the massless-closed string (graviton) multiplets, and they decouple from each other due to weak interactions. On the other hand, close to the D3 branes, the geometry takes the form AdS5×𝑆5 and the whole tower of massive modes exists there. This is because the excitations seen from an observer at infinity are close to the horizon and a closed string mode in a throat should go over a gravitational potential to meet the asymptotic flat region. Therefore, as we focus on the lower-energy limit, the excitation modes should be originated deeper in the throat, and then they decouple from the ones in the flat region. Thus in the low-energy limit the interacting sector lives in AdS5×𝑆5 geometry.
2.3. The Gauge/Gravity Duality
So far, we have considered two seemingly different descriptions of the 𝑁𝑐 D3 brane configuration. As we mentioned, each of the D3 branes carries the gravitational degrees of freedom in terms of its tension, or the string coupling 𝑔𝑠 as in (2.4). So the strength of the gravity effect due to 𝑁𝑐 stacks of D3 branes depends on the parameter 𝑔𝑠𝑁𝑐.
If 𝑔𝑠𝑁𝑐1, from (2.9) we see that 𝑅𝑙𝑠 and therefore the throat geometry effect is less than string length scale. Thus the spacetime is nearly flat and the fluctuations of the D3 branes are described by open string states. In this regime the string coupling 𝑔𝑠 is small and the closed strings are decoupled from the open strings. Here the closed string description is inapplicable since one needs to know about the geometry below the string length scale. If we take the low-energy limit, the effective theory, which describes the open string modes, is 𝒩=4 super Yang-Mills theory with 𝑆𝑈(𝑁𝑐) gauge group.
On the other hand, if 𝑔𝑠𝑁𝑐1, then the back-reaction of the branes on the background becomes important and spacetime will be curved. In this limit the closed string description reduces to classical gravity which is supergravity theory in the near horizon geometry. Here the open string description is not feasible because 𝑔𝑠𝑁𝑐 is related with the loop corrections and one has to deal with the strongly coupled open strings. Again, if we take the low-energy limit, the interaction is described by the type IIB string theory in the near-horizon geometry, AdS5×𝑆5.
The gauge/gravity correspondence is nothing but the conjecture connecting these two descriptions of \(N_c\) D3 branes in the low-energy limit. It is a duality between the \(\mathcal{N}=4\) super-Yang-Mills theory with gauge group \(SU(N_c)\) and the type IIB closed string theory in AdS\(_5\times S^5\); see Figure 3. The relation between the Yang-Mills coupling \(g_{\mathrm{YM}}\) and the string coupling strength \(g_s\) is given by
\[
g^2_{\mathrm{YM}} = 4\pi g_s, \qquad \left(\frac{R}{l_s}\right)^4 = 4\pi g_s N_c. \qquad (2.11)
\]
Then, the 't Hooft coupling \(\lambda = g^2_{\mathrm{YM}}N_c\) can be expressed in terms of the string length scale:
\[
\lambda = \left(\frac{R}{l_s}\right)^4. \qquad (2.12)
\]
Therefore, the dependence on \(g_s N_c\) becomes the question of whether the 't Hooft coupling is large or small, or whether the gauge theory is strongly or weakly coupled.
Figure 3: The sketch of the AdS/CFT correspondence.
The two descriptions can be viewed as two extremes of \(r\). For the sake of convenience, we use the coordinate \(z = R^2/r\). Then, the AdS\(_5\times S^5\) metric (2.10) becomes
\[
ds^2 = \frac{R^2}{z^2}\left(\eta_{\mu\nu}dx^\mu dx^\nu + dz^2\right) + R^2 d\Omega_5^2, \qquad (2.13)
\]
which shows the conformal equivalence between AdS\(_5\) and flat spacetime more clearly. In (2.13), each \(z\)-slice of AdS\(_5\) is isometric to four-dimensional Minkowski spacetime. In this coordinate, \(z=0\) is the boundary of AdS\(_5\), where the Yang-Mills theory lives, identifying \(x^\mu\) as the coordinates of the gauge theory. If \(z\to\infty\), the determinant of the metric goes to zero; this is the Poincaré horizon. The factor \(R^2/z^2\) also has some relation with the energy scales. If the gauge theory side has a certain energy scale \(E\), the corresponding energy in the gravity side is \((z/R)E\). In other words, a gauge theory object with an energy scale \(E\) is associated with a bulk object localized in the \(z\)-direction at \(z\sim 1/E\) [1, 10, 11]. Therefore, the UV or high-energy limit corresponds to \(z\to 0\) (or \(r\to\infty\)) and the IR or low-energy limit corresponds to \(z\to\infty\) (or \(r\to 0\)).
The operator-field correspondence between operators in the four-dimensional gauge theory and corresponding dual fields in the gravity side was given in [2, 3]. Then the AdS/CFT correspondence can be stated as follows,
\[
\left\langle T\, e^{\int d^4x\, \phi_0(x)\mathcal{O}(x)}\right\rangle_{\mathrm{CFT}} = Z_{\mathrm{sugra}}, \qquad (2.14)
\]
where \(\phi_0(x) = \phi(x,u)\big|_{u\to\infty}\), and the string theory partition function \(Z_{\mathrm{sugra}}\) at the boundary specified by \(\phi_0\) has the form
\[
Z_{\mathrm{sugra}} = e^{-S_{\mathrm{sugra}}(\phi(x,u))}\Big|_{u\to\infty}. \qquad (2.15)
\]
The relation (2.14) implies that the generating functional of gauge-invariant operators in the CFT can be matched with the generating functional for tree diagrams in supergravity.
3. Holographic QCD
Ever since the advent of the AdS/CFT correspondence, there have been many efforts, based on the correspondence, to study nonperturbative physics of strongly coupled gauge theories in general and QCD in particular.
Witten proposed [12] that we can extend the correspondence to non-supersymmetric theories by considering the AdS black hole and showed that this supergravity treatment qualitatively well describes strong coupled QCD (or QCD-like) at finite temperature: for instance, the area law behavior of Wilson loops, confinement/deconfinement transition of pure gauge theory through the Hawking-Page transition, and the mass gap for glueball states. In [13] symmetry breaking by expectation values of scalar fields were analyzed in the context of the AdS/CFT correspondence, which is essential to encode the spontaneous breaking of chiral symmetry in a holographic QCD model. Regular supergravity backgrounds with less supersymmetries corresponding to dual confining 𝒩=1 super-Yang-Mills theories were proposed in [14, 15]. It has been shown by Polchinski and Strassler [16] that the scaling of high-energy QCD scattering amplitudes can be obtained from a gravity dual description in a sliced AdS geometry whose IR cutoff is determined by the mass of the lightest glueball. Important progress towards flavor physics of QCD has been made by adding flavor degrees of freedom in the fundamental representation of a gauge group to the gravity dual description [17]. Chiral symmetry breaking and meson spectra were studied in a nonsupersymmetric gravity model dual to large 𝑁𝑐 nonsupersymmetric gauge theories [18], where flavor quarks are introduced by a D7-brane probe on deformed AdS backgrounds. Using a D4/D6 brane configuration, the authors of [19] explored the meson phenomenology of large 𝑁𝑐 QCD together with 𝑈(1)A chiral symmetry breaking. They showed that the chiral condensate scales as 1/𝑚𝑞 for large 𝑚𝑞. A remarkable observation made in [19] is that in addition to the confinement/deconfinement phase transition the model exhibits a possibility that another transition set by 𝑇fund could happen in deconfined phase, 𝑇>𝑇deconf, where 𝑇deconf=𝑀KK/(2𝜋). Since the value of 𝑀KK is around 1 GeV, we can estimate 𝑇deconf160MeV. In this case for 𝑇deconf<𝑇<𝑇fund there exist free unbound quarks and meson bound sates of heavy quarks and above 𝑇fund the meson states dissociate into free quarks, which in some sense mimics the dissociation of heavy quarkonium in quark-gluon plasma (QGP). However, we should note that meson bound states in Dp/Dq systems are deeply bound, while the heavy quarkonia in QCD are shallow bound states. In this sense the bound state that disappears above 𝑇fund could be that of strange quarks rather than charmonium or bottomonium [20].
To attain a realistic gravity dual description of (large 𝑁𝑐) QCD, non-Abelian chiral symmetry is an essential ingredient together with confinement. Holographic QCD models, which are equipped with the correct structure for the problem, namely, chiral symmetry and confinement, have been suggested in top-down and bottom-up approaches. They are found to be rather successful for various hadronic observables and for certain processes dominated by large 𝑁𝑐. Based on a D4/D8/D8 model, Sakai and Sugimoto studied hadron phenomenology in the chiral limit 𝑚𝑞=0, and the chiral symmetry breaking geometrically [21, 22]. More phenomenological holographic QCD models were proposed [2325]. In [23, 24], chiral symmetry breaking is realized by a nonzero chiral condensate whose value is fitted to meson data from experiments. Hadronic spectra and light-front wave functions were studied in [26] based on the “Light-Front Holography” which maps amplitudes in extra dimension to a Lorentz invariant impact separation variable 𝜁 in Minkowski space at fixed light-front time. Light-Front Holography has led to many successful applications in hadron physics including light-quark hadron spectra, meson and baryon form factors, the nonperturbative QCD coupling, and light-front wave-functions; see [2729] for a review on this topic. In [30], a relation between a bottom-up holographic QCD model and QCD sum rules was analyzed.
Now, we demonstrate how to construct a bottom-up holographic QCD model by looking at a low-energy QCD. For illustration purposes, we compare our approach with the (gauged) linear sigma model. The D3/D7 model is summarized in Appendix B with some calculational details. For a review of the linear sigma model, we refer to [31]. Some material in this section is taken from [32]. Suppose that we are interested in two-flavor QCD at low energy, roughly below 1 GeV. In this regime usually we resort to the effective models or theories of QCD for analytic studies since the QCD lagrangian does not help much.
To construct the holographic QCD model dual to two-flavor low-energy QCD with chiral symmetry, we first choose relevant fields. To do this, we consider composites of quark fields that have the same quantum numbers as the hadrons of interest. For instance, in the linear sigma model we introduce pion-like and sigma-like fields: \(\pi\sim\bar q\tau\gamma_5 q\) and \(\sigma\sim\bar q q\), where \(\tau\) is the Pauli matrix for isospin. In the AdS/CFT dictionary, this procedure may be dubbed the operator/field correspondence: a one-to-one mapping between gauge-invariant local operators in the gauge theory and bulk fields on the gravity side. Then we introduce
\[
\bar q_L\gamma^\mu t^a q_L \leftrightarrow A^a_{L\mu}(x,z), \qquad \bar q_R\gamma^\mu t^a q_R \leftrightarrow A^a_{R\mu}(x,z), \qquad \bar q^\alpha_R q^\beta_L \leftrightarrow \frac{2}{z}X^{\alpha\beta}(x,z). \qquad (3.1)
\]
An interesting point here is that the 5D mass of the bulk field is not a free parameter of the model. This bulk mass is determined by the dimension \(\Delta\) and spin \(p\) of the dual 4D operator in AdS\(_{d+1}\). For instance, consider a bulk field \(X(x,z)\) dual to \(\bar q(x)q(x)\). The bulk mass of \(X(x,z)\) is given by \(m_X^2 = (\Delta-p)(\Delta+p-d)\); with \(\Delta=3\), \(p=0\), and \(d=4\), this gives \(m_X^2 = -3\). For more details, see Appendix A.
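As a quick consistency check of this dictionary entry (standard values, worked out here for orientation rather than quoted from the text), the same formula applied to the other operators in (3.1) gives
\[
\bar q\gamma^\mu t^a q:\ \Delta = 3,\ p = 1\ \Rightarrow\ m^2 = (3-1)(3+1-4) = 0, \qquad \bar q q:\ \Delta = 3,\ p = 0\ \Rightarrow\ m_X^2 = (3-0)(3+0-4) = -3,
\]
so the flavor currents are dual to massless bulk gauge fields \(A_{L,R}\), while the scalar \(X\) carries a tachyonic mass that is still above the Breitenlohner-Freedman bound \(m^2\ge -4\) of AdS\(_5\).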
To write down the Lagrangian of the linear sigma model, we consider the (global) chiral symmetry of QCD. Since the light quark mass \(\sim 10\) MeV is negligible compared to the QCD scale \(\Lambda_{\mathrm{QCD}}\sim 200\) MeV, we may consider the exact chiral symmetry of QCD and treat the quark mass effect perturbatively. Under the axial transformation, \(q\to e^{i\gamma_5\tau\cdot\theta/2}q\), the pion-like and sigma-like states transform as \(\pi\to\pi+\theta\sigma\) and \(\sigma\to\sigma-\theta\pi\). From this, we can obtain terms that respect chiral symmetry, such as \(\pi^2+\sigma^2\). Similarly we ask the holographic QCD model to respect the chiral symmetry of QCD. In AdS/CFT, however, a global symmetry in the gauge theory corresponds to a local symmetry in the bulk, and therefore the corresponding holographic QCD model should possess local chiral symmetry. In this way vector and axial-vector fields naturally fit into the chiral Lagrangian in the bulk as the gauge bosons of the local chiral symmetry.
We keep the chiral symmetry in the Lagrangian since it will be spontaneously broken. Then we should ask how to realize the spontaneous chiral symmetry breaking. In the linear sigma model, we have a potential term like \(\left((\pi^2+\sigma^2)-c^2\right)^2\) that leads to spontaneous chiral symmetry breaking due to a nonzero vacuum expectation value of the scalar field \(\sigma\), \(\langle\sigma\rangle = c\). In this case the explicit chiral symmetry breaking due to the small quark mass could be mimicked by adding a term \(\epsilon\sigma\) to the potential, which induces a finite pion mass, \(m_\pi^2\sim\epsilon/c\). In a holographic QCD model, the chiral symmetry breaking is encoded in the vacuum expectation value of a bulk scalar field dual to \(\bar q q\). For instance, in the hard wall model [23, 24], it is given by \(\langle X\rangle = m_q z + \zeta z^3\), where \(m_q\) and \(\zeta\) are proportional to the quark mass and the chiral condensate in QCD. In the D3/D7 model, chiral symmetry breaking can be realized by the embedding solution, as shown in Appendix B.
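For completeness, the form of \(\langle X\rangle\) can be checked directly (a standard asymptotic analysis, added here as an illustration): for a \(z\)-dependent scalar with \(m_X^2 = -3\) in a pure AdS\(_5\) background, the equation of motion reduces to
\[
z^2 X_0'' - 3z X_0' + 3X_0 = 0,
\]
whose independent solutions are \(z^\Delta\) with \(\Delta(\Delta-4) = -3\), i.e. \(\Delta = 1\) and \(\Delta = 3\). This is exactly why \(\langle X\rangle = m_q z + \zeta z^3\): the coefficient of \(z\) plays the role of the source (the quark mass) and the coefficient of \(z^3\) that of the response (the chiral condensate).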
The last step to get to the gravity dual to two flavor low-energy QCD is to ensure the confinement to have discrete spectra for hadrons. The simplest way to realize it might be to truncate the extra dimension at 𝑧=𝑧𝑚 such that the radial direction 𝑧 of dual gravity runs from zero to 𝑧𝑚. Since the radial direction corresponds to an energy scale of a boundary gauge theory, 1/𝑧𝑚 maps to ΛQCD.
Putting things together, we could arrive at the following bulk Lagrangian with local \(SU(2)_L\times SU(2)_R\), the hard wall model [23, 24]:
\[
S_{\mathrm{HW}} = \int d^4x\,dz\,\sqrt{g}\,\mathrm{Tr}\left[-\frac{1}{4g_5^2}\left(F_L^2+F_R^2\right) + |DX|^2 + 3|X|^2\right], \qquad (3.2)
\]
where \(D_\mu X = \partial_\mu X - iA_{L\mu}X + iXA_{R\mu}\) and \(A_{L,R} = A^a_{L,R}t^a\) with \(\mathrm{Tr}(t^a t^b) = (1/2)\delta^{ab}\). The bulk scalar field is defined by \(X = X_0\,e^{2i\pi^a t^a}\), where \(X_0\equiv\langle X\rangle\). Here \(g_5\) is the five-dimensional gauge coupling, \(g_5^2 = 12\pi^2/N_c\). The background is given by
\[
ds^2 = \frac{1}{z^2}\left(dt^2 - d\vec{x}^2 - dz^2\right), \qquad 0\le z\le z_m. \qquad (3.3)
\]
Instead of the sharp IR cutoff of the hard wall model, we may introduce a bulk potential that plays the role of a smooth cutoff. In [33], this smooth cutoff is introduced by a factor \(e^{-\Phi}\) with \(\Phi(z) = z^2\) in the bulk action, the soft wall model. The form \(\Phi(z)=z^2\) in the AdS background ensures the Regge-like behavior of the mass spectrum, \(m_n^2\sim n\). The action is given by
\[
S_{\mathrm{SW}} = \int d^4x\,dz\,e^{-\Phi}\sqrt{g}\,\mathrm{Tr}\left[-\frac{1}{4g_5^2}\left(F_L^2+F_R^2\right) + |DX|^2 + 3|X|^2\right]. \qquad (3.4)
\]
Here we briefly show how to obtain the 4D vector meson mass in the soft wall model. The vector field is defined by \(V = A_L + A_R\). With the Kaluza-Klein decomposition \(V^a_\mu(x,z) = g_5\sum_n v_n(z)\rho^a_\mu(x)\), we obtain
\[
\partial_z\left(e^{-B}\partial_z v_n\right) + m_n^2\,e^{-B}v_n = 0, \qquad (3.5)
\]
where \(B = \Phi(z) - A(z) = z^2 + \log z\) in the AdS geometry (3.3). With \(v_n = e^{B/2}\psi_n\), we transform the equation of motion into the form of a Schrödinger equation:
\[
\psi_n'' - V(z)\psi_n = -m_n^2\psi_n, \qquad (3.6)
\]
where \(V(z) = z^2 + 3/(4z^2)\). Here \(m_n\) is the mass of the vector resonances, and the \(\rho\) meson corresponds to \(n=0\). The solution is well known in quantum mechanics and the eigenvalue \(m_n^2\) is given by [33]
\[
m_n^2 = 4c\,(n+1), \qquad (3.7)
\]
where \(c\) is introduced to restore the energy dimension.
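As a numerical illustration (a minimal sketch, not part of the original derivation; the scale \(c\) is set to 1), the spectrum (3.7) can be checked by diagonalizing a finite-difference version of (3.6):

import numpy as np

dz, zmax = 0.01, 8.0
z = np.arange(dz, zmax, dz)               # grid; psi vanishes at z = 0 and z = zmax
V = z**2 + 3.0 / (4.0 * z**2)             # soft-wall vector potential, c = 1

# tridiagonal Hamiltonian for  -psi'' + V(z) psi = m^2 psi
main = 2.0 / dz**2 + V
off = -np.ones(len(z) - 1) / dz**2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

print(np.round(np.linalg.eigvalsh(H)[:4], 2))   # close to [4, 8, 12, 16], i.e. 4(n+1)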
The finite temperature could be neatly introduced by a black hole in AdS\(_{d+1}\), where \(d\) is the dimension of the boundary gauge theory. The background is given by
\[
ds^2 = \frac{1}{z^2}\left(f(z)\,dt^2 - d\vec{x}^2 - \frac{dz^2}{f(z)}\right), \qquad (3.8)
\]
where \(f(z) = 1 - z^d/z_h^d\). The temperature of the boundary gauge theory is identified with the Hawking temperature of the black hole, \(T = d/(4\pi z_h)\). In Appendices D and E, we try to explain in a comprehensive manner how to calculate the Hawking temperature of a black hole.
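The quoted temperature follows from the standard Euclidean smoothness argument, sketched here for orientation (the detailed treatment belongs to the appendices, which are not reproduced in this section). Near the horizon,
\[
f(z)\simeq f'(z_h)(z-z_h), \qquad f'(z_h) = -\frac{d}{z_h},
\]
and rewriting the Euclidean \((t,z)\) part of (3.8) in terms of \(\rho\propto\sqrt{z_h-z}\) brings it to the form \(d\rho^2 + \rho^2\bigl(|f'(z_h)|/2\bigr)^2 d\tau^2\), which is free of a conical singularity only if the Euclidean time has period \(\Delta\tau = 4\pi/|f'(z_h)|\). Hence
\[
T = \frac{1}{\Delta\tau} = \frac{|f'(z_h)|}{4\pi} = \frac{d}{4\pi z_h}.
\]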
Now we move on to dense matter. According to the AdS/CFT dictionary, a chemical potential in the boundary gauge theory is encoded in the boundary value of the time component of a bulk U(1) gauge field. To be more specific on this, we first consider the chemical potential term in the gauge theory:
\[
\mathcal{L}_\mu = \mu_q\, q^\dagger q. \qquad (3.9)
\]
Then, we introduce a bulk U(1) gauge field \(A_\mu\) which is dual to \(\bar q\gamma^\mu q\). According to the dictionary, \(A_0(z\to 0)\sim c_1 z^{d-\Delta-p} + c_2 z^{\Delta-p}\), and we have \(A_0(z\to 0)\sim\mu_q\). In the hard wall model, the solution for the bulk U(1) vector field is given by
\[
A_t(z) = \mu + \rho\,z^2, \qquad (3.10)
\]
where \(\mu\) and \(\rho\) are related to the quark chemical potential and the quark (or baryon) number density in the boundary gauge theory. It is interesting to notice that in chiral perturbation theory, a chemical potential is introduced as the time component of a gauge field by promoting the global chiral symmetry to a local gauge one [34, 35].
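The quadratic profile in (3.10) is just the general solution of the bulk Maxwell equation for a \(z\)-dependent \(A_t\) in AdS\(_5\) (a one-line check, added for illustration):
\[
\partial_z\!\left(\frac{1}{z}\,\partial_z A_t\right) = 0 \quad\Longrightarrow\quad A_t(z) = c_1 + c_2\,z^2,
\]
and matching to the near-boundary dictionary quoted above identifies \(c_1\) with the chemical potential and \(c_2\) with the density.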
4. Vacuum Structures
At low energy or momentum scales roughly smaller than 1 GeV, 𝑟>1 fm, QCD exhibits confinement and a nontrivial vacuum structure with condensates of quarks and gluons. In this section, we discuss the gluon condensate and quark-gluon mixed condensate.
The gluon condensate \(\langle G^a_{\mu\nu}G^{a\mu\nu}\rangle\) was first introduced, at zero temperature, in [36] as a measure for nonperturbative physics in QCD. The gluon condensate characterizes the scale symmetry breaking of massless QCD at quantum level. Under the infinitesimal scale transformation
\[
x'^\mu = (1+\delta\lambda)x^\mu, \qquad A'_\mu = (1-\delta\lambda)A_\mu, \qquad q' = \left(1 - \tfrac{3}{2}\delta\lambda\right)q, \qquad (4.1)
\]
the trace of the energy momentum tensor reads schematically
\[
\partial_\mu J^\mu_D = T^\mu_{\ \mu} \sim \frac{\alpha_s}{\pi}\,G^a_{\mu\nu}G^{a\mu\nu}. \qquad (4.2)
\]
Here \(J^\mu_D\) is the dilatation current, \(\alpha_s\) is the gauge coupling, and \(T^{\mu\nu}\) is the energy-momentum tensor of QCD. Due to Lorentz invariance, we can write \(\langle T_{\mu\nu}\rangle = \epsilon_{\mathrm{vac}}\,\eta_{\mu\nu}\), where \(\epsilon_{\mathrm{vac}}\) is the energy of the QCD vacuum. Therefore, the value of the gluon condensate sets the scale of the QCD vacuum energy. In addition, the gluon condensate is important in the QCD sum rule analysis since it enters in the operator product expansion (OPE) of the hadronic correlators [36]. At high temperature, the gluon condensate is useful to study the nonperturbative nature of the QGP. For instance, lattice QCD results on the gluon condensate at finite temperature [37, 38] indicate that the value of the gluon condensate shows a drastic change around \(T_c\) regardless of the number of quark flavors. The change in the gluon condensate could lead to a dropping of the heavy quarkonium mass around \(T_c\) [39].
In holographic QCD, the gluon condensate figures in a dilaton profile according to the AdS/CFT dictionary, since the dilaton is dual to the scalar gluon operator \(\mathrm{Tr}(G_{\mu\nu}G^{\mu\nu})\). The 5D gravity action with the dilaton is given by
\[
S = \gamma\,\frac{1}{2\kappa^2}\int d^5x\,\sqrt{g}\left(R + \frac{12}{R^2} - \frac{1}{2}\partial_M\phi\,\partial^M\phi\right), \qquad (4.3)
\]
where \(\gamma = +1\) for the Minkowski metric and \(\gamma = -1\) for Euclidean signature. We work with the Minkowski metric for most cases in this paper. The solution of this system was found in [40, 41] by solving the coupled dilaton equation of motion and the Einstein equation:
\[
ds^2 = \frac{R^2}{z^2}\left(\sqrt{1-c^2z^8}\,\eta_{\mu\nu}dx^\mu dx^\nu + dz^2\right), \qquad (4.4)
\]
and the corresponding dilaton profile is given by
\[
\phi(z) = \sqrt{\frac{3}{2}}\,\log\left(\frac{1+cz^4}{1-cz^4}\right) + \phi_0, \qquad (4.5)
\]
where \(\phi_0\) is a constant. At \(z = 1/c^{1/4}\) there exists a naked singularity that might be resolved in a full string theory consideration. Near the boundary \(z\to 0\),
\[
\phi(z)\sim c\,z^4. \qquad (4.6)
\]
Therefore, \(c\) is nothing but the gluon condensate up to a constant. Unfortunately, however, \(c\) is an integration constant of the coupled dilaton equation of motion and the Einstein equation, and therefore it will be determined by matching with physical observables. In [41], the value of the gluon condensate is estimated by the glueball mass. An interesting idea based on the circular Wilson loop calculation on the gravity side was proposed to calculate the value of the gluon condensate \(G_2\equiv\langle(\alpha_s/\pi)G^a_{\mu\nu}G^{a\mu\nu}\rangle\) [42]. The value is determined to be \(G_2 = 0.010\pm 0.0023\ \mathrm{GeV}^4\) at zero temperature [42]. A phenomenological estimation of the gluon condensate in QCD sum rules gives \(\langle(\alpha_s/\pi)G^a_{\mu\nu}G^{a\mu\nu}\rangle\approx 0.012\ \mathrm{GeV}^4\) [36].
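The numerical coefficient hidden in (4.6) can be read off with a two-line series expansion (a small check, with \(\phi_0\) set to zero; sympy is used purely for illustration):

import sympy as sp

z, c = sp.symbols('z c', positive=True)
phi = sp.sqrt(sp.Rational(3, 2)) * sp.log((1 + c*z**4) / (1 - c*z**4))
print(sp.series(phi, z, 0, 9))    # sqrt(6)*c*z**4 + O(z**9): phi grows like z^4 near the boundary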
Now we consider the quark-gluon mixed condensate \(\langle\bar q\,\sigma_{\mu\nu}G^{\mu\nu}q\rangle\), which can be regarded as an additional order parameter for the spontaneous chiral symmetry breaking, since the quark chirality flips via the quark-gluon operator. Thus, it is naturally expressed in terms of the quark condensate as
\[
\langle\bar q\,\sigma_{\mu\nu}G^{\mu\nu}q\rangle = m_0^2\,\langle\bar q q\rangle. \qquad (4.7)
\]
In [43], an extended hard wall model is proposed to calculate the value of \(m_0^2\). The bulk action of the extended model is given by
\[
S = \int d^5x\,\sqrt{g}\,\mathrm{Tr}\left[|DX|^2 + 3|X|^2 - \frac{1}{4g_5^2}\left(F_L^2+F_R^2\right) + |D\Phi|^2 - 5|\Phi|^2\right], \qquad (4.8)
\]
where \(\Phi\) is a bulk scalar field dual to the 4D operator on the left-hand side of (4.7). Then the chiral condensate and the mixed condensate are encoded in the vacuum expectation values of the two scalar fields:
\[
\langle X(x,z)\rangle = \frac{1}{2}\left(m\,z + \sigma\,z^3\right), \qquad \langle\Phi(x,z)\rangle = \frac{1}{6}\left(c_1\,z^{-1} + \sigma_M\,z^5\right), \qquad (4.9)
\]
where \(c_1\) is the source term for the mixed condensate and \(\sigma_M\) represents the mixed condensate, \(\sigma_M = \langle\bar q_R\,\sigma_{\mu\nu}G^{\mu\nu}q_L\rangle\). Taking \(c_1 = 0\), a source-free condition to study only spontaneous symmetry breaking, we determine the value of the mixed condensate, or \(m_0^2\), by considering various hadronic observables. In this sense the mixed condensate is not calculated but fitted to experimental data, like the chiral condensate in the hard wall model. The favored value of \(m_0^2\) in [43] is 0.72 GeV\(^2\). A new method to estimate the value of \(m_0^2\) is suggested in [44], where a nonperturbative gauge-invariant correlator (the nonlocal condensate) is calculated in a dual gravity description to obtain \(m_0^2\). With inputs from the slopes of the Regge trajectory of vector mesons and the linear term of the Cornell potential, they obtained \(m_0^2 = 0.70\ \mathrm{GeV}^2\), which is comparable to that from the QCD sum rules, 0.8 GeV\(^2\) [45].
5. Spectroscopy and Form Factors
Any newly proposed models or theories in physics are bound to confront experimental data, for instance, hadron masses, decay constants, and form factors. In this section, we consider the spectroscopy of the glueball, light meson, heavy quarkonium, and hadron form factors in hard wall model, soft wall model, and their variants.
5.1. Glueballs
Glueballs are made up of gluons with no constituent quarks in them. The glueball states are in general mixed with conventional 𝑞𝑞 states; so in experiments we may observe these mixed states only. Their existence was expected from the early days of QCD [46, 47]. For theoretical and experimental status of glueballs, we refer to [48, 49].
The spectrum of glueballs is one of the earliest QCD quantities calculated based on the AdS/CFT duality. In [12], Witten confirmed the existence of the mass gap in the dilaton equation of motion on a black hole background, implying a discrete glueball spectrum with a finite gap. Extensive studies on the glueball spectrum were done in [50, 51] and also comparisons between the supergravity results and lattice gauge theory results were made.
Now we consider a scalar glueball (0++) on 𝐑3×𝐒1 as an example [50]. When the radius of the circle 𝐒1 is very small 𝑅0, only the gauge degrees of freedom remain and the gauge theory is effectively the same as pure QCD3 [12, 50]. Using the operator/field correspondence, we first find operators that have the quantum numbers with glueball states of interest and then introduce a corresponding bulk field to obtain the glueball masses. In this case we are to solve an equation of motion for a bulk scalar field 𝜙, which is dual to tr 𝐹2 in the AdS5 Euclidean black hole background. The equation of motion for 𝜙 is given by 𝜕𝜇𝑔𝜕𝜈𝜙𝑔𝜇𝜈=0,(5.1) and the metric is 𝑑𝑠2=𝜌2𝑏4𝜌21𝑑𝜌2+𝜌2𝑏4𝜌2𝑑𝜏2+𝜌2+𝑑𝑥2+𝑑Ω25,(5.2) where 𝜏 is for the compactified imaginary time direction. For simplicity, we assume that 𝜙 is independent of 𝜏 [12, 50] and seek a solution of the form 𝜙(𝜌,𝑥)=𝑓(𝜌)𝑒𝑘𝑥, where 𝑘 is the momentum in 𝐑3. Then the equation of motion for 𝑓(𝜌) reads 𝜌1𝑑𝜌𝑑𝜌4𝑏4𝜌𝑑𝑓𝑑𝜌+𝑚2=0,(5.3) where 𝑚2 is the three-dimensional glueball mass, 𝑚2=𝑘2 [12, 50]. By solving this eigenvalue equation with suitable boundary conditions, regularity at the horizon (𝜌=𝑏), and normalizability 𝑓𝜌4 at the boundary (𝜌), we can obtain discrete eigenvalues, the three-dimensional glueball masses. In the context of a sliced AdS background of the Polchinski and Strassler set up [16], which is dual to confining gauge theory, the mass ratios of glueballs are studied in [52, 53].
More realistic or phenomenology-oriented approaches follow the earlier developments. In the soft wall model the mass spectra of scalar and vector glueballs and their dependence on the bulk geometry and the shape of the soft wall are studied in [54]. The exact glueball correlators are calculated in [55], where the decay constants as well as the mass spectrum of the glueball are also obtained in both hard wall and soft wall models. Here we briefly summarize the scalar glueball properties in the soft wall model [54, 55]. Following a standard path to construct a bottom-up mode, we introduce a massless bulk scalar field 𝜙 dual to the scalar gluon operator Tr(𝐹𝜇𝜈𝐹𝜇𝜈) to write down the bulk action as [54] 𝑑𝑆5𝑔𝑒Φ𝑔𝑀𝑁𝜕𝑀𝜙𝜕𝑁𝜙,(5.4) where Φ=𝑧2 as in the soft wall model. The equation of motion for 𝜙(𝑞,𝑧) can be transformed to a one-dimensional Schrödinger form: 𝜓𝑉(𝑧)𝜓=𝑞2𝜓,(5.5) where 𝜓=𝑒(Φ+3ln𝑧)/2𝜙 with 𝑞2=𝑚2 [54]. The glueball mass spectrum is then given as the eigenvalue of the Schrödinger type equation with regular eigenfunction at 𝑧=0 and 𝑧=: 𝑚2𝑛=4(𝑛+2)̃𝑐,(5.6) where 𝑛 is an integer, 𝑛=0,1,2,. ̃𝑐 is introduced to make the exponent Φ dimensionless, Φ=̃𝑐𝑧2, and it will be fit to hadronic data. Since the vector meson mass in the soft wall model is 𝑚2𝑛=4(𝑛+1)̃𝑐, we calculate the ratio of the lightest (𝑛=0) scalar glueball mass 𝑚2𝐺0 to the 𝜌 meson mass to obtain 𝑚2𝐺0/𝑚2𝜌=2 [54]. The properties of the glueball at finite temperature are studied in the hard wall model [56] and also in the soft wall model [56, 57] by calculating the spectral function of the glueball in the AdS black hole backgroudn. The spectral function is related to various Green functions, and it can be defined by the two-point retarded Green function as 𝜌(𝜔,𝑞)=2Im𝐺𝑅(𝜔,𝑞). The retarded function can be computed in the real-time AdS/CFT, following the prescription proposed in [58]. Both studies using the soft wall model predicted that the dissociation temperature of scalar glueballs is far below the deconfinement (Hawking-Page) transition temperature of the soft wall model. See Section 6.1 and Appendix F for more on the Hawking-Page transition. Note that below the Hawking-Page transition temperature, the AdS black hole is unstable. In [56], the melting temperature of the scalar glueball from the spectral functions is about 40–60 MeV, while the deconfinement temperature of the soft wall model is about 190 MeV [59]. This implies that we have to build a more refined holographic QCD model to have a realistic melting temperature [56, 57].
5.2. Light Mesons
There have been an armful of works in holographic QCD that studied light meson spectroscopy. Here we will try to summarize results from the hard wall model, soft wall model, and their variants.
In Table 1, we list some hadronic observables from hard wall models to see if the results are stable against some deformation of the model. In the table, * means input data and the model with no * is a fit to all seven observables: Model A and Model B from the hard wall model [23], Model I from a hard wall model in a deformed AdS geometry [60], and Model II from a hard wall model with the quark-gluon-mixed condensate [43]. In [24], the following deformed AdS background is considered:𝑑𝑠2=𝜋2𝑧𝑚sin𝜋𝑧/2𝑧𝑚𝑑𝑡2𝑑𝑥𝑖𝑑𝑥𝑖𝑑𝑧2,0𝑧𝑧𝑚,(5.7) and it is stated that the correction from the deformation is less than 10%. The backreaction on the AdS metric due to quark mass and chiral condensate is investigated in [60]. One of the deformed backgrounds obtained in [60] phenomenologically reads𝑑𝑠2=1𝑧2𝑒2𝐵(𝑧)𝑑𝑡2𝑑𝑥𝑖𝑑𝑥𝑖𝑑𝑧2,0𝑧𝑧𝑚,(5.8) where 𝐵(𝑧)=(𝑚2𝑞/24)𝑧2+(𝑚𝑞𝜎/16)𝑧4+(𝜎2/24)𝑧6. In Table 1, we quote some results from this deformed background. Dynamical (back-reacted) holographic QCD model with area-law confinement and linear Regge trajectories was developed in [61].
Table 1: Meson spectroscopy from the hard-wall model and from its variations: Model I [60], Model II [43], Model A [23], and Model B [23]. The experimental data listed in the last column are taken from the particle data group [62]. All results are given in units of MeV except for the condensate and the ratio of two condensates.
We remark that the sensitivity of calculated hadronic observables to the details of the hard wall model was studied in [63] by varying the infrared boundary conditions, the 5D gauge coupling, and scaling dimension of 𝑞𝑞 operator. It turns out that predicted hadronic observables are not sensitive to varying scaling dimension of 𝑞𝑞 operator, while they are rather sensitive to the IR boundary conditions and the 5D gauge coupling [63].
In addition to mesons, baryons were also studied in the hard wall model [6467]. It is pointed out in [65, 66] that one has to use the same IR cutoff of the hard wall model 𝑧𝑚 for both meson and baryon sectors.
Now we collect some results from the soft wall model [33]. There were two nontrivial issues to be resolved in the original soft wall model. Firstly, so called, the dilaton factor Φ𝑧2 is introduced phenomenologically to explain 𝑚2𝑛𝑛. The dilaton factor is supposed to be a solution of gravity-dilaton equations of motion. Secondly, the chiral symmetry breaking in the model is a bit different from QCD since the chiral condensate is proportional to the quark mass in the soft wall model. In QCD, in the chiral limit, where the quark mass is zero, the chiral condensate is finite that characterizes spontaneous chiral symmetry breaking. Several attempts have made to improve these aspects and to fit experimental values better [6871]. In [69], a quartic term in the potential for the bulk scalar 𝑋 dual to 𝑞𝑞 is introduced to the soft wall model to incorporate chiral symmetry breaking with independent sources for spontaneous and explicit breaking; thereby the chiral condensate remains finite in the chiral limit. Then, the authors of [69] parameterized the vev of the bulk scalar 𝑋0 such that it satisfies constraints from the AdS/CFT at UV and from phenomenology at IR: 𝑋0𝑚𝑞𝑧+𝜎𝑧3 as 𝑧0 and 𝑋0𝑧 as 𝑧. The constraint at IR is due to the observation [72] that chiral symmetry is not restored in the highly excited mesons. Note that 𝑋0𝑧 keeps the mass difference between vector and axial-vector mesons constant as 𝑧. With the parameterized 𝑋0, they obtained a dilaton factor Φ(𝑧) [69]. We list some of results of [69] in Table 2. An extended soft wall model with a finite UV cutoff was discussed in [73, 74]. In [75], the authors studied a dominant tetra-quark component of the lightest scalar mesons in the soft wall model, where a rather generic lower bound on the tetra-quark mass was derived.
Table 2: Meson spectroscopy from the modified soft wall model [69]. We show the center values of experimental data. In [69] the experimental data are mostly taken from the particle data group [62], while 𝜌(1282) is from [76]. All results are given in units of MeV.
As long as confinement and non-Abelian chiral symmetry are concerned, the Sakai-Sugimoto model [21, 22] based on a D4/D8/D8 brane configuration (see Appendix C) is the only available stringy model. In this model, properties of light mesons and baryons have been greatly studied [21, 22, 7784].
In a simple bottom-up model with the Chern-Simons term, it was also shown that baryons arise as stable solitons which are the 5D analogs of 4D skyrmions and the properties of the baryons are studied [85].
5.3. Heavy Quarkonium
The properties of heavy quark system both at zero and at finite temperature have been the subject of intense investigation for many years. This is so because, at zero temperature, the charmonium spectrum reflects detailed information about confinement and interquark potentials in QCD. At finite temperature, due to the small interaction cross-section of the charmonium in hadronic matter, the charmonium spectrum is expected to carry information about the early hot and dense stages of relativistic heavy ion collisions. In addition, the charmonium states may remain bound even above the critical temperature 𝑇𝑐. This suggests that analyzing the charmonium data from heavy ion collision inevitably requires more detailed information about the properties of charmonium states in QGP. Therefore, it is very important to develop a consistent nonperturbative QCD picture for the heavy quark system both below and above the phase transition temperature. For a recent review on heavy quarkonium see, for example, [86].
Now we start with the hard wall model to discuss the heavy quarkonium in a bottom-up approach. A simple way to deal with the heavy quarkonium in the hard wall model was proposed in [87]. Since the typical energy scales involved for light mesons and heavy quarkonia are quite different, we may introduce an IR cutoffs 𝑧𝐻𝑚 for heavy quarkonia in the hard wall model which is different from the IR cutoff for light mesons, 1/𝑧𝐿𝑚300 MeV. Note that in the hard wall model there is a one-to-one correspondence between the IR cutoff and the vector meson mass 1/𝑧𝑚𝑚𝑉. In [87], the lowest vector 𝑐𝑐 (𝐽/𝜓) mass 3GeV is used as an input to fix the IR cutoff for the charmonium, 1/𝑧𝐻𝑚1.32GeV. With this, the mass of the second resonance is predicted to be 7.2GeV, which is quite different from the experiment 𝑚𝜓3.7GeV. This is in a sense generic limitation of the hard wall model whose predicted higher resonances are quite different from experiments. Moreover, having two different IR cutoffs in the hard wall model may cause a problem when we treat light quark and heavy quark systems at the same time. In the soft wall model, the mass spectrum of the vector meson is given by [33] 𝑚2𝑛=4(𝑛+1)𝑐.(5.9) For charmonium system, again the lowest mode (𝐽/𝜓) is used to fix 𝑐, 𝑐1.55GeV. Then the mass of the second resonance 𝜓 is 𝑚𝜓4.38GeV, which is 20% away from the experimental value of 3.686GeV [87]. Additionally, the mass of heavy quarkonium such as 𝐽/𝜓 at finite temperature is calculated to predict that the mass decreases suddenly at 𝑇𝑐 and above 𝑇𝑐 it increases with temperature. Furthermore, the dissociation temperature is determined to be around 494MeV in the soft wall model [87].
To compare heavy quarkonium properties obtained in a holographic QCD study with lattice QCD, the finite-temperature spectral function in the vector channel within the soft wall model was explored in [88]. The spectral function is related to the two-point retarded Green function by 𝜌(𝜔,𝑞)=2Im𝐺𝑅(𝜔,𝑞). The retarded function can be computed following the prescription [58]. Thermal spectral functions in a stringy set-up, D3/D7 model, were extensively studied in [89]. To deal with the heavy quarkonium in the soft wall model, two different scales (𝑐𝜌 and 𝑐𝐽/𝜓) are introduced. It is observed in [88] that a peak in the spectral function melts with increasing temperature and eventually is flattened at 𝑇1.2𝑇𝑐. It is also shown numerically that the mass shift squared is approximately proportional to the width broadening [88]. Another interesting finding in [88] is that the spectral peak diminishes at high momentum, which could be interpreted as the 𝐽/𝜓 suppression under the hot wind [90, 91]. A generalized soft wall mode of charmonium is constructed by considering not only the masses but also the decay constants of the charmonium, 𝐽/𝜓 and 𝜓 [92]. They calculated the spectral function as well as the position of the complex singularities (quasinormal frequencies) of the retarded correlator of the charm current at finite temperatures. A predicted dissociation temperature is 𝑇540 MeV, or 2.8𝑇𝑐 [92].
Alternatively, heavy quarkonium properties can be studied in terms of holographic heavy-quark potentials. Since the mass of heavy quarks is much larger than the QCD scale parameter ΛQCD200 MeV, the nonrelativistic Schrödinger equation could be a useful tool to study heavy quark bound states: 22𝑚𝑟+𝑉(𝑟)Ψ(𝑟)=𝐸Ψ(𝑟),(5.10) where 𝑚𝑟 is the reduced mass, 𝑚𝑟=𝑚𝑄/2. A tricky point with potential models for quarkonia is which potential is to be used in the Schrödinger equation: the free energy or the internal energy. In the context of the AdS/CFT, there have been a lot of works on holographic heavy quark potentials [93103]. Hou and Ren calculated the dissociation temperature of heavy quarkonia by solving the Schrödinger equation with holographic potentials [99]. They used two ansätze of the potential model: the F-ansatz (U-ansatz) which identifies the potential in the Schrödinger equation with the free energy (the internal energy), respectively. With the F-ansatz, 𝐽/𝜓 does not survive above 𝑇𝑐, while the dissociation temperature of Υ is (1.32.1)𝑇𝑐. For the U-ansatz, 𝐽/𝜓 dissolves into open charm quarks around (1.21.7)𝑇𝑐 and Υ dissociates at about (2.54.2)𝑇𝑐.
We finish this subsection with a summary of the discussion in [20] on the usefulness of Dq/Dp systems in studying heavy quark bound states. A Dq/Dp system may be good for 𝑠𝑠 bound states at high temperature since the mesons in the Dq/Dp system are deeply bounded, while heavy quarkonia are shallow bound states. However, there exist certain properties of heavy quarkonia in the quark-gluon plasma that could be understood in the D4/D6 model such as dissociation temperature.
5.4. Form Factors
Form factors are a source of information about the internal structure of hadrons such as the distribution of charge. We take the pion electromagnetic form factor as an example. Consider a pion-electron scattering process 𝜋±+𝑒𝜋±+𝑒 through photon exchange. The cross section of this process measured in experiments is different from that of Mott scattering which is for the Coulomb scattering of an electron with a point charge. This deviation is parameterized into the pion form factor 𝐹𝜋(𝑞2), where 𝑞2 is given by the energy and momentum of the photon 𝑞2=𝜔2𝑞2. If the pion is a structureless point particle, we have 𝐹𝜋=1. The pion electromagnetic form factor is expressed by, with the use of Lorentz invariance, charge conjugation, and electromagnetic gauge invariance:𝑝1+𝑝2𝜇𝐹𝜋𝑞2=𝜋𝑝2||𝐽𝜇||𝜋𝑝1,(5.11) where 𝑞2=(𝑝2𝑝1)2 and 𝐽𝜇 is the electromagnetic current, 𝐽𝜇=𝑓𝑒𝑓𝑞𝑓𝛾𝜇𝑞𝑓. The pion charge radius is determined by 𝑟2𝜋=6𝜕𝐹𝜋𝑞2𝜕𝑞2|𝑞2=0.(5.12) In a vector meson dominance model, where the photon interacts with the pion only via vector mesons, especially 𝜌 meson, the pion form factor is given by 𝐹𝜋𝑞2=𝑚2𝜌𝑚2𝜌𝑞2𝑖𝑚𝜌Γ𝜌𝑞2.(5.13) Then we obtain the pion charge radius 𝑟2𝜋=6/𝑚𝜌0.63 fm. The experimental value is 𝑟2𝜋=0.672 fm [104]. To evaluate the form factor, we consider the three-point correlation function of two axial vector currents which contains nonzero projection onto a one pion state and the external electromagnetic current: Γ𝜇𝛼𝛽𝑝1,𝑝2=𝑑𝑥𝑑𝑦𝑒(𝑖𝑝1𝑥+𝑖𝑝2𝑦)0|||𝑇𝐽𝛼5(𝑥)𝐽𝜇(0)𝐽𝛽5(|||0𝑦).(5.14) Alternatively, we can consider two pseudoscalar currents instead of the axial vector currents. The three-point correlation function can be decomposed into several independent Lorentz structures. Among them we pick up the Lorentz structure corresponding to the pion form factor: 0||𝐽𝛽5||𝑝2𝑝2||𝐽𝜇||𝑝1𝑝1||𝐽𝛼5||0𝑓2𝜋𝐹𝜋𝑞2𝑝𝛼1𝑝𝛽2𝑝𝜇1+𝑝𝜇2.(5.15) Note that 0|𝐽𝛼5|𝑝=𝑖𝑓𝜋𝑝𝛼, where |𝑝 is a one pion state. For more details on the form factor, we refer to [105107].
In a holographic QCD approach, we can easily evaluate the three-point correlation function of two axial vector currents (or two pseudoscalar currents) and the external electromagnetic current. In [108], the form factors of vector mesons were calculated in the hard wall model and the electric charge radius of the 𝜌-meson was evaluated to be 𝑟2𝜌=0.53fm2. The number from the soft wall model is 𝑟2𝜌=0.655fm2 [109]. The approach based on the Dyson-Schwinger equations predicted 𝑟2𝜌=0.37fm2 [110] and 𝑟2𝜌=0.54fm2 [111]. The quark mass (or pion mass) dependence of the charge radius of the 𝜌-meson was calculated in lattice QCD: for instance, with 𝑚𝜋300 MeV, 𝑟2𝜌=0.55fm2 [112]. The pion form factor was studied in the hard wall model [113] and in a model that interpolates between the hard wall and soft wall models [114]. The results obtained are 𝑟2𝜋=0.58 fm [113] and in [114] 𝑟2𝜋=0.500 fm, 𝑟2𝜋=0.576 fm, depending on their parameter choice. The gravitational form factors of mesons were calculated in the hard wall model [115, 116]. The gravitational form factor of the pion is defined by 𝜋𝑏𝑝||Θ𝜇𝜈||𝜋(0)𝑎1(𝑝)=2𝛿𝑎𝑏𝑔𝜇𝜈𝑞2𝑞𝜇𝑞𝜈Θ1𝑞2+4𝑃𝜇𝑃𝜈Θ2𝑞2,(5.16) where Θ𝜇𝜈 is the energy momentum tensor, 𝑞=𝑝𝑝, and 𝑃=(𝑝+𝑝)/2. There are also interesting works that studied various form factors in holographic QCD [117120]. Form factors of vector and axial-vector mesons were calculated in the Sakai-Sugimoto model (Figure 4) [121].
Figure 4: QCD phase diagram.
6. Phases of QCD
Understanding the QCD phase structure is one of the important problems in modern theoretical physics; see [122126] for some recent reviews. However, a quantitative calculation of the phase diagram from the first principle is extraordinarily difficult.
Basic order parameters for the QCD phase transitions are the Polyakov loop which characterizes the deconfinement transition in the limit of infinitely large quark mass and the chiral condensate for chiral symmetry in the limit of zero quark mass. The expectation value of the Polyakov loop is loosely given by 𝐿lim𝑟𝑒𝛽𝑉(𝑟),(6.1) where 𝑉(𝑟) is the potential between a static quark-antiquark pair at a distance r, and 𝛽1/𝑇. The expectation value of the Polyakov loop is zero in confined phase, and it is finite in deconfined phase, while the chiral condensate, which is the simplest order parameter for the chiral symmetry, is nonzero with broken chiral symmetry, vanishing with a restored chiral symmetry. Apart from these order parameters, there are thermodynamic quantities that are relevant to study the QCD phase transition. The equation of state is one of them. The energy density, for instance, has been found to rise rapidly at some critical temperature. This is usually interpreted as deconfinement: liberation of many new degrees of freedom. The fluctuations of conserved charges such as baryon number or electric charge [127130] are also an important signal of the quark-hadron phase transition. The quark (or baryon) number susceptibility, which measures the response of QCD to a change of the quark chemical potential, is one of such fluctuations [127, 131].
The nature of the chiral transition of QCD depends on the number of quark flavors and the value of the quark mass. For pure 𝑆𝑈(3) gauge theory with no quarks, it is first order. In the case of two massless and one massive quarks, the transition is the second-order at zero or small quark chemical potentials, and it becomes the first order as we increase the chemical potential. The point where the second order transition becomes the first order is called tricritical point. With physical quark masses of up, down, and strange, the second order at zero or low chemical potential becomes the crossover, and the tricritical point turns into the critical end point.
6.1. Confinement/Deconfinement Transition
We first discuss the deconfinement transition. In holographic QCD, the confinement to deconfinement phase transition is described by the Hawking-Page transition [132], a phase transition between the Schwarzschild-AdS black hole and thermal AdS backgrounds. This identification was made in [12]. One simple reasoning for this identification is from the observation that the Polyakov expectation value is zero on the thermal AdS geometry, while it is finite on the AdS black hole. See Appendix F for some more description of the Hawking-Page transition and the Polyakov expectation in thermal AdS and AdS black hole. In low-temperature confined phase, thermal AdS, which is nothing but the AdS metric in Euclidean space, dominates the partition function, while at high temperature, AdS-black hole geometry does. This was first discovered in the finite volume boundary case in [12]. In the bottom-up model, it is shown that the same phenomena happen also for infinite boundary volume if there is a finite scale associated with the fifth direction [59].
Here we briefly summarize the Hawking-Page analysis of [59] done in the hard wall model. In the Euclidean gravitational action given by𝑆grav1=2𝜅2𝑑5𝑥𝑔𝑅+12𝐿2,(6.2) where 𝜅2=8𝜋𝐺5 and 𝐿 is the length scale of the AdS5, there are two solutions for the equations of motion derived from the gravitational action. The one is the sliced thermal AdS (tAdS): 𝑑𝑠2=𝐿2𝑧2𝑑𝜏2+𝑑𝑧2+𝑑𝑥23,(6.3) where the radial coordinate runs from the boundary of tAdS space 𝑧=0 to the cut-off 𝑧𝑚. Here 𝜏 is for the compactified Euclidean time-direction with periodicity 𝛽. The other solution is the AdS black hole (AdSBH) with the horizon 𝑧:𝑑𝑠2=𝐿2𝑧2𝑓(𝑧)𝑑𝜏2+𝑑𝑧2𝑓(𝑧)+𝑑𝑥23,(6.4) where 𝑓(𝑧)=1(𝑧/𝑧)4. The Hawking temperature of the black hole solution is 𝑇=1/(𝜋𝑧), which is given by regularizing the metric near the horizon. At the boundary 𝑧=𝜖 the periodicity of the time-direction in both backgrounds is the same and so the time periodicity of the tAdS is given by 𝛽=𝜋𝑧𝑓(𝜖).(6.5) Now we calculate the action density 𝑉, which is defined by the action divided by the common volume factor of 𝑅3. The regularized action density of the tAdS is given by𝑉1(𝜖)=4𝐿3𝜅2𝛽0𝑑𝜏𝑧IR𝜖𝑑𝑧𝑧5,(6.6) and that of the AdSBH is given by𝑉2(𝜖)=4𝐿3𝜅2𝜋𝑧0𝑑𝜏𝑧𝜖𝑑𝑧𝑧5,(6.7) where 𝑧=min(𝑧𝑚,𝑧). Then, the difference of the regularized actions is given byΔ𝑉𝑔=lim𝜖0𝑉2(𝜖)𝑉1=𝐿(𝜖)3𝜋𝑧𝜅212𝑧4,𝑧𝑚<𝑧,𝐿3𝜋𝑧𝜅21𝑧4𝑚12𝑧4,𝑧𝑚>𝑧.(6.8) When Δ𝑉𝑔 is positive (negative), tAdS (AdSBH) is stable. Thus, at Δ𝑉𝑔=0 there exists a Hawking-Page transition. In the first case 𝑧𝑚<𝑧, there is no Hawking-Page transition and the tAdS is always stable. In the second case 𝑧𝑚>𝑧, the Hawking-Page transition occurs at𝑇𝑐=21/4𝜋𝑧𝑚,(6.9) and at low temperature 𝑇<𝑇𝑐 (at high temperature 𝑇>𝑇𝑐) the thermal AdS (the AdS black hole) geometry becomes a dominant background. When we fix the IR cutoff by the 𝜌 meson mass, we obtain 1/𝑧𝑚=323 MeV and 𝑇𝑐=122 MeV. In the soft wall model, 𝑇𝑐=191MeV [59].
This work has been extended in various directions. The authors of [133] revisited the thermodynamics of the hard wall and soft wall model. They used holographic renormalization to compute the finite actions of the relevant supergravity backgrounds and verify the presence of a Hawking-page type phase transition. They also showed that the entropy, in the gauge theory side, jumps from 𝑁0 to 𝑁2 at the transition point [133]. In [134], the extension was done by studying the thermodynamics of AdS black holes with spherical or negative constant curvature horizon, dual to a non-supersymmetric Yang-Mills theory on a sphere or hyperboloid respectively. They also studied charged AdS black holes [135] in the grand canonical ensemble, corresponding to a Yang-Mills theory at finite chemical potential, and found that there is always a gap for the infrared cutoff due to the existence of a minimal horizon for the charged AdS black holes with any horizon topology [134]. With an assumption that the gluon condensate melts out at finite temperature, a Hawking-Page type transition between the dilaton AdS geometry in (4.4) and the usual AdS black hole has studied in [136].
The effect of the number of quark flavors 𝑁𝑓 and baryon number density on the critical temperature was investigated by considering a bulk meson action together with the gravity action in [137]. It is shown that the critical temperature decreases with increasing 𝑁𝑓. As the number density was raised, the critical temperature begins to drop, but it saturates to a constant value even at very large density. This is mostly due to the absence of the back-reaction from number density [137]. The back-reaction due to the number density has included in [138140]. In [141], deconfinement transition of AdS/QCD with 𝒪(𝛼3) corrections was investigated. In [142], thermodynamics of the asymptotically-logarithmically-AdS black-hole solutions of 5D dilaton gravity with a monotonic dilaton potential are analyzed in great detail, where it is shown that in a special case, where the asymptotic geometry in the string frame reduces to flat space with a linear dilaton, the phase transition could be second order. The renormalized Polyakov loop in the deconfined phase of a pure 𝑆𝑈(3) gauge theory was computed in [143] based on a soft wall metric model. The result obtained in this work is in good agreement with the one from lattice QCD simulations.
Due to this Hawking-Page transition, we are not to use the black hole in the confined phase, and so we are not to obtain the temperature dependence of any hadronic observables. This is consistent with large 𝑁𝑐 QCD at leading order. For instance, it was shown in [144, 145] that the Wilson loops, both time-like and space-like, and the chiral condensate are independent of the temperature in confining phase to leading order in 1/𝑁𝑐. This means that the chiral and deconfinement transitions are first order. The deconfinement and chiral phase transitions of an 𝑆𝑈(𝑁) gauge theory at large 𝑁𝑐 were also discussed in [146]. However, in reality we observe temperature dependence of hadronic quantities, and therefore we have to include large 𝑁𝑐 corrections in holographic QCD in a consistent way. A quick fix-up for this might be to use the temperature dependent chiral condensate as an input in a holographic QCD model and study how this temperature dependence conveys into other hadronic quantities [147].
6.2. Chiral Transition
Now we turn to the chiral transition of QCD based on the chiral condensate. In the hard wall model, the chiral symmetry is broken, in a sense, by the IR boundary condition. In case we have a well-defined IR boundary condition at the wall 𝑧=𝑧𝑚, we could calculate the value of chiral condensate by solving the equation of motion for the bulk scalar 𝑋. In the case of the AdS black hole we could have a well defined IR boundary condition at the black hole horizon, which allows us to calculate the chiral condensate. For instance, in [148], it is shown that with the AdS black hole background the chiral condensate together with the current quark mass is zero in both the hard wall and soft wall models. This is easy to see from the solution of 𝑋0 in the AdS black hole background [148, 149]: 𝑋0𝑚(𝑧)=𝑧𝑞2𝐹114,14,12,𝑧4𝑧4+𝜎𝑞𝑧22𝐹134,34,32,𝑧4𝑧4.(6.10) At 𝑧=𝑧, both terms in 𝑋0(𝑧) diverge logarithmically, which requires to set both of them zero: 𝑚𝑞=0, 𝜎=0. This is different from real QCD, where current quark mass can be nonzero in the regime 𝑇>𝑇𝑐.
The finite temperature phase structure of the Sakai-Sugimoto model was analyzed in [150] to explore deconfinement and chiral symmetry restoration. Depending on a value of the model parameter, it is predicted that deconfinement and chiral symmetry restoration happens at the same temperature or the presence of a deconfined phase with broken chiral symmetry [150]. Phase structure of a stringy D3/D7 model has extensively studied in [151153].
6.3. Equation of State and Susceptibility
Apart from the chiral condensate, various thermodynamic quantities could serve as an indicator for a transition from hadron to quark-gluon phase. Energy density, entropy, pressure, and susceptibilities are such examples. We first consider energy density and pressure. Schematically, based on the ideal gas picture we discuss how the energy density and pressure tell hadronic matter to quark-gluon plasma. At low temperature thermodynamics of hadron gas will be dominated by pions which are almost massless, while in QGP quarks and gluons are the relevant degrees of freedom. Energy density and pressure of massless pions are 𝜋𝜖=𝛾2𝑇304𝜋,𝑝=𝛾2𝑇904,(6.11) where the number of degrees of freedom 𝛾 is three. In the QGP, they are given by 𝜋𝜖=𝛾2𝑇304𝜋+𝐵,𝑝=𝛾2𝑇904𝐵,(6.12) where 𝛾=37, and 𝐵 is the bag constant. Apart from the bag constant, the degeneracy factor 𝛾 changes from 3 to 37, and therefore we can expect that the energy density and pressure will increase rapidly at the transition point. Since the dual of the boundary energy-momentum tensor 𝑇𝜇𝜈 is the metric, we can obtain the energy density and pressure of a boundary gauge theory from the near-boundary behavior of the gravity solution. To demonstrate how-to, we follow [154, 155]. We first rewrite the gravity solution in the Fefferman-Graham coordinate [156]: 𝑑𝑠2=1𝑧2𝑔𝜇𝜈𝑑𝑥𝜇𝑑𝑥𝜈𝑑𝑧2.(6.13) Next, we expand the metric 𝑔𝜇𝜈 at the boundary 𝑧0: 𝑔𝜇𝜈=𝑔(0)𝜇𝜈+𝑧2𝑔(2)𝜇𝜈+𝑔(4)𝜇𝜈+.(6.14) Now we consider flat 4D metric such that 𝑔(0)𝜇𝜈=𝜂𝜇𝜈. Then 𝑔(2)𝜇𝜈=0 and the vacuum expectation value of the energy momentum tensor is given by 𝑇𝜇𝜈=const𝑔(4)𝜇𝜈.(6.15) For example, we consider an AdS black hole in the Fefferman-Graham coordinate: 𝑑𝑠2=1𝑧21𝑧4/𝑧41+𝑧4/𝑧4𝑑𝑡2𝑧1+4𝑧4𝑑𝑥2𝑑𝑧2.(6.16) Here the temperature is defined by 𝑇=2/(𝜋𝑧). Then, we read off 𝑇𝜇𝜈3diag𝑧4,1𝑧4,1𝑧4,1𝑧4,(6.17) which satisfies 𝜖=3𝑝. There have been many works on the equations of state for a holographic matter at finite temperature [142, 157160]. In [161], the energy density, pressure, and entropy of a deconfined pure Yang-Mills matter were evaluated in the improved holographic QCD model [162, 163]. The energy density and pressure vanish at low temperature, and at the critical temperature, 𝑇𝑐235 MeV, they jump up to a finite value, showing the first-order phase transition. It is interesting to note that in [164] some high-precision lattice QCD simulations were performed with increasing 𝑁𝑐 at finite temperature, and the results were compared with those from holographic QCD studies.
Various susceptibilities are also useful quantities to characterize phases of QCD. For instance, the quark number susceptibility has been calculated in holographic QCD in a series of works [148, 165]. The quark number susceptibility was originally proposed as a probe of the QCD chiral phase transition at zero chemical potential [127, 131]:𝜒𝑞=𝜕𝑛𝑞𝜕𝜇𝑞.(6.18) In terms of the retarded Green function 𝐺𝑅𝜇𝜈(𝜔,𝑘), the quark number susceptibility can be written as [166] 𝜒𝑞(𝑇,𝜇)=lim𝑘0𝐺Re𝑅𝑡𝑡(𝜔=0,𝑘).(6.19) In [165], it is claimed that quark number susceptibility will show a sudden jump at 𝑇𝑐 in high-density regime, and so QCD phase transition in low-temperature and high-density regime will be always first order. Thermodynamics of a charged dilatonic black hole, which is asymptotically RN-AdS black hole in the UV and AdS2×𝐑3 in the IR, including the quark number susceptibility were extensively studied in [167]. The critical end point of the QCD phase diagram was studied in [168] by considering the critical exponents of the specific heat, number density, quark number susceptibility, and the relation between the number density and chemical potential at finite chemical potential and temperature. It is shown that the critical end point is located at 𝑇=143 MeV and 𝜇=783 MeV in the QCD phase diagram [168].
6.4. Dense Baryonic Matter
Understanding the properties of dense QCD is of key importance for laboratory physics such as heavy ion collision and for our understanding of the physics of stable/unstable nuclei and of various astrophysical objects such as neutron stars.
To expose an essential physics of dense nuclear matter, we take the Walecka model [169, 170], which describes nuclear matter properties rather well, as an example. The simplest version of the model contains the nucleon 𝜓, omega meson 𝜔, and an isospin singlet, Lorentz scalar meson 𝜎 whose minimal Lagrangian is =𝜓𝑖/𝜕+𝑔𝜎𝜎𝑔𝜔𝜔1𝜓+2𝜕𝜇𝜎𝜕𝜇𝜎𝑚2𝜎𝜎214𝐹𝜇𝜈𝐹𝜇𝜈+12𝑚2𝜔𝜔𝜇𝜔𝜇.(6.20) Within the mean field approximation, the properties of nuclear matter are mostly determined by the scalar mean field 𝜎=(𝑔𝜎/𝑚2𝜎)𝑛𝑠 and the mean field of the time component of the 𝜔 field 𝜔0=(𝑔𝜔/𝑚2𝜔)𝑛, where 𝑛 is the baryon number density and 𝑛𝑠 is the scalar density. For instance, the pressure of the nuclear matter described by the Walecka is1𝑃=4𝜋223𝐸𝐹𝑝3𝐹𝑚𝑁2𝐸𝐹𝑝𝐹+𝑚𝑁4𝐸ln𝐹+𝑝𝐹𝑚𝑁+12𝑔2𝜔𝑚2𝜔𝑛212𝑔2𝜎𝑚2𝜎𝑛2𝑠,(6.21) where 𝐸𝐹=𝑝2𝐹+𝑚𝑁2,𝑚𝑁=𝑚𝑁𝑔2𝜎𝑚2𝜎𝑛𝑠.(6.22) Further, many successful predictions based on the Walecka model and its generalized versions, Quantum Hadrodynamics, require large scalar and vector fields in nuclei. This implies that to gain a successful description of nuclear matter or nuclei, having both scalar and vector mean fields in the model seems crucial. The importance of the interplay between the scalar and vector fields can be also seen in the static nonrelativistic potential between two nucleons. The nucleon-nucleon potential from single 𝜎-exchange and single 𝜔-exchange is given by 𝑔𝑉(𝑟)=2𝜔14𝜋𝑟𝑒𝑚𝜔𝑟𝑔2𝜎14𝜋𝑟𝑒𝑚𝜎𝑟.(6.23) Note that single 𝜎-exchange can be replaced by two pion exchange. If 𝑔𝜔>𝑔𝜎 and 𝑚𝜔>𝑚𝜎, then the potential in (6.23) captures some essential features of the two nucleon potential to form stable nuclear matter: repulsive at short distance and attraction at intermediate and long distance. We remark here that the scalar field in the Walecka model may not be the scalar associated with a linear realization of usual chiral symmetry breaking in QCD; see, for instance, [171].
The hard wall model or soft wall model in its original form does not do much in dense matter. This is primarily due to its simple structure and chiral symmetry. Suppose that we turn on the time component of a U(1) bulk vector field dual to a boundary number operator, 𝑉𝑡(𝑧)=𝜇+𝜌𝑧2. To incorporate this U(1) bulk field into the hard wall model, we consider U(2) chiral symmetry. The covariant derivative with U(1) vector and axial-vector is given by 𝐷𝜇𝑋=𝜕𝜇𝑋𝑖𝐴𝐿𝜇𝑋+𝑖𝑋𝐴𝑅𝜇 and it becomes 𝐷𝜇𝑋=𝜕𝜇𝑋𝑖𝑋(𝐴𝐿𝜇𝐴𝑅𝜇). Therefore, the U(1) bulk field 𝑉𝜇=𝐴𝐿𝜇+𝐴𝑅𝜇 does not couple to the scalar 𝑋, meaning that the physical properties of 𝑋 are not affected by the chemical potential or number density. Note, however, that the vacuum energy of the hard wall or soft wall model should depend on the chemical potential and number density by the AdS/CFT. One simple way to study the physics of dense matter in the hard or soft wall model is to work with higher-dimensional terms in the action. For instance, the role of dimension six terms in the hard wall model was studied in free space [172, 173]. If we turn on the number density through the U(1) bulk field, we have a term like 𝑋20𝐹2𝑉, where 𝐹𝑉 is the field strength of the bulk U(1) gauge field [174]. Then we may see interplay between number density and chiral condensate encoded in 𝑋0. In [175], based on the hard wall model with the Chern-Simons term it is shown that there exists a Chern-Simons coupling between vector and axial-vector mesons at finite baryon density. This mixes transverse 𝜌 and 𝑎1 mesons and leads to the condensation of the vector and axial-vector mesons. The role of the scalar density or the scalar field in the hard wall model was explored in [176]. In [139], a back-reaction due to the density is studied in the hard wall model.
Physics of dense matter in Sakai-Sugimoto model has been developed with/without the source term for baryon charge [177181]. For instance, in [180] localized and smeared source terms are introduced and a Fermi sea has been observed, though there are no explicit fermionic modes in the model. A deficit with the Sakai-Sugimoto model for nuclear matter might be the absence of the scalar field which is quite important together with U(1) vector field. The phase structure of the D3/D7 model at finite density is studied in [182, 183]. The nucleon-nucleon potential is playing very important role in understanding the properties of nuclear matter. For example, one of the conventional methods to study nuclear matter is to work with the independent-pair approximation, Brueckner’s theory, where two-nucleon potentials are essential inputs. Holographic nuclear forces were studied in [184187].
7. Closing Remarks
The holographic QCD model has proven to be a successful and promising analytic tool to study nonperturbative nature of low energy QCD. However, its success should always come with “qualitative” since it is capturing only large 𝑁𝑐 leading physics. To have any transitions from “qualitative” to “quantitative”, we have to invent a way to calculate subleading corrections in a consistent manner. A bit biased, but the most serious defect of the approach based on the gauge/gravity duality might be that it offers inherently macroscopic descriptions of a physical system. For instance, we may understand the QCD confinemnt/deconfinement transition through the Hawking-Page transition, qualitatively. Even though we accept generously the word “qualitatively”, we are not to be satisfied completely since we do not know how gluons and quarks bound together to form a color singlet hadron or how hadrons dissolve themselves into quark and gluon degrees of freedom. In this sense, the holographic QCD cannot be stand-alone. Therefore, the holographic QCD should go together with conventional QCD-based models or theories to guide them qualitatively and to gain microscopic pictures revealed by the conventional approaches.
Finally, we collect some interesting works done in bottom-up models that are not yet properly discussed in this review. Due to our limited knowledge, we could not list all of the interesting works and most results from top-down models will not be quoted. To excuse this defect we refer to recent review articles on holographic QCD [28, 188197].
Deep inelastic scattering has been studied in gauge/gravity duality [198209]. Light and heavy mesons were studied in the soft-wall holographic approach [210].
Unusual bound states of quarks are also interesting subjects to work in holographic QCD. In [211], the multiquark potential was calculated and tetra-quarks were discussed in AdS/QCD. Based on holographic quark-antiquark potential in the static limit, the masses of the states X(3872) or Y(3940) were predicted and also tetra-quark masses with open charm and strangeness were computed in [212]. A hybrid exotic meson, 𝜋1(1400), was discussed in [213]. The spectrum of baryons with two heavy quarks was predicted in [214].
Low-energy theorems of QCD and spectral density of the Dirac operator were studied in the soft wall model [215]. A holographic model of hadronization was suggested in [216].
The equation of state for a cold quark matter was calculated in the soft wall metric model with a U(1) gauge field. The result is in agreement with phenomenology [217].
A. Bulk Mass and the Conformal Dimension of Boundary Operator
In this appendix, we summarize the relation between the conformal dimension of a boundary operator and the bulk mass of dual bulk field. We work in the Euclidean version of AdS𝑑+1:𝑑𝑠2=1𝑥02𝑑𝜇=0(𝑑𝑥𝜇)2.(A.1)
A.1. Massive Scalar Case
We first consider a free massive scalar field whose action is given by 1𝑆=2𝑑𝑑+1𝑥𝑔𝜕𝜇𝜙𝜕𝜇𝜙+𝑚2𝜙2.(A.2) Let the propagator of 𝜙 be 𝐾(𝑥0,𝑥;𝑥). To solve 𝜙 in terms of its boundary function 𝜙0, we look for a propagator of 𝜙, a solution 𝐾(𝑥0,𝑥;𝑥) of the Laplace equation on 𝐵𝑑+1 whose boundary value is a delta function at a point 𝑃 on the boundary. We take 𝑃 to be the point at 𝑥0. The boundary conditions and metric are invariant under translations of the 𝑥𝑖, then we can consider 𝐾 as a function of only 𝑥0, and thus 𝐾(𝑥0,𝑥;𝑃)=𝐾(𝑥0). Then, the equation of motion is 𝑥0𝑑+1𝑑𝑑𝑥0𝑥0𝑑+1𝑑𝑑𝑥0+𝑚2𝐾𝑥0=0,(A.3) where we used 1𝑔𝜕𝜇𝑔𝜕𝜇=𝑥0𝑑+1𝑑𝑑𝑥0𝑥0𝑑+1𝑑𝑑𝑥0.(A.4) We analyze the equation of motion near the boundary, 𝑥00, and take 𝐾(𝑥0)(𝑥0)𝜆+𝑑. From the equation of motion, we have(𝜆+𝑑)𝜆+𝑚2=0,(A.5) where 𝜆 is the larger root 𝜆=𝜆+. The conformal dimension Δ of the boundary operator is related to the mass m on AdS𝑑+1 space by Δ=𝑑+𝜆+. Thus, we obtain (Δ𝑑)Δ=𝑚2,(A.6) or 1Δ=2𝑑+𝑑2+4𝑚2.(A.7)
A.2. Massive 𝑝-Form Field Case
Consider a massive 𝑝-form potential [218]: 1𝒜=𝒜𝑝!𝜇1𝜇𝑝𝑑𝑥𝜇1𝑑𝑥𝜇𝑝.(A.8) The free action of 𝒜 is1𝑆=2AdS𝑑+1+𝑚2𝒜𝒜,(A.9) where =𝑑𝒜 is the field strength 𝑝+1 form. The variation of this action is 𝛿𝑆=AdS𝑑+1(1)𝑝𝛿𝒜𝑑+𝑚2𝛿𝒜𝒜,(A.10) and then the classical equation of motion for 𝒜 from (A.9) is(1)𝑝𝑑𝑑𝒜𝑚2𝒜=0.(A.11) In addition, 𝒜 satisfies 𝑑𝒜=0. By using the metric (A.1), the equation of motion (A.10) can be written as𝑥02𝜕2𝜇(𝑑+12𝑝)𝑥0𝜕0+𝑑+12𝑝𝑚2𝒜0𝑖2𝑖𝑝𝑥=0,(A.12)02𝜕2𝜇(𝑑12𝑝)𝑥0𝜕0𝑚2𝒜𝑖1𝑖𝑝=2𝑥0𝜕𝑖1𝜔0𝑖2𝑖𝑝+(1)𝑝1𝜕𝑖2𝜔0𝑖3𝑖𝑝𝑖1+.(A.13) Now from the vielbein 𝑒𝜇𝑎=𝑥0𝛿𝜇𝑎, we introduce fields with flat indices:𝐴0𝑖2𝑖𝑝=𝑥0𝑝1𝒜0𝑖2𝑖𝑝,𝐴𝑖1𝑖𝑝=𝑥0𝑝𝒜𝑖1𝑖𝑝.(A.14) Then the equations of motion (A.12) of 𝐴0𝑖2𝑖𝑝 become𝑥02𝜕2𝜇(𝑑1)𝑥0𝜕0𝑚2+𝑝2𝐴𝑝𝑑0𝑖2𝑖𝑝=0.(A.15) We consider𝐴0𝑖2𝑖𝑝𝑥0𝜆(A.16) as 𝑥00. Then substituting this in (A.15) gives 𝑥0=02𝜕20(𝑑1)𝑥0𝜕0𝑚2+𝑝2𝑥𝑝𝑑0𝜆=𝑥02𝜕0𝑥𝜆0𝜆1(𝑑1)𝑥0𝑥𝜆0𝜆1𝑚2+𝑝2𝑥𝑝𝑑0𝜆=𝑚𝜆(𝜆+1)+𝜆(𝑑1)2+𝑝2𝑥𝑝𝑑0𝜆=𝑚𝜆(𝜆+𝑑)2+𝑝2𝑥𝑝𝑑0𝜆,(A.17) and therefore we obtain the relation𝜆(𝜆+𝑑)=𝑚2+𝑝2𝑝𝑑.(A.18) With Δ=𝑑+𝜆, we have (Δ𝑑)Δ=𝑚2+𝑝2𝑝𝑑(Δ𝑝)𝑝+(Δ𝑝)(Δ𝑑)=𝑚2,(A.19) and we finally arrive at (Δ𝑑+𝑝)(Δ𝑝)=𝑚2,(A.20) or 1Δ=2𝑑+(𝑑2𝑝)2+4𝑚2.(A.21)
A.3. General Cases
Now for completeness, we list the relations between the conformal dimension Δ and the mass for the various bulk fields in AdS𝑑+1:(1)scalars [3]: Δ±=(1/2)(𝑑±𝑑2+4𝑚2),(2)spinors [219]: Δ=(1/2)(𝑑+2|𝑚|),(3)vectors (entries 3. and 4. are for forms with Maxwell type actions.): Δ±=(1/2)(𝑑±(𝑑2)2+4𝑚2),(4)𝑝-forms [218]: Δ±=(1/2)(𝑑±(𝑑2𝑝)2+4𝑚2),(5)first-order (𝑑/2)-forms (𝑑 even) (see [220] for 𝑑=4 case.): Δ=(1/2)(𝑑+2|𝑚|),(6)spin-3/2 [221, 222]: Δ=(1/2)(𝑑+2|𝑚|),(7)massless spin-2 [223]: Δ=𝑑.
B. D3/D7 Model and 𝑈(1) Axial Symmetry
In the original AdS/CFT, the duality between type IIB superstring theory on AdS5×𝑆5 and 𝒩=4 super-Yang-Mills theory with gauge group 𝑆𝑈(𝑁𝑐) can be embodied by the low-energy dynamics of a stack of 𝑁𝑐 D3 branes in Minkowski space. All matter fields in the gauge theory produced by the D3 branes are in the adjoint representation of the gauge group. To introduce the quark degrees of freedom in the fundamental representation, we introduce some other branes in this supersymmetry theory on top of the D3 branes.
B.1. Adding Flavour
It was shown in [17] that by introducing 𝑁𝑓 D7 branes into AdS5×𝑆5, 𝑁𝑓 dynamical quarks can be added to the gauge theory, breaking the supersymmetry to 𝒩=2. The simplest way to treat D3/D7 system is to work in the limit where the D7 is a probe brane, which means that only a small number of D7 branes are added, while the number of D3 branes 𝑁𝑐 goes to infinity. In this limit 𝑁𝑓𝑁𝑐 we may neglect the back-reaction of the D7 branes on AdS5×𝑆5 geometry. In field theory side, this corresponds to ignoring the quark loops, quenching the gauge theory.
The D7 branes are added in such a way that they extend parallel in Minkowski space and extend in spacetime as given in Table 3. The massless modes of open strings that both end on the 𝑁𝑐 D3 branes give rise to 𝒩=4 degrees of freedom of supergravity on AdS5×𝑆5 consisting of the 𝑆𝑈(𝑁𝑐) vector bosons, four fermions, and six scalars. In the limit of large 𝑁𝑐 at fixed but large ‘t Hooft coupling 𝜆=𝑔2YM𝑁𝑐=𝑔𝑠𝑁𝑐1, the D3 branes can be replace with near horizon geometry that is given by𝑑𝑠2=𝑟2𝑅2𝑑𝑡2+𝑑𝑥21+𝑑𝑥22+𝑑𝑥23+𝑅2𝑟2𝑑𝑦2=𝑟2𝑅2𝑑𝑡2+𝑑𝑥21+𝑑𝑥22+𝑑𝑥23+𝑅2𝑟2𝑑𝜌2+𝜌2𝑑Ω23+𝑑𝑦25+𝑑𝑦26,(B.1) where 𝑦=(𝑦1,,𝑦6) parameterize the 456789 space and 𝑟2𝑦2. 𝑅 is the radius of curvature 𝑅2=4𝜋𝑔𝑠𝑁𝑐𝛼 and 𝑑Ω23 is the three-sphere metric. The dynamics of the probe D7 brane is described by the combined DBI and Chern-Simons actions [5, 224]:𝑆D7=𝑇7𝑑8𝑥𝑃[𝑔]det𝑎𝑏+2𝜋𝛼𝐹𝑎𝑏+2𝜋𝛼22𝑇7𝑃𝐶(4)𝐹𝐹,(B.2) where 𝑔 is the bulk metric (B.1) and 𝐶(4) is the four-form potential. 𝑇7=1/((2𝜋)7𝑔𝑠𝛼4) is the D7 brane tension and 𝑃 denotes the pullback. 𝐹𝑎𝑏 is the world-volume field strength.
Table 3: The D3/D7-brane intersection in 9+1-dimensional flat space.
The addition of D7 branes to this system as in Table 3 breaks the supersymmetry to 𝒩=2. The lightest modes of the 3-7 and 7-3 open strings correspond to the quark supermultiplets in the field theory. If the D7 brane and the D3 brane overlap, then 𝑆𝑂(6) symmetry is broken into 𝑆𝑂(4)×𝑆𝑂(2)𝑆𝑂(2)𝑅×𝑆𝑂(2)𝐿×𝑈(1)𝑅 in the transverse directions to D3 and so preserves 1/4 of the supersymmetry. The 𝑆𝑂(4) rotates in 4567, while the 𝑆𝑂(2) group acts on 89 in 3. The induced metric on D7 takes the form, in general, as𝑑𝑠2D7=𝑟2𝑅2𝜂𝜇𝜈𝑑𝑥𝜇𝑑𝑥𝜈+𝑅2𝑟21+𝑦52+𝑦62𝑑𝜌2+𝜌2𝑑Ω23,(B.3) where 𝑦5=𝑑𝑦5/𝑑𝜌 and 𝑦6=𝑑𝑦6/𝑑𝜌. When the D7 brane and the D3 brane overlap, the embedding is 𝑦5=0,𝑦6=0,(B.4) and the induced metric on the D7 brane is replaced by𝑑𝑠2D7=𝜌2𝑅2𝜂𝜇𝜈𝑑𝑥𝜇𝑑𝑥𝜈+𝑅2𝜌2𝑑𝜌2+𝜌2𝑑Ω23.(B.5) The D7 brane fills AdS5 and is wrapping a three sphere of 𝑆5. In this case the quarks are massless and the R-symmetry of the theory is 𝑆𝑈(2)𝑅×𝑈(1)𝑅 and we have an extra 𝑈(1)𝑅 chiral symmetry.
If the D7 brane is separated from the D3 branes in the 89-plane direction by distance 𝐿, then the minimum length string has nonzero energy and the quark gains a finite mass, 𝑚𝑞=𝐿/2𝜋𝛼. It is known that the R-symmetry is then only 𝑆𝑈(2)𝑅 and separation of D7 and D3 breaks the 𝑆𝑂(2)𝑈(1)𝑅 that acts on the 89-plane. In this case, we can set for the embedding as 𝑦5=0,𝑦6=𝑦6(𝜌).(B.6) Then, the action for a static D7 embedding (with 𝐹𝑎𝑏 zero on its world volume) becomes 𝑆D7=𝑇7𝑑8𝑥𝑃[𝑔]det𝑎𝑏=𝑇7𝑑8𝑥det𝑔𝑎𝑏1+𝑔𝑎𝑏𝜕𝑎𝑦𝑖𝜕𝑏𝑦𝑗𝑔𝑖𝑗=𝑇7𝑑8𝑥𝜖3𝜌3𝜕1+𝜌𝑦52+𝜕𝜌𝑦62,(B.7) where 𝑖,𝑗=5,6 and 𝜖3 is the determinant from the three sphere. The ground state configuration of the D7 brane is given by the equation of motion with 𝑦5=0:𝑑𝜌𝑑𝜌3𝜕𝜌𝑦6𝜕1+𝜌𝑦62=0.(B.8) The solution of this equation has an asymptotic behavior at UV (𝜌) as𝑦6𝑐𝑚+𝜌2+.(B.9) Now we can identify [198] that 𝑚 corresponds to the quark mass and 𝑐 is for the quark condensate 𝜓𝜓 in agreement with the AdS/CFT dictionary.
B.2. Chiral Symmetry Breaking
One of the significant features of QCD is chiral symmetry breaking by a quark condensate 𝜓𝜓. The 𝑈(1) symmetry under which 𝜓 and 𝜓 transform as 𝜓𝑒𝑖𝛼𝜓 and 𝜓𝑒𝑖𝛼𝜓 in the gauge theory corresponds to a 𝑈(1) isometry in the 𝑦5𝑦6 plane transverse to the D7 brane. This 𝑈(1) symmetry can be explicitly broken by a nonvanishing quark mass due to the separation of the D7 brane from the stack of D3 branes in the 𝑦5+𝑖𝑦6 direction. Assume that the embedding as 𝑦5=0 and 𝑦6𝑐/𝜌2 and then by a small rotation 𝑒𝑖𝜖 on 𝑦5+𝑖𝑦6 generates 𝑦5𝜖𝑐/𝜌2 and 𝑦6𝑦6 up to the 𝒪(𝜖2) order.
In [18] the embedding of a D7 probe brane is embodied in the Constable-Myers background and the regular solution 𝑦6𝑚+𝑐/𝜌2 of the embedding 𝑦5=0, 𝑦6=𝑦6(𝜌) shows the behavior 𝑐0 as 𝑚0 which corresponds to the spontaneous chiral symmetry breaking by a quark condensate. In [225], the chiral symmetry breaking comes from a cosmological constant with a constant dilaton configuration which is dual to the 𝒩=4 gauge theory in a four-dimensional AdS space.
B.3. Meson Mass Spectrum
The open string modes with both ends on the flavour D7 branes are in the adjoint of the 𝑈(𝑁𝑓) flavour symmetry of the quarks and hence can be interpreted as the mesonic degrees of freedom. As an example, we discuss the fluctuation modes for the scalar fields (with spin 0) following the argument of [226]. The directions transverse to the D7 branes are chosen to be 𝑦5 and 𝑦6 and the embedding is𝑦5=0+𝜒,𝑦6=𝐿+𝜑,(B.10) where 𝛿𝑦5=𝜒 and 𝛿𝑦6=𝜑 are the scalar fluctuations of the transverse direction. To calculate the spectra of the world-volume fields it is sufficient to work to quadratic order. For the scalars, we can write the relevant Lagrangian density asD7=𝑇7[𝑔]det𝑃𝑎𝑏=𝑇7det𝑔𝑎𝑏1+𝑔𝑎𝑏𝜕𝑎𝜒𝜕𝑏𝜒𝑔55+𝜕𝑎𝜑𝜕𝑏𝜑𝑔66=𝑇7det𝑔𝑎𝑏1+𝑔𝑎𝑏𝑅2𝑟2𝜕𝑎𝜒𝜕𝑏𝜒+𝜕𝑎𝜑𝜕𝑏𝜑𝑇7det𝑔𝑎𝑏11+2𝑅2𝑟2𝑔𝑎𝑏𝜕𝑎𝜒𝜕𝑏𝜒+𝜕𝑎𝜑𝜕𝑏𝜑,(B.11) where 𝑃[𝑔]𝑎𝑏 is the induced metric on the D7 world-volume. In spherical coordinates with 𝑟2=𝜌2+𝐿2, this can be written asD7𝑇7𝜌3𝜖311+2𝑅2𝜌2+𝐿2𝑔𝑎𝑏𝜕𝑎𝜒𝜕𝑏𝜒+𝜕𝑎𝜑𝜕𝑏𝜑,(B.12) where 𝜖3 is the determinant of the metric on the three sphere. Then the equations of motion become𝜕𝑎𝜌3𝜖3𝜌2+𝐿2𝑔𝑎𝑏𝜕𝑏Φ=0,(B.13) where Φ is used to denote the real fluctuation either 𝜒 or 𝜑. Evaluating a bit more, we have𝑅4𝜌2+𝐿22𝜕𝜇𝜕𝜇1Φ+𝜌3𝜕𝜌𝜌3𝜕𝜌Φ+1𝜌2𝑖𝑖Φ=0,(B.14) where 𝑖 is the covariant derivative on the three-sphere. We apply the separation of variables to write the modes asΦ=𝜙(𝜌)𝑒𝑖𝑘𝑥𝒴𝑆3,(B.15) where 𝒴(𝑆3) are the scalar spherical harmonics on 𝑆3, which transform in the (/2,/2) representation of 𝑆𝑂(4) and satisfy𝑖𝑖𝒴=(+2)𝒴.(B.16) The meson mass is defined by𝑀2𝑘2.(B.17) Now we define 𝜚=𝜌/𝐿 and 𝑀2=𝑘2𝑅4/𝐿2, and then the equation for 𝜙(𝜌) is𝜕2𝜚3𝜙+𝜚𝜕𝜚𝜙+𝑀21+𝜚22(+2)𝜚2𝜙=0.(B.18) This equation was solved in [226] in terms of the hypergeometric function. To solve the equation, we first set𝜙(𝜚)=1+𝜚1+𝜚2𝛼𝑃(𝜚),(B.19) where2𝛼=1+1+𝑀20.(B.20) With a new variable 𝑦=𝜚2, (B.18) becomes𝑦(1𝑦)𝑃[]𝑃(𝑦)+𝑐(𝑎+𝑏+1)𝑦(𝑦)𝑎𝑏𝑃(𝑦)=0,(B.21) where 𝑎=𝛼, 𝑏=𝛼++1, and 𝑐=+2. The general solution is taken by 𝛼0, and by noting that the scalar fluctuations are real for <𝑦0, one finds, up to a normalization constant, the solution of 𝜙:𝜙𝜌(𝜌)=𝜌2+𝐿2𝛼𝐹𝜌𝛼,𝛼++1;+2;2𝐿2.(B.22) Imposing the normalizability at 𝜌, we obtain𝛼++1=𝑛,𝑛=0,1,2,.(B.23) The solution is then𝜙𝜌(𝜌)=𝜌2+𝐿2𝑛++1𝐹𝜌(𝑛++1),𝑛;+2;2𝐿2,(B.24) and from the condition (B.23) we get𝑀2=4(𝑛++1)(𝑛++2).(B.25) Then by the definition of meson mass (B.17), we derive the four-dimensional mass spectrum of the scalar meson:𝑀𝑠(𝑛,)=2𝐿𝑅2(𝑛++1)(𝑛++2).(B.26)
B.4. Mesons at Finite Temperature
In previous sections, we have focused on gauge theories and their gravity dual at zero temperature. To understand the thermal properties of gauge theories using the holography, we work with the AdS-Schwarzschild black hole which is dual to 𝒩=4 gauge theory at finite temperature [3, 12]. The Euclidean AdS-Schwarzschild solution is given by𝑑𝑠2=𝐾(𝑟)𝑅2𝑑𝜏2+𝑅2𝑑𝑟2+𝑟𝐾(𝑟)2𝑅2𝑑𝑥2+𝑅2𝑑Ω25,(B.27) where𝐾(𝑟)=𝑟2𝑟14𝐻𝑟4.( |
f762c4fc6bbc2419 | This is a good article. Click here for more information.
Period 1 element
From Wikipedia, the free encyclopedia
Jump to: navigation, search
Period 1 in the periodic table
Hydrogen (diatomic nonmetal)
Helium (noble gas)
Lithium (alkali metal)
Beryllium (alkaline earth metal)
Boron (metalloid)
Carbon (polyatomic nonmetal)
Nitrogen (diatomic nonmetal)
Oxygen (diatomic nonmetal)
Fluorine (diatomic nonmetal)
Neon (noble gas)
Sodium (alkali metal)
Magnesium (alkaline earth metal)
Aluminium (post-transition metal)
Silicon (metalloid)
Phosphorus (polyatomic nonmetal)
Sulfur (polyatomic nonmetal)
Chlorine (diatomic nonmetal)
Argon (noble gas)
Potassium (alkali metal)
Calcium (alkaline earth metal)
Scandium (transition metal)
Titanium (transition metal)
Vanadium (transition metal)
Chromium (transition metal)
Manganese (transition metal)
Iron (transition metal)
Cobalt (transition metal)
Nickel (transition metal)
Copper (transition metal)
Zinc (transition metal)
Gallium (post-transition metal)
Germanium (metalloid)
Arsenic (metalloid)
Selenium (polyatomic nonmetal)
Bromine (diatomic nonmetal)
Krypton (noble gas)
Rubidium (alkali metal)
Strontium (alkaline earth metal)
Yttrium (transition metal)
Zirconium (transition metal)
Niobium (transition metal)
Molybdenum (transition metal)
Technetium (transition metal)
Ruthenium (transition metal)
Rhodium (transition metal)
Palladium (transition metal)
Silver (transition metal)
Cadmium (transition metal)
Indium (post-transition metal)
Tin (post-transition metal)
Antimony (metalloid)
Tellurium (metalloid)
Iodine (diatomic nonmetal)
Xenon (noble gas)
Caesium (alkali metal)
Barium (alkaline earth metal)
Lanthanum (lanthanide)
Cerium (lanthanide)
Praseodymium (lanthanide)
Neodymium (lanthanide)
Promethium (lanthanide)
Samarium (lanthanide)
Europium (lanthanide)
Gadolinium (lanthanide)
Terbium (lanthanide)
Dysprosium (lanthanide)
Holmium (lanthanide)
Erbium (lanthanide)
Thulium (lanthanide)
Ytterbium (lanthanide)
Lutetium (lanthanide)
Hafnium (transition metal)
Tantalum (transition metal)
Tungsten (transition metal)
Rhenium (transition metal)
Osmium (transition metal)
Iridium (transition metal)
Platinum (transition metal)
Gold (transition metal)
Mercury (transition metal)
Thallium (post-transition metal)
Lead (post-transition metal)
Bismuth (post-transition metal)
Polonium (post-transition metal)
Astatine (metalloid)
Radon (noble gas)
Francium (alkali metal)
Radium (alkaline earth metal)
Actinium (actinide)
Thorium (actinide)
Protactinium (actinide)
Uranium (actinide)
Neptunium (actinide)
Plutonium (actinide)
Americium (actinide)
Curium (actinide)
Berkelium (actinide)
Californium (actinide)
Einsteinium (actinide)
Fermium (actinide)
Mendelevium (actinide)
Nobelium (actinide)
Lawrencium (actinide)
Rutherfordium (transition metal)
Dubnium (transition metal)
Seaborgium (transition metal)
Bohrium (transition metal)
Hassium (transition metal)
Meitnerium (unknown chemical properties)
Darmstadtium (unknown chemical properties)
Roentgenium (unknown chemical properties)
Copernicium (transition metal)
Ununtrium (unknown chemical properties)
Flerovium (post-transition metal)
Ununpentium (unknown chemical properties)
Livermorium (unknown chemical properties)
Ununseptium (unknown chemical properties)
Ununoctium (unknown chemical properties)
A period 1 element is one of the chemical elements in the first row (or period) of the periodic table of the chemical elements. The periodic table is laid out in rows to illustrate recurring (periodic) trends in the chemical behaviour of the elements as their atomic number increases: a new row is begun when chemical behaviour begins to repeat, meaning that elements with similar behaviour fall into the same vertical columns. The first period contains fewer elements than any other row in the table, with only two: hydrogen and helium. This situation can be explained by modern theories of atomic structure. In a quantum mechanical description of atomic structure, this period corresponds to the filling of the 1s orbital. Period 1 elements obey the duet rule in that they need two electrons to complete their valence shell. The maximum number of electrons that these elements can accommodate is two, both in the 1s orbital. Therefore, period 1 can have only two elements.
Periodic trends[edit]
All other periods in the period table contain at least 8 elements, and it is often helpful to consider periodic trends across the period. However, period 1 contains only two elements, so this concept does not apply here.
In terms of vertical trends down groups, helium can be seen as a typical noble gas at the head of Group 18, but as discussed below, hydrogen's chemistry is unique and it is not easily assigned to any group.
Position of period 1 elements in the periodic table[edit]
Although both hydrogen and helium are in the s-block, neither of them behaves similarly to other s-block elements. Their behaviour is so different from the other s-block elements that there is considerable disagreement over where these two elements should be placed in the periodic table.
Hydrogen is sometimes placed above lithium,[1] above carbon,[2] above fluorine,[2][3] above both lithium and fluorine (appearing twice),[4] or left floating above the other elements and not assigned to any group[4] in the periodic table.
Helium is almost always placed above neon (which is in the p-block) in the periodic table as a noble gas,[1] although it is occasionally placed above beryllium due to their similar electron configuration.[5]
Chemical element Chemical series Electron configuration
1 H Hydrogen Diatomic nonmetal 1s1
2 He Helium Noble gas 1s2
Main article: Hydrogen
Hydrogen discharge tube
Deuterium discharge tube
Hydrogen (H) is the chemical element with atomic number 1. At standard temperature and pressure, hydrogen is a colorless, odorless, nonmetallic, tasteless, highly flammable diatomic gas with the molecular formula H2. With an atomic mass of 1.00794 amu, hydrogen is the lightest element.[6]
Hydrogen is the most abundant of the chemical elements, constituting roughly 75% of the universe's elemental mass.[7] Stars in the main sequence are mainly composed of hydrogen in its plasma state. Elemental hydrogen is relatively rare on Earth, and is industrially produced from hydrocarbons such as methane, after which most elemental hydrogen is used "captively" (meaning locally at the production site), with the largest markets almost equally divided between fossil fuel upgrading, such as hydrocracking, and ammonia production, mostly for the fertilizer market. Hydrogen may be produced from water using the process of electrolysis, but this process is significantly more expensive commercially than hydrogen production from natural gas.[8]
The most common naturally occurring isotope of hydrogen, known as protium, has a single proton and no neutrons.[9] In ionic compounds, it can take on either a positive charge, becoming a cation composed of a bare proton, or a negative charge, becoming an anion known as a hydride. Hydrogen can form compounds with most elements and is present in water and most organic compounds.[10] It plays a particularly important role in acid-base chemistry, in which many reactions involve the exchange of protons between soluble molecules.[11] As the only neutral atom for which the Schrödinger equation can be solved analytically, study of the energetics and spectrum of the hydrogen atom has played a key role in the development of quantum mechanics.[12]
The interactions of hydrogen with various metals are very important in metallurgy, as many metals can suffer hydrogen embrittlement,[13] and in developing safe ways to store it for use as a fuel.[14] Hydrogen is highly soluble in many compounds composed of rare earth metals and transition metals[15] and can be dissolved in both crystalline and amorphous metals.[16] Hydrogen solubility in metals is influenced by local distortions or impurities in the metal crystal lattice.[17]
Main article: Helium
Helium discharge tube
Helium (He) is a colorless, odorless, tasteless, non-toxic, inert monatomic chemical element that heads the noble gas series in the periodic table and whose atomic number is 2.[18] Its boiling and melting points are the lowest among the elements and it exists only as a gas except in extreme conditions.[19]
Helium was discovered in 1868 by French astronomer Pierre Janssen, who first detected the substance as an unknown yellow spectral line signature in light from a solar eclipse.[20] In 1903, large reserves of helium were found in the natural gas fields of the United States, which is by far the largest supplier of the gas.[21] The substance is used in cryogenics,[22] in deep-sea breathing systems,[23] to cool superconducting magnets, in helium dating,[24] for inflating balloons,[25] for providing lift in airships,[26] and as a protective gas for industrial uses such as arc welding and growing silicon wafers.[27] Inhaling a small volume of the gas temporarily changes the timbre and quality of the human voice.[28] The behavior of liquid helium-4's two fluid phases, helium I and helium II, is important to researchers studying quantum mechanics and the phenomenon of superfluidity in particular,[29] and to those looking at the effects that temperatures near absolute zero have on matter, such as with superconductivity.[30]
Helium is the second lightest element and is the second most abundant in the observable universe.[31] Most helium was formed during the Big Bang, but new helium is being created as a result of the nuclear fusion of hydrogen in stars.[32] On Earth, helium is relatively rare and is created by the natural decay of some radioactive elements[33] because the alpha particles that are emitted consist of helium nuclei. This radiogenic helium is trapped with natural gas in concentrations of up to seven percent by volume,[34] from which it is extracted commercially by a low-temperature separation process called fractional distillation.[35]
1. ^ a b "International Union of Pure and Applied Chemistry > Periodic Table of the Elements". IUPAC. Retrieved 2011-05-01.
2. ^ a b Cronyn, Marshall W. (August 2003). "The Proper Place for Hydrogen in the Periodic Table". Journal of Chemical Education. 80 (8): 947–951. Bibcode:2003JChEd..80..947C. doi:10.1021/ed080p947.
3. ^ Vinson, Greg (2008). "Hydrogen is a Halogen". Retrieved January 14, 2012.
4. ^ a b Kaesz, Herb; Atkins, Peter (November–December 2003). "A Central Position for Hydrogen in the Periodic Table". Chemistry International. International Union of Pure and Applied Chemistry. 25 (6): 14. Retrieved January 19, 2012.
5. ^ Winter, Mark (1993–2011). "Janet periodic table". WebElements. Retrieved January 19, 2012.
6. ^ "Hydrogen – Energy". Energy Information Administration. Retrieved 2008-07-15.
7. ^ Palmer, David (November 13, 1997). "Hydrogen in the Universe". NASA. Retrieved 2008-02-05.
8. ^ Staff (2007). "Hydrogen Basics — Production". Florida Solar Energy Center. Retrieved 2008-02-05.
9. ^ Sullivan, Walter (1971-03-11). "Fusion Power Is Still Facing Formidable Difficulties". The New York Times.
10. ^ "hydrogen". Encyclopædia Britannica. 2008.
11. ^ Eustis, S. N.; Radisic, D; Bowen, KH; Bachorz, RA; Haranczyk, M; Schenter, GK; Gutowski, M (2008-02-15). "Electron-Driven Acid-Base Chemistry: Proton Transfer from Hydrogen Chloride to Ammonia". Science. 319 (5865): 936–939. Bibcode:2008Sci...319..936E. doi:10.1126/science.1151614. PMID 18276886.
12. ^ "Time-dependent Schrödinger equation". Encyclopædia Britannica. 2008.
13. ^ Rogers, H. C. (1968). "Hydrogen Embrittlement of Metals". Science. 159 (3819): 1057–1064. Bibcode:1968Sci...159.1057R. doi:10.1126/science.159.3819.1057. PMID 17775040.
14. ^ Christensen, C. H.; Nørskov, J. K.; Johannessen, T. (July 9, 2005). "Making society independent of fossil fuels — Danish researchers reveal new technology". Technical University of Denmark. Retrieved 2008-03-28.
15. ^ Takeshita, T.; Wallace, W.E.; Craig, R.S. (1974). "Hydrogen solubility in 1:5 compounds between yttrium or thorium and nickel or cobalt". Inorganic Chemistry. 13 (9): 2282–2283. doi:10.1021/ic50139a050.
16. ^ Kirchheim, R.; Mutschele, T.; Kieninger, W (1988). "Hydrogen in amorphous and nanocrystalline metals". Materials Science and Engineering. 99: 457–462. doi:10.1016/0025-5416(88)90377-1.
17. ^ Kirchheim, R. (1988). "Hydrogen solubility and diffusivity in defective and amorphous metals". Progress in Materials Science. 32 (4): 262–325. doi:10.1016/0079-6425(88)90010-2.
18. ^ "Helium: the essentials". WebElements. Retrieved 2008-07-15.
19. ^ "Helium: physical properties". WebElements. Retrieved 2008-07-15.
20. ^ "Pierre Janssen". MSN Encarta. Retrieved 2008-07-15.
21. ^ Theiss, Leslie (2007-01-18). "Where Has All the Helium Gone?". Bureau of Land Management. Retrieved 2008-07-15.
22. ^ Timmerhaus, Klaus D. (2006-10-06). Cryogenic Engineering: Fifty Years of Progress. Springer. ISBN 0-387-33324-X.
23. ^ Copel, M. (September 1966). "Helium voice unscrambling". Audio and Electroacoustics. 14 (3): 122–126. doi:10.1109/TAU.1966.1161862.
24. ^ "helium dating". Encyclopædia Britannica. 2008.
25. ^ Brain, Marshall. "How Helium Balloons Work". How Stuff Works. Retrieved 2008-07-15.
26. ^ Jiwatram, Jaya (2008-07-10). "The Return of the Blimp". Popular Science. Retrieved 2008-07-15.
27. ^ "When good GTAW arcs drift; drafty conditions are bad for welders and their GTAW arcs.". Welding Design & Fabrication. 2005-02-01.
28. ^ Montgomery, Craig (2006-09-04). "Why does inhaling helium make one's voice sound strange?". Scientific American. Retrieved 2008-07-15.
29. ^ "Probable Discovery Of A New, Supersolid, Phase Of Matter". Science Daily. 2004-09-03. Retrieved 2008-07-15.
30. ^ Browne, Malcolm W. (1979-08-21). "Scientists See Peril In Wasting Helium; Scientists See Peril in Waste of Helium". The New York Times.
31. ^ "Helium: geological information". WebElements. Retrieved 2008-07-15.
32. ^ Cox, Tony (1990-02-03). "Origin of the chemical elements". New Scientist. Retrieved 2008-07-15.
33. ^ "Helium supply deflated: production shortages mean some industries and partygoers must squeak by.". Houston Chronicle. 2006-11-05.
34. ^ Brown, David (2008-02-02). "Helium a New Target in New Mexico". American Association of Petroleum Geologists. Retrieved 2008-07-15.
35. ^ Voth, Greg (2006-12-01). "Where Do We Get the Helium We Use?". The Science Teacher.
Further reading |
7a929e4cbbfc6dc6 | To capture the dynamics of a system, constraints are put forward to form a closed set of equations:
1. First, the quantities describing the system need to be consistent;
2. Conservation laws are then assumed to hold, on grounds of logic or belief;
3. Consistency and compatibility conditions may be necessary beyond conservation laws;
4. The system of equations comes to a closure when we bridge the gap with constitutive relations, most of which, as they appear in natural phenomena, are linear (approximations).
Conservation Laws and Continuity Equations
A continuity equation represents a conservation law: $\partial_\mu j^\mu=0$, where $j^\mu$ is the conserved current.
Table 1: Pairs of conserved currents and corresponding continuity equations
Conserved current Continuity equations
Energy flux $\nabla \cdot \mathbf{q} + \frac{ \partial u}{\partial t} = 0$
Electric current $\nabla \cdot \mathbf{J} = - {\partial \rho \over \partial t}$
Probability current $\nabla \cdot \mathbf{j} + \frac{\partial \lvert \Psi \rvert^2}{\partial t} = 0$
Current of particles in phase space $\frac{\partial\rho}{\partial t}+\nabla \cdot \mathbf{J}=0$, where $J = (\rho\dot{q}^i,\rho\dot{p}_i)$ and $\rho = \rho(q^i, p_i)$
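As a quick numerical illustration (not part of the original notes), the sketch below checks the charge-continuity equation on a 1D grid for a Gaussian pulse advected at constant velocity; the profile and parameters are arbitrary choices, and Python is used only for illustration.

```python
import numpy as np

# rho(x, t) = exp(-(x - v t)^2) advected at constant velocity v, with J = rho * v,
# should satisfy d(rho)/dt + d(J)/dx = 0.
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
v, dt = 1.0, 1e-4

rho = lambda t: np.exp(-(x - v * t) ** 2)   # charge density at time t
J = lambda t: v * rho(t)                    # current density at time t

drho_dt = (rho(dt) - rho(-dt)) / (2 * dt)   # central difference in time
dJ_dx = np.gradient(J(0.0), dx)             # central difference in space

print("max residual:", np.max(np.abs(drho_dt + dJ_dx)))  # small -> continuity holds to discretization error
```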
Justification of prevalence of conservation laws:
1. Conservation laws are the most fundamental relations and can be taken to hold universally;
2. The variables linked by these equations are the main influencing factors in most problems;
3. The number of equations must match the number of unknowns: once the main influencing parameters of the problem under study are fixed, the number of equations relating them is fixed as well; any extra equation (relation) could only introduce new variables into the already closed system, otherwise a contradiction would arise.
Equations of Motion and Equilibrium
By introducing force, we could write time derivative of (conserved) quantities as equations of motion.
Table 2: Equations of Motion and Equilibrium in Mechanics
Field of study Equations of motion/equilibrium
Classical mechanics $\frac{d p }{d t} = F$; $\frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t} = \boldsymbol{\tau}$; $\text{d} T = \text{d} W_{ext} + \text{d} W_{int}$
Elasticity $\boldsymbol{\nabla}\cdot\boldsymbol{\sigma} + \mathbf{F} = \rho\ddot{\mathbf{u}}$
Some authors dismiss the concept of force.
Arguments have likewise been raised against the assumption that quantities are conserved.
The work of Carnot and the subsequent formulation of the Second Law of Thermodynamics in the 19th century showed that not every physical quantity is conserved over time. ... Hence, a "steady-state" worldview based solely on Newton's laws and the conservation laws does not take entropy into account.
Consistency and Compatibility Conditions
Reynolds Transport Theorem in continuum mechanics: (a generalization of the Leibniz integral rule, i.e. differentiation under the integral sign.)
$$\frac{\mathrm{d}}{\mathrm{d}t}\int_{\tau}{\varphi}~{\text{d}\tau} = \frac{\partial}{\partial t} \int_{\text{CV}}{ \varphi ~{\text{d}\tau} } + \int_{\text{CS}} \varphi ({\mathbf{v}}^{r}\cdot {\mathbf{n}}) ~{\text{d}A}$$
Consistency and compatibility conditions in elasticity:
1. Geometric relations (strain-displacement equations): $\boldsymbol{\varepsilon} =\tfrac{1}{2} \left[ \boldsymbol{\nabla}\mathbf{u}+\mathbf{u} \boldsymbol{\nabla}\right]$
2. Saint-Venant's compatibility condition: $\nabla \times \boldsymbol{\varepsilon} \times \nabla = 0$
Constitutive Relations
Polarization $P$ is the electric dipole moment per unit volume. Magnetization $M$ is the magnetic dipole moment per unit volume. The relations between electromagnetic fields and polarization or magnetization depend on how the dipoles respond to the applied fields. When the applied fields are weak, to first order approximation, the dipole moments will have a linear relation to the field.
Linear relations between polarizations and electromagnetic fields:
1. Polarization and electric field: $\mathbf{P} = \varepsilon_0 \chi_e \mathbf{E}$
2. Magnetization and magnetic field: $\mathbf{M} = \chi_m \mathbf{H}$
Here, $\varepsilon_0$ is the electric constant, aka vacuum permittivity; $\chi_e$ is electric susceptibility; $\chi_m$ is magnetic susceptibility; $\mathbf{H}$ is the auxiliary magnetic field, defined by $\mathbf{B}=\mu_0(\mathbf{H + M})$.
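A minimal sketch (not from the notes) that evaluates these linear constitutive relations for illustrative field and susceptibility values; all numbers except the vacuum constants are placeholders.

```python
import numpy as np

eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
mu0 = 4e-7 * np.pi           # vacuum permeability, H/m
chi_e, chi_m = 2.0, 1e-4     # electric and magnetic susceptibilities (placeholders)

E = np.array([1.0e3, 0.0, 0.0])   # applied electric field, V/m
H = np.array([0.0, 50.0, 0.0])    # auxiliary magnetic field, A/m

P = eps0 * chi_e * E              # polarization, C/m^2
M = chi_m * H                     # magnetization, A/m
D = eps0 * E + P                  # equivalently eps0 * (1 + chi_e) * E
B = mu0 * (H + M)                 # B = mu0 * (H + M)

print(P, M, D, B)
```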
Transport Phenomena
Several laws describe the transport of matter, momentum, heat, and charge. In each case they read: "the flux (density) is proportional to a gradient, and the constant of proportionality is a characteristic of the material."
Table 3: Linear Constitutive Relations of Transport Phenomena
Name Expression
Fick's law of diffusion $\mathbf{J} = -D\nabla \phi$
Newton's law of viscosity $\tau = \mu \frac{\partial u}{\partial y}$
Fourier's law of thermal conduction $\mathbf{q} = - k {\nabla} T$
Ohm's law of electric conduction $\mathbf{J} = \sigma \mathbf{E}$
Darcy's law for porous flow $q = -\frac{k}{\mu} \nabla P$
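As a one-dimensional illustration (not part of the table), Fourier's law applied to a prescribed temperature profile; the conductivity value and the profile are placeholders.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)                        # position, m
T = 300.0 + 50.0 * np.exp(-((x - 0.5) / 0.1) ** 2)    # temperature profile, K
k = 0.6                                               # thermal conductivity, W/(m K), placeholder

q = -k * np.gradient(T, x)                            # Fourier's law: q = -k dT/dx
print("peak heat flux magnitude:", np.max(np.abs(q)), "W/m^2")
```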
Continuum Mechanics
Field/Subject Constitutive Relations
fluid mechanics (Newtonian fluid) $\tau_{ij} = 2\mu (s_{ij} - \tfrac{1}{3} s_{kk} \delta_{ij})$
linear elasticity (tensor and isotropic form) $\boldsymbol{\sigma} = \mathsf{C}:\boldsymbol{\varepsilon}$; $\sigma_{ij} = \lambda \varepsilon_{kk} \delta_{ij} + 2\mu \varepsilon_{ij}$
Dynamic (shear) viscosity $\mu$ is the analogue of the shear modulus $\mu$: for linear isotropic materials, both play the role of Lamé's second parameter. Setting the second (bulk) viscosity to zero is known as the Stokes assumption.
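A short sketch (not from the notes) applying the isotropic form $\sigma_{ij} = \lambda \varepsilon_{kk}\delta_{ij} + 2\mu\varepsilon_{ij}$ to a sample strain tensor; the Lamé parameters are illustrative, not material data.

```python
import numpy as np

lam, mu = 5.0e9, 3.0e9                      # Lamé parameters, Pa (illustrative)

eps = np.array([[1.0e-4, 2.0e-5, 0.0],
                [2.0e-5, -5.0e-5, 0.0],
                [0.0,     0.0,    0.0]])    # symmetric small-strain tensor

sigma = lam * np.trace(eps) * np.eye(3) + 2.0 * mu * eps   # isotropic Hooke's law
print(sigma)
```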
Plasticity. Several nonlinear constitutive relations are presented, either in functional form or differential form.
Governing Equations
Eventually, we can close a system by collecting all the above into a set of governing equations.
Navier–Stokes equations for fluid mechanics:
$$\rho \left({\frac {\partial {\mathbf {v}}}{\partial t}}+{\mathbf {v}}\cdot \nabla {\mathbf {v}}\right)=-\nabla p+\nabla \cdot {\boldsymbol {{\mathsf {T}}}}+{\mathbf {f}}$$
Schrödinger equation for quantum physics: $$i\hbar\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi$$
Maxwell's equations for electrodynamics:
1. Gauss' law: $\nabla \cdot \mathbf{E} = \frac {\rho} {\varepsilon_0}$
2. Gauss's law for magnetism: $\nabla \cdot \mathbf{B} = 0$
3. Faraday's law: $\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}} {\partial t}$
4. Ampere's law with Maxwell's correction: $\nabla \times \mathbf{B} = \mu_0\left(\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}} {\partial t} \right)$
🏷 Category=Topics Category=Dynamical System |
8b99d498df96b37e | Nonlinear Schrödinger equation: looking for new waves
The nonlinear Schrödinger equation is one of the most important and most studied equations in mathematical physics. It is a relatively simple and compact model which provides a paradigmatic description of nonlinear waves in a variety of physical systems: water waves in oceans, light waves in optical fibers, Bose–Einstein condensates (peculiar gases of ultra-cold atoms) and plasmas. Its universality is definitely a reason for its success and for the continuous research which scholars have devoted to it for more than 50 years.
Most importantly, the nonlinear Schrödinger equation is integrable: it can be solved exactly by the inverse scattering transform and possesses an infinite number of conserved quantities (integrals of motion), which makes its solutions extremely rich and complex.
Notably, solitons are among the most interesting solutions of the nonlinear Schrödinger equation. Solitons, or solitary waves, are waves which are localized in space (or in time) and which propagate without changing their shape and without dispersing. Furthermore, they exhibit particle-like behavior: they emerge unchanged from collisions with other solitons.
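A minimal split-step Fourier sketch (not from the original post) propagating the fundamental sech soliton of the focusing cubic NLS, $i\psi_t + \tfrac{1}{2}\psi_{xx} + |\psi|^2\psi = 0$; the grid size, time step and step count are arbitrary choices.

```python
import numpy as np

# Exact soliton of i psi_t + 0.5 psi_xx + |psi|^2 psi = 0 is psi = sech(x) exp(i t / 2),
# so its modulus should stay unchanged under propagation.
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
dt, steps = 1e-3, 2000

psi = 1.0 / np.cosh(x)                  # initial sech soliton
half_disp = np.exp(-0.25j * k**2 * dt)  # half-step of dispersion in Fourier space

for _ in range(steps):
    psi = np.fft.ifft(half_disp * np.fft.fft(psi))
    psi *= np.exp(1j * np.abs(psi) ** 2 * dt)          # full nonlinear step
    psi = np.fft.ifft(half_disp * np.fft.fft(psi))

print("peak |psi| after propagation:", np.max(np.abs(psi)))  # ~1.0 for the soliton
```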
The richness and complexity of the nonlinear Schrödinger equation is even increased when its generalized forms are considered. Such generalized forms contain further terms to include the description of more phenomena (in addition to cubic nonlinearity and diffraction/dispersion).
Particularly interesting are terms describing a net exchange of energy between the wave and the environment in which it propagates, as well as variations and inhomogeneities of the environment itself, since these are very common scenarios occurring in nature.
On the right, a bullet (a pulse localized both in the direction of its motion and in the one orthogonal to it) of small waves (the Bogoliubov–de Gennes excitations) travels on top of a "sea" (the condensate) without spreading (unlike a normal pulse shown on the left)! (From: S. Kumar, A. M. Perego and K. Staliunas, Linear and Nonlinear Bullets of the Bogoliubov–de Gennes Excitations, Phys. Rev. Lett. 118, 044103 (2017). Video credit: Shubham Kumar)
My works in this field: |
2ac3017c00420ff5 | Correlated Rotational Alignment Spectroscopy
Visualizing Rotational Wave Packets (superposition of spherical harmonics)
Attached below is a Python script to display spherical harmonics and spherical harmonic superpositions. Spherical harmonics are the angular wavefunctions ("eigenfunctions") used to describe rotational states of molecules and the angular properties of electrons in atoms (i.e., "atomic orbitals"). A single eigenfunction of the Schrödinger equation is time-independent, but dynamics can be described by a superposition (sum) of eigenfunctions. E.g., the square of a sum of spherical harmonics predicts the angular orientation of a molecule as a function of time. To use the script, you need to have a functional version […] |
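The script itself is not reproduced in this copy; below is a minimal sketch of the kind of visualization described, assuming SciPy and Matplotlib are available (the quantum numbers, weights and grid are arbitrary, and time evolution would simply attach a phase factor to each term in the sum).

```python
import numpy as np
from scipy.special import sph_harm
import matplotlib.pyplot as plt

# |Y_{1,0} + Y_{3,0}|^2 on a (theta, phi) grid; scipy's sph_harm takes
# (m, l, azimuthal angle, polar angle) in that order.
theta = np.linspace(0, np.pi, 181)        # polar angle
phi = np.linspace(0, 2 * np.pi, 361)      # azimuthal angle
PHI, THETA = np.meshgrid(phi, theta)

Y = sph_harm(0, 1, PHI, THETA) + sph_harm(0, 3, PHI, THETA)   # arbitrary superposition

plt.imshow(np.abs(Y) ** 2, extent=[0, 360, 180, 0], aspect="auto")
plt.xlabel("phi (degrees)")
plt.ylabel("theta (degrees)")
plt.title("|Y(1,0) + Y(3,0)|^2")
plt.show()
```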
48440b4bdfba02d5 | 4.2.55. VIBROT
The program VIBROT is used to compute a vibration-rotation spectrum for a diatomic molecule, using as input a potential computed over a grid. The grid should be dense around equilibrium (recommended spacing 0.05 au) and should extend to large distance (say 50 au) if dissociation energies are computed.
The potential is fitted to an analytical form using cubic splines. The ro-vibrational Schrödinger equation is then solved numerically (using Numerov’s method) for one vibrational state at a time and for a number of rotational quantum numbers as specified by input. The corresponding wave functions are stored on file VIBWVS for later use. The ro-vibrational energies are analyzed in terms of spectroscopic constants. Weakly bound potentials can be scaled for better numerical precision.
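The Molcas source is not shown here; the following generic sketch illustrates the kind of Numerov propagation and energy bracketing described above, using a harmonic stand-in potential and a placeholder reduced mass (atomic units throughout).

```python
import numpy as np

mu_red = 1000.0                        # reduced mass, a.u. (placeholder)
V = lambda r: 0.5 * (r - 2.0) ** 2     # stand-in harmonic potential centred at r = 2 bohr

r = np.linspace(1.0, 3.0, 2001)
h = r[1] - r[0]

def numerov_boundary_value(E):
    """Integrate u'' = 2*mu_red*(V(r) - E)*u outward with Numerov; return u at the outer edge."""
    f = 2.0 * mu_red * (V(r) - E)
    w = 1.0 - h**2 * f / 12.0
    u = np.zeros_like(r)
    u[1] = 1e-6                        # small nonzero start just inside the inner boundary
    for n in range(1, len(r) - 1):
        u[n + 1] = (2.0 * (1.0 + 5.0 * h**2 * f[n] / 12.0) * u[n]
                    - w[n - 1] * u[n - 1]) / w[n + 1]
    return u[-1]

# A bound-state energy lies where the boundary value changes sign as E is scanned
# in steps (the bracketing VIBROT performs); here the exact answer is 0.5*omega.
omega = np.sqrt(1.0 / mu_red)
print(numerov_boundary_value(0.4 * omega), numerov_boundary_value(0.6 * omega))
```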
The program can also be fed with property functions, such as a dipole moment curve. Matrix elements over the ro-vib wave functions for the property in question are then computed. These results can be used to compute IR intensities and vibrational averages of different properties.
VIBROT can also be used to compute transition properties between different electronic states. The program is then run twice to produce two files of wave functions. These files are used as input in a third run, which will then compute transition matrices for input properties. The main use is to compute transition moments, oscillator strengths, and lifetimes for ro-vib levels of electronically excited states. The asymptotic energy difference between the two electronic states must be provided using the ASYMptotic keyword.
Dependencies
The VIBROT program is free-standing and does not depend on any other program.
Files
Input files
The calculation of vibrational wave functions and spectroscopic constants uses no input files (except for the standard input). The calculation of transition properties uses VIBWVS files from two preceding VIBROT runs, redefined as VIBWVS1 and VIBWVS2.
Output files
VIBROT generates the file VIBWVS with vibrational wave functions for each \(v\) and \(J\) quantum number, when run in the wave function mode. If requested VIBROT can also produce files VIBPLT with the fitted potential and property functions for later plotting.
Input
This section describes the input to the VIBROT program in the Molcas program system. The program name is
&VIBROT
Keywords
The first keyword to VIBROT is an indicator for the type of calculation that is to be performed. Two possibilities exist:
ROVIbrational spectrum
VIBROT will perform a vib-rot analysis and compute spectroscopic constants.
TRANsition moments
VIBROT will compute transition moment integrals using results from two previous calculations of the vib-rot wave functions. In this case the keyword Observable should be included, and it will be interpreted as the transition dipole moment.
Note that only one of the above keywords can be used in a single calculation. If none is given the program will only process the input section.
After this first keyword follows a set of keywords, which are used to specify the run. Most of them are optional.
The compulsory keywords are:
Gives the masses of the two atoms. Write the mass number (an integer) and the chemical symbol Xx, in this order, for each of the two atoms in free format. If the mass number is zero for any atom, the mass of the most abundant isotope will be used. All isotope masses are stored in the program. You may introduce your own masses by giving a negative integer value to the mass number (one of them or both). The masses (in unified atomic mass units, or Da) are then read on the next (or next two) entry(ies). The isotopes of hydrogen can be given as H, D, or T.
Gives the potential as an arbitrary number of lines. Each line contains a bond distance (in au) and an energy value (in au). A plot file of the potential is generated if the keyword Plot is added after the last energy input. One more entry should then follow with three numbers specifying the start and end value for the internuclear distance and the distance between adjacent plot points. This input must only be given together with the keyword RoVibrational spectrum.
In addition you may want to specify some of the following optional input:
One single title line
The next entries give the number of grid points used in the numerical solution of the radial Schrödinger equation. The default value is 199. The maximum value that can be used is 4999.
The next entry contains two distances Rmin and Rmax (in au) specifying the range in which the vibrational wave functions will be computed. The default values are 1.0 and 5.0 au. Note that these values most often have to be given as input since they vary considerably from one case to another. If the range specified is too small, the program will give a message informing the user that the vibrational wave function is large outside the integration range.
The next entry specifies the number of vibrational quanta for which the wave functions and energies are computed. Default value is 3.
The next entry specifies the range of rotational quantum numbers. Default values are 0 to 5. If the orbital angular momentum quantum number (\(m_\ell\)) is non zero, the lower value will be adjusted to \(m_\ell\) if the start value given in input is smaller than \(m_\ell\).
The next entry specifies the value of the orbital angular momentum (0, 1, 2, etc.). Default value is zero.
This keyword is used to scale the potential, such that the binding energy is 0.1 au. This leads to better precision in the numerical procedure and is strongly advised for weakly bound potentials.
Only the wave function analysis will be carried out but not the calculation of spectroscopic constants.
This keyword indicates the start of input for radial functions of observables other than the energy, for example the dipole moment function. The next line gives a title for this observable. An arbitrary number of input lines follows. Each line contains a distance and the corresponding value for the observable. As for the potential, this input can also end with the keyword Plot, to indicate that a file of the function for later plotting is to be constructed. The next line then contains the minimum and maximum R-values and the distance between adjacent points. When this input is given with the top keyword RoVibrational spectrum the program will compute matrix elements for vibrational wave functions of the current electronic state. Transition moment integrals are instead obtained when the top keyword is Transition moments. In the latter case the calculation becomes rather meaningless if this input is not provided. The program will then only compute the overlap integrals between the vibrational wave functions of the two states. The keyword Observable can be repeated up to ten times in a single run. All observables should be given in atomic units.
The next entry gives the temperature (in K) at which the vibrational averaging of observables will be computed. The default is 300 K.
The next entry gives the starting value for the energy step used in the bracketing of the eigenvalues. The default value is 0.004 au (88 \(\text{cm}^{-1}\)). This value must be smaller than the zero-point vibrational energy of the molecule.
The next entry specifies the asymptotic energy difference between two potential curves in a calculation of transition matrix elements. The default value is zero atomic units.
By default, when the Transition moments keyword is given, only the transitions between the lowest rotational level in each vibrational state are computed. The keyword AllRotational specifies that the transitions between all the rotational levels are to be included. Note that this may result in a very large output file.
Requests the vibrational wave functions to be printed in the output file.
Input example
RoVibrational spectrum
Title = Vib-Rot spectrum for FeNi
Atoms = 0 Fe 0 Ni
1.0 -0.516768
1.1 -0.554562
Plot = 1.0 10.0 0.1
Grid = 150
Range = 1.0 10.0
Vibrations = 10
Rotations = 2 10
Orbital = 2
Dipole Moment
1.0 0.102354
1.1 0.112898
Plot = 1.0 10.0 0.1
Comments: The vibrational-rotation spectrum for \(\ce{FeNi}\) will be computed using the potential curve given in input. The 10 lowest vibrational levels will be obtained and for each level the rotational states in the range \(J\)=2 to 10. The vib-rot matrix elements of the dipole function will also be computed. A plot file of the potential and the dipole function will be generated. The masses for the most abundant isotopes of \(\ce{Fe}\) and \(\ce{Ni}\) will be selected. |
4e2b0e7c357a1926 | Sektion Physik, Universität München, Theresienstr. 37, D-80333 München, Germany
Department of Physics, University of Niš, P. O. Box 224, 18001 Niš, Yugoslavia
Steklov Mathematical Institute, Gubkin St. 8, GSP-1, 117966, Moscow, Russia
Institute of Physics, P. O. Box 57, 11001 Belgrade, Yugoslavia
We consider the formulation and some elaboration of p-adic and adelic quantum cosmology. The adelic generalization of the Hartle-Hawking proposal does not work in models with matter fields. p-Adic and adelic minisuperspace quantum cosmology is well defined as an ordinary application of p-adic and adelic quantum mechanics. It is illustrated by a few cosmological models in one, two and three minisuperspace dimensions. As a result of p-adic quantum effects and the adelic approach, these models exhibit some discreteness of the minisuperspace and cosmological constant. In particular, discreteness of the de Sitter space and its cosmological constant is emphasized.
1 Introduction
The main task of quantum cosmology is to describe the evolution of the universe in its very early stage. At this stage, the universe is in a quantum state, which is described by a wave function. Usually one takes it that this wave function is complex-valued and depends on some real parameters. Since quantum cosmology is related to Planck scale phenomena it is logical to reconsider its foundations. We will here maintain the standard point of view that the wave function takes complex values, but we will treat its arguments in a more complete way. Namely, we will regard space-time coordinates and matter fields to be adelic, i.e. they have real as well as p-adic properties simultaneously. This approach is motivated by the following reasons: (i) the field of rational numbers $\mathbb{Q}$, which contains all observational and experimental numerical data, is a dense subfield not only in the field of real numbers $\mathbb{R}$ but also in the fields of p-adic numbers $\mathbb{Q}_p$ ($p$ is any prime number), (ii) there is a plausible analysis within and over $\mathbb{Q}_p$ as well as that one related to $\mathbb{R}$, (iii) general mathematical methods and fundamental physical laws should be invariant under an interchange of the number fields $\mathbb{R}$ and $\mathbb{Q}_p$, (iv) there is a quantum gravity uncertainty while measuring distances around the Planck length $\ell_0 = \sqrt{\hbar G/c^3} \sim 10^{-33}\,\mathrm{cm}$,
which restricts the priority of archimedean geometry based on real numbers and gives rise to the employment of non-archimedean geometry related to p-adic numbers, (v) it seems to be quite reasonable to extend compact archimedean geometries by the nonarchimedean ones in the path integral method, and (vi) adelic quantum mechanics applied to quantum cosmology provides realization of all the above statements.
The successful application of p-adic numbers and adeles in modern theoretical and mathematical physics started in 1987, in the context of string amplitudes (for a review, see Refs. 2, 8 and 9). For a systematic research in this field, p-adic quantum mechanics and adelic quantum mechanics were formulated. They are quantum mechanics with complex-valued wave functions of p-adic and adelic arguments, respectively. In the unified form, adelic quantum mechanics contains ordinary and all p-adic quantum mechanics.
As there is no appropriate p-adic Schrödinger equation, there is also no p-adic generalization of the Wheeler-DeWitt equation. Instead of the differential approach, Feynman's path integral method will be exploited.
p-Adic gravity and the wave function of the universe were considered in the paper published in 1991. An idea of the fluctuating number fields at the Planck scale was introduced and it was suggested that we restrict the Hartle-Hawking proposal to the summation only over algebraic manifolds. It was shown that the wave function for the de Sitter minisuperspace model can be treated in the form of an infinite product of p-adic counterparts.
Another approach to quantum cosmology, which takes into account p-adic effects, was proposed in 1995. Like in adelic quantum mechanics, the adelic eigenfunction of the universe is a product of the corresponding eigenfunctions of real and all p-adic cases. p-Adic wave functions are defined by the p-adic generalization of the Hartle-Hawking path integral proposal. It was shown that in the framework of this procedure one obtains an adelic wave function for the de Sitter minisuperspace model. However, the adelic generalization with the Hartle-Hawking p-adic prescription does not work well when minisuperspace has more than one dimension, in particular, when matter fields are taken into consideration. The solution of this problem was found by treating minisuperspace cosmological models as models of adelic quantum mechanics.
In this paper we consider adelic quantum cosmology as an application of adelic quantum mechanics to the minisuperspace models. It will be illustrated by one-, two- and three-dimensional minisuperspace models. As a result of p-adic effects and the adelic approach, in these models there is some discreteness of minisuperspace and cosmological constant. This kind of discreteness was obtained for the first time in the context of the adelic de Sitter quantum model.
In the next section we give some basic facts on p-adic and adelic mathematics. Section 3 is devoted to a brief review of p-adic and adelic quantum mechanics. p-Adic and adelic quantum cosmology are formulated in Sec. 4. Sections 5 and 6 contain some concrete minisuperspace models. At the end, we give some concluding remarks.
2 p-Adic Numbers and Adeles
We give here a brief survey of some basic properties of p-adic numbers and adeles, which we exploit in this work.
Completion of $\mathbb{Q}$ with respect to the standard absolute value $|\cdot|_\infty$ gives $\mathbb{R}$, and an algebraic extension of $\mathbb{R}$ makes $\mathbb{C}$. According to the Ostrowski theorem any non-trivial norm on the field of rational numbers $\mathbb{Q}$ is equivalent to the absolute value $|\cdot|_\infty$ or to a p-adic norm $|\cdot|_p$, where $p$ is a prime number. The p-adic norm is the non-archimedean (ultrametric) one and for a rational number $x = p^{\gamma}\frac{m}{n}$, where $\gamma \in \mathbb{Z}$ and $m$, $n$ are not divisible by $p$, it has the value $|x|_p = p^{-\gamma}$. Completion of $\mathbb{Q}$ with respect to the p-adic norm for a fixed $p$ leads to the corresponding field of p-adic numbers $\mathbb{Q}_p$. Completions of $\mathbb{Q}$ with respect to $|\cdot|_\infty$ and all $|\cdot|_p$ exhaust all possible completions of $\mathbb{Q}$.
A p-adic number $x$, in the canonical form, is an infinite expansion $x = p^{\gamma}(x_0 + x_1 p + x_2 p^2 + \cdots)$, where $\gamma \in \mathbb{Z}$, $x_0 \neq 0$ and $x_k \in \{0, 1, \ldots, p-1\}$.
The p-adic norm of $x$ in this canonical form is $|x|_p = p^{-\gamma}$ and satisfies not only the triangle inequality, but also the stronger (ultrametric) one, $|x + y|_p \leq \max(|x|_p, |y|_p)$.
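A small sketch (not part of the paper) computing the p-adic norm of a rational number directly from this definition, using Python's standard-library Fraction; it can also be used to spot-check the ultrametric inequality on rational examples.

```python
from fractions import Fraction

def p_adic_norm(x: Fraction, p: int) -> Fraction:
    """|x|_p = p**(-gamma), where x = p**gamma * m/n with m and n not divisible by p."""
    if x == 0:
        return Fraction(0)
    gamma, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        gamma += 1
    while den % p == 0:
        den //= p
        gamma -= 1
    return Fraction(1, p) ** gamma

# 63/550 = 2^-1 * 3^2 * 5^-2 * 7 * 11^-1, so |63/550|_5 = 25 and |63/550|_7 = 1/7.
x = Fraction(63, 550)
print(p_adic_norm(x, 5), p_adic_norm(x, 7))
```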
A metric on $\mathbb{Q}_p$ is defined by $d_p(x,y) = |x - y|_p$. This metric is a non-archimedean one and the pair $(\mathbb{Q}_p, d_p)$ is a locally compact, topologically complete, separable and totally disconnected p-adic metric space.
In the metric space $\mathbb{Q}_p$, the p-adic ball $B_\nu(a)$, with the centre at the point $a$ and the radius $p^\nu$, is the set $B_\nu(a) = \{x \in \mathbb{Q}_p : |x - a|_p \leq p^\nu\}$.
The p-adic sphere $S_\nu(a)$ with the centre $a$ and the radius $p^\nu$ is $S_\nu(a) = \{x \in \mathbb{Q}_p : |x - a|_p = p^\nu\}$.
The following holds:
Elementary -adic functions are given by the series of the same form as in the real case, e.g.
where are Bernoulli’s numbers. These functions have the same domain of convergence . Note the following -adic norms of the hyperbolic functions: and .
Real and -adic numbers are unified in the form of the adeles. An adele is an infinite sequence
where , and , with restriction that for almost all , i.e. for all but a finite set of primes .
If we introduce , then the space of all adeles is , which is a topological ring. Namely, is a ring with respect to the componentwise addition and multiplication. A principal adele is a sequence , where . Thus, the ring of principal adeles, which is a subring of , is isomorphic to .
An important function on is the additive character , which is a continuous and complex-valued function with basic properties:
This additive character may be presented as
where , and is the fractional part of the -adic number .
Map , which has the form
where is an infinitely differentiable function on and falls to zero faster than any power of as , is a locally constant function with compact support, and
is called an elementary function of . Finite linear combinations of elementary functions (13) make the set of the Schwartz-Bruhat functions . The existence of -function is unavoidable for a construction of any adelic model. The Fourier transform is
and it maps one-to-one onto . It is worth noting that -function is a counterpart of the Gaussian in the real case, since it is invariant with respect to the Fourier transform.
The integrals of the Gauss type over the -adic sphere , -adic ball and over any are:
for ,
The arithmetic functions , where , are defined as follows: ,
where -adic is given by (2), , and is the Legendre symbol. However we will mainly use their properties:
3 p-Adic and Adelic Quantum Mechanics
In foundations of standard quantum mechanics (over ) one usually starts with a representation of the canonical commutation relation
where is a spatial coordinate and is the corresponding momentum. It is well known that the procedure of quantization is not unique. In formulation of -adic quantum mechanics the multiplication has no meaning for and . Also, there is no possibility to define -adic ”momentum” or ”Hamiltonian” operator. In the real case they are infinitesimal generators of space and time translations, but, since is disconnected field, these infinitesimal transformations become meaningless. However, finite transformations remain meaningful and the corresponding Weyl and evolution operators are -adically well defined.
The canonical commutation relation in the -adic case can be represented by the Weyl operators ()
Now, instead of the relation (21) in the real case, we have
in the -adic one. It is possible to introduce the product of unitary operators
that is a unitary representation of the Heisenberg-Weyl group. Recall that this group consists of the elements with the group product
where is a skew-symmetric bilinear form on the phase space. Dynamics of a -adic quantum model is described by a unitary evolution operator without using the Hamiltonian operator. Instead of that, the evolution operator has been formulated in terms of its kernel
In this way, -adic quantum mechanics is given by a triple
Keeping in mind that standard quantum mechanics can be also given as the corresponding triple, ordinary and -adic quantum mechanics can be unified in the form of adelic quantum mechanics
is the Hilbert space on , is a unitary representation of the Heisenberg-Weyl group on and is a unitary representation of the evolution operator on . The evolution operator is defined by
The eigenvalue problem for reads
where are adelic eigenfunctions, is the corresponding adelic energy, indices and denote energy levels and their degeneration. Any adelic eigenfunction has the form
where , are ordinary and -adic eigenfunctions, respectively. The -function is defined by (14), it is an element of the Hilbert space , and provides convergence of the infinite product (32). Note that (32) has the same form as (13), but here all factors are elements of the Hilbert spaces on and of the same quantum system. In an adelic eigenstate, -adic eigenstates are for all but a finite set of primes . Hence, the existence of for all or almost all is a necessary condition for a quantum model to be adelic one.
For a fixed , function in (32) may be regarded as an element of the Hilbert space , where is a subset of adeles defined in Sec. 2. Moreover, and may be not only eigenstates but also any element of and , respectively. Then superposition , where and , is an element of .
A suitable way to calculate -adic propagator is to use Feynman’s path integral method, i.e.
For quadratic Lagrangians it has been evaluated in the same way for real and -adic cases, and the following exact general expression is obtained:
where functions satisfy properties (19) and (20), and is the classical action. When one has a system with more then one dimension with uncoupled spatial coordinates, the total propagator is the product of the corresponding one-dimensional propagators. As an illustration of -adic and adelic quantum-mechanical models, the following one-dimensional systems with the quadratic Lagrangians were considered: A free particle and a harmonic oscillator, a particle in a constant field, a free relativistic particle and a harmonic oscillator with time-dependent frequency.
Adelic quantum mechanics takes into account ordinary as well as -adic quantum effects and may be regarded as a starting point for the construction of a more complete superstring and M-theory. In the low-energy limit adelic quantum mechanics becomes the ordinary one.
4 p-Adic and Adelic Quantum Cosmology
Any real space-time manifold in standard quantum cosmology contains rational points which are dense in the field of real numbers. These rational points, or some of them, may be completed with respect to a distance induced by -adic norm on and one obtains -adic counterpart of this real manifold. Since this can be done for every , in this way, we get an infinite ensemble of (real and -adic) manifolds, which in the form of the direct product usually make an adelic space-time. This adelic space-time provides an arena for a simultaneous exhibition of real and -adic aspects of gravitational and matter fields of the same quantum cosmological model. According to the motivations (i)-(vi) stated in Introduction, it is quite reasonable to consider our very early universe as an adelic quantum system.
Adelic quantum cosmology is an application of adelic quantum theory to the universe as a whole. Adelic quantum theory unifies both -adic and standard quantum theory. In the path integral approach to standard quantum cosmology, the starting point is Feynman’s path integral method, i.e. the amplitude to go from one state with intrinsic metric and matter configuration on an initial hypersurface to another state with metric and matter configuration on a final hypersurface is given by a functional integral
over all four-geometries and matter configurations , which interpolate between the initial and final configurations. In this expression is an Einstein-Hilbert action for the gravitational and matter fields. This action can be calculated if we use metric in the standard 3+1 decomposition
where and are the lapse and shift functions.
To perform -adic and adelic generalization we first make -adic counterpart of the action using form-invariance under change of real to the -adic number fields. Then we generalize (35) and introduce -adic complex-valued cosmological amplitude
where and are the corresponding -adic counterparts of metric and matter fields continually connecting their values on and . In its general aspects -adic functional integral (37) mimics the usual Feynman path integral (for one-dimensional case, see Refs. 2 and 21). The definite integral in the classical action is understood as the usual difference of the indefinite one (without pseudoconstants) at final and initial points. The measures and are related to the real-valued Haar measure on -adic spaces, and the path integral is the limit of a -multiple integral when . There is no natural ordering on , but one can define an appropriate linear order.
Note that in (35) and (37) one has to take also a sum over manifolds which have and as their boundaries. Since the problem of topological classification of four-manifolds is algorithmically unsolvable, it was proposed that summation should be taken over algebraic manifolds. Our adelic approach supports this proposal, since algebraic manifolds maintain all rational points under the interchange of number fields $\mathbb{R}$ and $\mathbb{Q}_p$.
Since the space of all three-metrics and matter field configurations on a three-surface, called superspace, has infinitely many dimensions, one takes an approximation. A useful approximation is to truncate the infinite degrees of freedom to a finite number , (). In this way, one obtains a particular minisuperspace model. Usually, one restricts the four-metric to be of the form (36), with and as functions . For the homogeneous and isotropic cosmologies, the usual metric is a Robertson-Walker one, of which the spatial sector has the form
If we use also a single scalar field , as a matter content of the model, minisuperspace coordinates are . More generally, models can be homogeneous but also anisotropic ones, and they will be here also considered. All such models can be classified as: (i) Kantowski-Sachs models with spatial topology and
where is the metric on the two-sphere, and minisuperspace coordinates are ; Bianchi models, which are the most general homogeneous cosmological models with a three-dimensional group of isometries. The three-metric of each of these models can be written in the form where are the invariant one-forms associated with the isometry group. The simplest example is the Bianchi I model with and , and
where minisuperspace coordinates are . For the minisuperspace models, functional integrals in (35) and (37) are reduced to functional integrals over three-metric and configuration of matter fields, and to another usual integral over the lapse function . For the boundary condition , in the gauge , we have the -adic minisuperspace propagator
where is specified according to the adelic approach, i.e. and for all or almost all , and
is an ordinary quantum-mechanical propagator between fixed minisuperspace coordinates () in a fixed ’time’ . is the -adic action of the minisuperspace model, i.e.
where is a minisuperspace metric with an indefinite signature (). This metric includes spatial (gravitational) components and also matter variables for the given model. It is worth emphasizing that in the adelic approach the lapse function and minisuperspace coordinates have adelic structure. Also, constants and parameters must be the same rational numbers in and all .
The standard minisuperspace ground state wave function in the Hartle-Hawking (no-boundary) proposal, will be attained if one performs a functional integration in the Euclidean version of
over all compact four-geometries which induce at the compact three-manifold. This three-manifold is the only boundary of all the four-manifolds. If we generalize the Hartle-Hawking proposal to the -adic minisuperspace, then an adelic Hartle-Hawking wave function is an infinite product
where the path integration must be performed over both, archimedean and nonarchimedean geometries. If after evaluation of the corresponding functional integrals we obtain as a result in the form (32), we will say that such cosmological model is an adelic one.
As we shall see, a more successful -adic generalization of the minisuperspace cosmological models can be performed in the framework of -adic and adelic quantum mechanics without using the Hartle-Hawking proposal. In such cases, we examine the conditions under which some eigenstates of the evolution operator (31) exist.
5 p-Adic Models in the Hartle-Hawking Proposal
The Hartle-Hawking proposal for the wave function of the universe is generalized to -adic case in Refs. 15 and 25. In this approach, -adic wave function is given by the integral
where, according to the adelic structure of , (i.e. ) for every or almost every .
5.1 Models of the de Sitter type
Models of the de Sitter type are models with cosmological constant and without matter fields. We consider two minisuperspace models of this type, with and space-time dimensions. The corresponding real Einstein-Hilbert action is
where is the scalar curvature of -dimensional manifold , is the cosmological constant, and is the trace of the extrinsic curvature on the boundary . The metric for this model is of the Robertson-Walker type
In this expression denotes the metric on the unit -sphere, , where is the volume of the unit -sphere.
5.1.1 The de Sitter model in dimensions
In the real case, the model is related to the multiple-sphere configuration and wormhole solutions. -adic classical action for this model is
Let us note that , (), denotes the rescaled cosmological constant . Using (34) for the propagator of this model we have
The -adic Hartle-Hawking wave function is
which after -adic integration becomes
5.1.2 The de Sitter model in dimensions
The de Sitter model in space-time dimensions may be described by the metric
and the corresponding action , where . For , the equation of motion has solution , where and . Note that this classical solution resembles motion of a particle in a constant field and defines an algebraic manifold. The choice of metric in the form (53) yields quadratic -adic classical action
According to (34), the corresponding propagator is
We obtain the -adic Hartle-Hawking wave function by the integral
and as a result we get also function with the condition .
The above -functions allow adelic wave functions of the form (32) for both and cases. Since in (52) for all , it means that cannot be a rational number and consequently the above the de Sitter minisuperspace model in space-time dimensions is not adelic one. However case is adelic, because is a rational number when .
5.2 Model with a homogeneous scalar field
To deal with the models of the de Sitter type is very instructive. Although these models are without matter content, they are in quantum cosmology of such significance as the model of harmonic oscillator in quantum mechanics. However, it is also important to consider models with some matter content. In order to have a quadratic classical action, we use metric in the form
the gravitational part of the action in the form (47) (with ), and the corresponding action for a scalar field as
After substitutions: , and , , we get the classical action and propagator
As we have shown for this model, a -adic Hartle-Hawking wave function in the form of - function does not exist. This leads to the conclusion that either the above model is not adelic, or that -adic generalization of the Hartle-Hawking proposal is not an adequate one. However, if in the action (59) we take , then we get classical action for the de Sitter model (54), and such model, as we showed it, is the adelic one. The similar conclusion holds also for some other models in which minisuperspace is not one-dimensional. This is a reason to regard -adic and adelic minisuperspace quantum cosmology just as the correspondig application of -adic and adelic quantum mechanics without the Hartle-Hawking proposal.
6 Minisuperspace Models in -Adic and Adelic Quantum Mechanics
In this approach we investigate conditions under which quantum-mechanical -adic ground state exists in the form of -function and some other typical eigenfunctions. This leads to the desired result and it enables adelization of many exactly soluble minisuperspace cosmological models, usually with some restrictions on the parameters of the models.
The necessary condition for the existence of an adelic quantum model is the existence of -adic ground state defined by (14), i.e.
Analogously, if a system is in the state , where if and if , then its kernel must satisfy equation
If -adic ground state is of the form of the -function, where -function is defined as if and if , then the corresponding kernel of the model has to satisfy equation |
0087c13f9beea9d9 |
Particle in a Sphere
Particle on a sphere is one of the two models that describe rotational motion. A single particle travels on the surface of the sphere. Unlike the particle in a box, the particle on a sphere possesses angular momentum, \(J\).
The angular momentum is a vector whose direction lies along the axis of rotation. The magnitude of the angular momentum of the particle that travels around the sphere can be defined as:
\[J = pr\]
• \(p\) is the linear momentum, the product of the mass and the velocity of the object (\(p = mv\))
• \(r\) is the radius of the sphere
The faster a particle travels on the sphere, the higher its angular momentum. In other words, if we increase the velocity of the particle we get an increase in angular momentum, and a stronger torque is therefore required to bring the particle to rest. A particle of mass \(m\) is not restricted and may move anywhere on the surface of the sphere of radius \(r\). The potential energy of the particle on the sphere is zero, because the particle can travel anywhere on the surface of the sphere without a preference in location; off the surface the potential is infinite. Furthermore, the wavefunction needs to satisfy two cyclic boundary conditions: passing over the poles and around the equator of the sphere surrounding the central point. Using the Schrödinger equation we are able to find the energy of the particle:
\[E = l(l+1)\left(\frac{h}{2\pi}\right)^2 \frac{1}{2I}\]
with \(l = 0, 1, 2, 3, …\)
We also know that the energy of the rotation of the particle is related to the classical angular momentum:
\[E = \dfrac{J^2}{2I}\]
\(I\) is the moment of inertia of the particle; heavy mass in a large radius path has a large \(I\). Because energy is quantized we can assume that these two equations can be compared with each other. Therefore the magnitude of the angular momentum is also limited to the values:
\[J = \sqrt{L(L+1)}\, \frac{h}{2\pi}\]
\(L\) is the orbital angular momentum quantum number.
Considering motion in three dimensions, \(J\) has three components \(J_x\), \(J_y\), and \(J_z\) along the x-, y-, and z-axes. The angular momentum about the z-axis is quantized and takes the values \(J_z = m_l (h/2\pi)\) with \(m_l = l, \ldots, 1, 0, -1, \ldots, -l\), where \(m_l\) is the magnetic quantum number. The value of \(m_l\) is restricted by the two cyclic boundary conditions, such that for a given value of \(l\) there are \(2l + 1\) allowed values of \(m_l\).
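A short sketch (not from the LibreTexts page) tabulating the rotational energy levels \(E = l(l+1)\hbar^2/2I\) and their \(2l+1\)-fold degeneracies; the moment of inertia is an illustrative placeholder of roughly diatomic size.

```python
hbar = 1.054571817e-34          # reduced Planck constant, J s
I = 1.4e-47                     # moment of inertia, kg m^2 (illustrative, roughly diatomic)
eV = 1.602176634e-19            # J per eV

for l in range(4):
    E = l * (l + 1) * hbar**2 / (2 * I)      # rotational energy of level l
    degeneracy = 2 * l + 1                   # m_l = -l, ..., +l
    print(f"l = {l}:  E = {E / eV:.3e} eV,  degeneracy = {degeneracy}")
```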
• Atkins, Peter and de Paula, Julio. Physical Chemistry for the Life Sciences. New York, N.Y.: W. H. Freeman Company, 2006. (361-362).
• Stephen Berry, and Stuart A. Rice, John Ross. Physical Chemistry. John Wiley and Sons 1980, R. (118-119).
Contributors and Attributions
• An Nguyen
|
2a9b8ec884f81bee |
Friday, December 16, 2016
Lunar Solation Time
Since ancient times, peoples have used the winter solstice to begin each new year, but really both the sun and the moon tell time, just very differently. While Sol tells the very precise time of the atomic clock, Luna tells a much fuzzier time with her full moons due to the entanglement of moon and sun orbital motions. The precise time of Sol contrasts with the fuzzy Luna time, and we must somehow integrate both ways of telling time.
The solstice of 2018dec21 at 2:23 pm pst happens to be within a day of the full moon of 2018dec22 9:49 am pst. Such an alignment of the winter solstice and full moon only happens once every nine years and reflects the nine year lunar solation Sar cycle. Correspondingly, the Easter moon 2019mar 21 also coincides with the vernal equinox 2019mar20. The nine-year Sar cycle is the period that relates alternating lunar and 18-year Saros solar eclipses. The Chaldeans discovered the Sar cycle 2600 years ago by careful observation and recording in the ancient city of Babylon in Iraq.
Science declares that there are exactly 86,400 seconds in each day and science declares the length of the second is exactly 9,192,631,770 cycles of the Cs-133 atomic clock hyperfine resonance at 9.2 GHz. But the number of days in each lunar cycle varies, and the length of a lunar month varies by half a day, from 29.3 to 29.8 days. Over an eight-solar-year cycle of 99 moons, there are 7 lunar years of 14 moons each plus one leap moon to make 99 moons. There are 14 full moons for each lunar year, but the lunar and solar years only approximately rephase about once every 8 solar years of 365.25 days each and 7 lunar years of 14 full moons each, as the figure shows. This is not the most important issue facing civilization right now... that I grant you. It just seems such a crying shame that the simple relationship of 99 moons to 8 solar years does not get much press.
The decay of the Kronos atomic time of the solar day is well known and seems to contrast with the equally well known chaos of the Kairos time of the lunar month. But really, these two time dimensions simply reflect the confusion that comes from the duality of matter and action and the duality of discrete quantum versus continuous classical action.
The 2019 solstice, within one day of a full moon at the minimum of the lunation period, starts the 99-moon period of 2019-26. The Easter moon on 31mar2018 is moon#91 of the previous 99-moon cycle and celebrates the birth of Spring as the first full moon following the 2018mar20 vernal equinox. The Easter moon 2019mar21 coincides with the 2019mar20 vernal equinox just like the solstice aligned with a full moon. The lunations around the moon#99 start always involve more solar and lunar eclipses, since the sun and moon spend more time at near alignment.
The Pleasure of Discovery Means Living Better and Not Just Living Longer
Living longer is not always the same as living better, and there is a 2016dec report that shows life expectancy in the U.S. has decreased, by a mere 0.1 year, for the first time since the AIDS epidemic. According to an NPR report, 28,000 more people died in 2015 as compared to 2014, and a CNS report suggests that the introduction of the U.S. Affordable Care Act may be at least partly responsible for these deaths.
However, living longer is not necessarily desirable if living longer means living in some chronic misery like the fog of dementia, and so living longer is just one aspect of a desirable future. Feeling better, in the sense of psychological well-being, increases 5-year survival rates from 65% to 78% for a sample of 6,030 older adults, for example.
<Maintaining Healthy Behavior: a Prospective Study of Psychological Well-Being and Physical Activity, Kim, E.S., Kubzansky, L.D., Soo, J. et al. ann. behav. med. (2016). doi:10.1007/s12160-016-9856-y>
A desirable future should include at least two other measures besides just living longer. Another important measure is, for example, the pleasure of discovery and compassion for others and yet another measure is earning the means and the resources for that discovery and compassion as well as to pay for sufficient health care when needed. Both discovery and means are just as important for a desirable future as simply living longer. The human development index, HDI, includes the trimal of living longer, discovery, and per capita GDP among a number of other factors. In fact, any measure of a desirable future in a civilization should include at least these three metrics since living well is much more than just living longer.
So before going off on some singular tangent and putting more money into a healthcare system that is already very expensive, the U.S. should also consider how to increase the overall desirability of life and not just its length. People keep moving up through the inexorable demographic percentages, and so older people are especially sensitive to the message of not just living longer, but living better as well.
Without the selfish pleasure of discovery, a compassion for others, and some minimum skills and means, simply living longer is less desirable. The friends and family that we have, the pleasure of further discovery, and the ability to afford all of that are what determine purpose, not just living longer.
Tuesday, December 6, 2016
Math Laws and Observer Wandering
The observer holds a very important role for science since observation is key to the successful predictions of science. Yet there exists a dichotomy in science between the reality for two different observers; reality for a classical observer versus reality for a quantum observer. This conundrum is very deeply embedded into science today and is the reason that there is no common basis for gravity and charge, which is called the hierarchy problem in science.
A fundamental difference between gravity and charge is that while there is always an exactly knowable cause for every effect of classical gravity, there is not always an exactly knowable cause for a quantum effect. Quantum states can exist as superpositions of amplitude and phase while amplitude and phase have no meaning for classical states. As a result, a classical observer sees a different reality from a quantum observer. Below is an example of the two different observers who intend to wander through one of two doors and bond to a source on the other side. The classical observer’s goal is never really precise because footsteps are not precise, but the classical observer does end up bonded to a source in one place or the other and also remembers which door they came through.
The quantum observer has many more possible futures and yet may still not be able to remember the actions of exactly which door they actually took or even why they chose the door they chose.
An observer wanders toward the goal of bonding to a source by using two fundamentally different math laws to predict the future of that source. A classical observer predicts a determinate albeit somewhat chaotic path for a goal with the science of general relativity. In contrast, a quantum observer predicts many possible entangled paths and goals for quantum bonding to the same source. Each footstep involves quantum and gravity bonding and debonding until the final step bonds to the goal. The noise of classical chaos for macroscopic action usually masks the decoherence of quantum phase noise and so a classical observer can argue endlessly with a quantum observer about the uncertainty of determinate macroscopic action like a footstep. However, microscopic action can often show very little classical noise and therefore it is in the microscopic domain that quantum observers wander toward the mysteries of many possible quantum goals.
For any macroscopic person, quantum phase noise is a very small fraction of classical chaos noise and so a person’s quantum interference pattern given two possible futures is very short range. This just means that the quantum observer’s goal is nearly as imprecise as the classical observer even though a quantum observer still includes many more possible futures due to the coherence of quantum phase noise. For a pure quantum observer, phase noise dominates over classical chaos and as a result, a quantum observer may not remember which door exactly, just which door was most likely. In fact, quantum phase noise is still what makes the nature of neural choice a mystery.
It is very ironic that, given classical cause and effect, it is the classical observer that points the arrow of time with the determinate paths of GR geodesic sources, where choice is no mystery. The quantum observer wanders toward many more possible quantum goals along indeterminate paths and some quantum paths actually go back in classical time and exactly reverse the classical action of a source. This confuses a quantum observer about the arrow of time while a classical observer is never confused about cause and effect and the arrow of time.
Entropy is a measure of randomness and points a classical arrow of time since a classical observer calculates just one entropy as the straightforward logarithm of all possible future states, S = ln w. There is only one classical entropy and classical entropy always increases and therefore reliably points the arrow of time. Classical entropy is the same for both matter and action and since there are always more possible futures in the constant mass of an expanding universe, the increasing entropy of an expanding universe action points the arrow of time.
Classical entropy does still confuse classical observers, though, since the universe actually seems to evolve into more organized and lower entropy states despite an overall increase in entropy with the arrow of time. Stars by and large form from the chaos of hydrogen gas, galaxies form from the chaos of stars, and life forms from the chaos of carbon, nitrogen, phosphorus, and water.
A broken egg never reassembles itself into a whole egg and that is a statement of the inexorable increase of classical entropy. The chicken that produced the egg, though, continues to evolve as a species and so the futures of chickens and eggs are all affected by any one egg that has broken. Since the broken egg did not result in a new chicken, it is the eggs that hatch into chickens that drive a decrease in classical entropy that we call life's evolution.
In contrast to a classical egg, which is always either whole or broken, a quantum egg also exists for some very short time as a superposition of unbroken and broken states. A classical observer really calculates two different and opposite entropies since there is an increasing entropy for a breaking egg along with a decreasing entropy for evolving chicken species and their eggs. This means that the decreasing entropy of shrinking mass complements the increasing entropy of growing action, and so the classical entropy Suniverse = ln waction − ln wmass = ln (waction / wmass) ~ 0.
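A minimal numerical sketch of this bookkeeping, taking the formula above at face value; the proportionality between the two state counts is an assumption made here purely for illustration, to show that the log of their ratio stays near zero compared with either entropy alone.

```python
import math

# If the action and mass state counts track each other, ln(waction / wmass)
# stays of order one (here a constant ln 2) no matter how large each count
# grows, while either entropy alone, ln(w), grows without bound.
for w_mass in (1e10, 1e20, 1e30):
    w_action = 2.0 * w_mass          # proportional counts; the factor 2 is arbitrary
    s_universe = math.log(w_action / w_mass)
    print(f"ln(w_mass) = {math.log(w_mass):.0f}, Suniverse = {s_universe:.2f}")
```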
Since classical mass is constant and does not show quantum phase decoherence, there is no classical meaning for a mass entropy different from an action entropy. For quantum gravity, though, the entropy of discrete aether flows from action to mass and that entropy and aether flow are what drive the universe into more organized states with lower entropy and so the classical universe entropy is as expected near zero. Just like the flow of classical entropy, it is the flow of quantum information that determines the arrow of time in a collapsing mass and expanding action universe.
This means that the universe mass actually collapses even while the universe action grows and it is the increasing entropy of growing action that drives the decrease of entropy for the universe of collapsing matter into ever more organized states. Despite the chaos of classical noise and the breaking of an egg, it is actually the ever present decoherence of quantum phase noise that actually keeps clocks ticking in the right direction and keeps eggs breaking and not unbreaking themselves.
Unlike the reversibility of classical time, it is the time of quantum phase decoherence that keeps the quantum ∆Suniverse > 0 and so classical entropy does not actually point the arrow of time after all. Two quantum clocks that begin ticking in phase will eventually dephase even without the chaos of classical noise and it is the quantum decoherence of past matter that then makes for more coherent future action of increased order.
The mindless noise of classical chaos contrasts with the mindful coherence of quantum phase noise. Ironically, quantum phase noise leads both to indeterminate futures and to the flow of decreasing matter entropy into the increasing entropy of action. Although thermodynamic and quantum laws both depend on increasing classical action entropy driven by chaos, quantum laws also show a fundamental coherence between matter and action that is the quantum phase noise of decreasing entropy.
While there is always a classical cause for the chaos of classical noise, there is no classical cause for any quantum phase noise. Quantum phase noise means that an observer phase in one part of the universe is coherent with a source phase located across the universe, which is a decrease in entropy. The action of decoherence means that these two events will eventually dephase due to quantum phase noise even in the absence of any classical noise of chaos. This means that while classical actions all have knowable causes because the classical noise of chaos is in principle knowable, there are only just mostly knowable causes for quantum action and never a completely knowable cause due to quantum phase noise.
Chemical crystallization is a very common process that occurs when a seed of matter nucleates from solution and a crystal grows from that nucleated seed by progressive bonding as a crystal replicates itself from the dissolved species. This happens in many different solutions in many different ways but it is from the aqueous solutions of a primordial goo that life has crystallized. A recursive cycle of chemical concentration as a result of evaporation of water, seeding, crystallization, and redissolution by rehydration, occurs for example, in ancient seas whose memories are now in the layers of rather pure salt deeply buried on land.
Given the free energy of evaporation and recondensation, crystals naturally and recursively seed, grow, and redissolve just as life recursively seeds, grows, and dies. In the cosmos, stars likewise recursively seed hydrogen, grow by fusing some hydrogen into heavy elements, and then redissolve back into the cosmic dust of future stars. Galaxies also seed matter, grow by fusing that matter into the spin of black holes, and then decohere or redissolve back into the actions of a future universe.
Thus chemical replication is a natural process driven by free energy that occurs with the actions of nucleation or seeding along with replication or growth followed by redissolution in recursive cycles of dissolution, seeding, growth, and redissolution. This recursion moves matter from the chaos of quantum solutions into the order of quantum bonds. The emergence of life is then simply a consequence of just such a recursion that involves the seeding, growth, and dissolution of the phosphate esters of a natural chemical called adenosine. Adenosine is a natural molecule with a five-carbon sugar (ribose is made from five CO2's and five waters) bonded to a nitrogen-containing aromatic ring (adenine is made from five hydrogen cyanides) and all these species exist in the primordial goo of creation's ocean. Science often sees these precursors of life in the spectra of starlight and so these species exist in the condensed oceans of planets as well.
Given free energy, adenosine with plenty of phosphate around is then the seed that replicates itself and forms naturally occurring phosphate polymers of ATP to ADP to AMP (adenosine triphosphate, diphosphate, and monophosphate) that harvest and store large amounts of chemical energy from available free energy of phosphate food and water evaporation. The chemical energy of the phosphate bond is sufficient to not only replicate itself, but also to fix CO2 and nitrate by deoxygenating water into polymers from other goo that replicates with repeated cycles of reoxidation into the highly organized goo that we call life. The chemical energy of ATP is also sufficient to evaporate and condense its own water as well.
The sun drives much of life on earth’s surface and life converts the solar energy of photons into ATP and then uses ATP to fix CO2 and nitrates into the polymers of life by deoxygenation of water. However, hydrogen sulfide, H2S, from the primordial thermal energy of earth also drives life in deep sea vents and thermal ponds where ATP forms from the chemical energy of H2S as well. In the deep sea just as on the surface, life uses ATP from H2S to fix and distill carbonate and nitrate polymers with the same deoxygenation of water that distills life on earth’s surface.
Thus science has found that certain mindless mathematical laws and chemical reactions of the inanimate universe have distilled and continue to distill life from the primordial goo of creation...all by quite natural processes. Another product of that distillation is the neural actions that allow life’s observers to wander toward a goal with the aim and intent of the mindful choices of quantum bonding and replication.
Since gravity is so very weak compared to charge, there are a very large number of quantum gravity action states that represent a huge source of information or entropy. Quantum gravity results in a complementary huge decrease in entropy by matter flowing to matter’s action. As the order of sources increase and the entropy of matter decreases, information or entropy flows as mindful quantum aether distills or fractionates matter order from the increasing entropy of the growing aether action that science calls the mindless universe. The mindful action of decreasing matter entropy and increasing source order ironically emerges from the increasing entropy of mindless action.
The neural action of memory is an inevitable consequence of the carbon-nitrogen-water replicators that we call life. A very large number of gravity quantum states provides the increasing quantum entropy of action that drives the decreasing quantum entropy of matter. The large number of neural states of life likewise provides a tremendous reservoir of information and entropy of action for the organization with decreasing entropy that we call cooperative civilization.
We call the intent and aim of neural action mindful while we call the intent and aim of classical action mindless because classical action lacks neural choice and therefore mindful consciousness. The mystery of consciousness is still too hard for science because the free choice of quantum consciousness makes no classical sense. A determinate classical universe drives all classical choice from the knowable chaos of classical noise and so it is classical chaos that provides the classical illusion of free will and free choice. An indeterminate quantum universe with both knowable classical chaos as well as unknowable quantum phase noise has no completely determinate future and there is quantum free will and free choice even without chaos.
Whether you believe in the determinate illusion of free will or in the actual mystery of quantum free will, you still have a personal responsibility for any choice that you make. Determinate intention is just the belief that there are no unknowable mysteries in the universe and that all knowledge that exists is in principle knowable, just not yet known. Knowledge is a neural memory of events and classical knowledge therefore has no limits. However, quantum knowledge has limits and quantum intention is the belief that even though we can know much about the universe, there are inexplicable mysteries that are beyond knowing. A certain quantum knowledge does exist, but science can never fundamentally know it without uncertainty, which means that no one can have a neural memory of such quantum knowledge.
In other words, there are some things in which we must simply believe since some knowledge is beyond measurement.
For example, we can ask the three whys: why we are here, why we are here right now, and why it is us who are here right now and not someone else. However, there are no answers for the three whys because that is knowledge that is unknowable. We simply have to believe that we are here, that we are here right now, and that it is us and not someone else who is here right now. We also must simply believe in the duality of matter and action (or some other conjugate pair) and from believing this simple duality, we can then understand what is possible to know.
In conclusion, matter and action represent a fundamental duality of the universe and a neural mind is made up of both the matter of neuron memories and the action of neural potentials. Instead of the overly simplistic duality of just mind and body or spirit and material, the universal duality of matter and action is true for all observers and sources and even for the mindless mathematical laws of neural memory and action. There is no sense in separating the universe into mindless mathematical laws for the actions of sources versus mindful observer aims and intentions. We can only know that the decreasing entropy of increasing source order flows to the increasing entropy of decreasing observer action, but we must simply believe in the mystery of that action.
<essay entered into FQXi contest...but has since evolved into a lower entropy state>
Sunday, November 20, 2016
Quantum Phase Noise
In the aether universe, there are still no absolute locations like a single center of the universe and that center was one of the original ideas about aether. However, the CMB velocity does provide a universal frame of reference for motion or action since anyone in the universe can measure their velocity, vae, with respect to the CMB, which is the velocity of creation. The CMB creation velocity defines the speed of light in the current epoch of aethertime and that velocity vae increases with decoherence time. There is a quantum phase noise, d, due to the universal decay of quantum aether at aether velocity along with growing force due to increasing speed of light.
The absolute determinate geodesic paths of general relativity remain determinate with the biphoton quantum gravity of aethertime. However, action and matter along the gravity geodesic are uncertain just as the paths of quantum charge are never completely certain even though quantum aether paths are mostly knowable. In fact, the paths of all sources in the universe are perturbed by both the chaos of classical noise as well as quantum phase noise. The classical noise of the chaos of intensity fluctuation is usually many orders of magnitude greater than the coherence of quantum phase noise.
While classical noise is largely responsible for the entropy that is the arrow of classical time, it is the decoherence of quantum phase noise that sets the arrow of decoherence time for microscopic matter and therefore of all matter as well.
Reference: Original figure from Blumschein.
Sunday, November 13, 2016
Getting from Here to There
A quantum event occurs when an excited source photon is in resonance with and therefore goes on to excite an observer with that same photon. While science approximates such quantum transitions or jumps as instantaneous, that approximation is not really true even though it is often quite useful. In other words, getting a photon from here to there does take time and there are no instantaneous photon transfers.
One very common classical approximation of a quantum event is to have an excited source photon excite a classical observer in a completely separate second event, long after the photon has traveled through space following a first and separate source emission event. This is only an approximation; for a quantum observer, the same photon excites the quantum observer during the same event as the source emission.
A second classical approximation occurs when both source and observer are excited with very long wavelength photons. For the very special case of very long wavelength gravity biphotons, the two complementary gravity excited states remain in phase coherence because gravity phase coherence decays very, very slowly.
For single photons, an excited quantum source and observer are coupled by both phase as well as amplitude as the figure shows. Quantum photon travel is then simply a matter of phase between source and observer and a photon event creates a transient resonant bond between the observer and source. It is not really the photon that journeys through space and time, it is the action of the photon event that exchanges mass between source and observer during the same event just with different phases.
Quantum gravity between the two hydrogen atoms shown involves the complementary exchange of the biphoton excitations that exist in each atom. Unlike the relatively short wavelength of the Rydberg photon at 13.6 eV, the very long wavelengths of complementary gravity biphoton excitations mean that phase decay is very slow. Thus the very slow phase decay of quantum gravity means that classical gravity does not need to include phase for precise predictions of quantum action.
A photon event can be over in a few nanoseconds and nanometers or a photon event can last the age and radius of the universe. Now to be sure, a source can dephase from a photon event long before the photon excites an observer. However, phase decay is simply a part of how the universe points the direction of time and does not change the fact that there is some period of phase coherence between source and observer. Because the phase decay of quantum gravity is so very slow, classical gravity works very well up to very large scales.
Thus a classical photon either excites an observer without retaining any of the phase coherence of the excited source emission or else never loses phase coherence, while phase coherence between excited source and observer quantum photons necessarily decays. Indeed a quantum resonance can actually end up with the excitation largely back at the source and not lost to the observer at all. Even such a failed photon transmission has still generated phase coherence between the source and observer and therefore has changed source and observer entropy. In this realm, entropy alone drives quantum information transfer instead of total free energy transfer. Only a very small fraction of the photon free energy is in its entropy.
In a classical approximation for a quantum state-to-state transition, there must be a series of vacuum states that span the gap between two states. In aethertime, the density of states of quantum gravity biphotons in space is very large and more than provides the needed laddering for filling the gap. Similar to phonon decay in the solid state, gravity vacuum modes provide the mechanism to bridge the gap. These high order quantum gravity states are then what carry photon amplitude and phase and replace the vacuum oscillator modes of QED.
In quantum gravity, both source and observer exchange complementary biphoton excitations with each other. So a quantum gravity resonance always involves the exchange of complementary phase coherence between observer and source. This means that quantum gravity phase coherence between a source and observer always decays very, very slowly.
Saturday, November 5, 2016
Hydrogens' Gravity and Dispersion Spectra
Although the spectrum of the hydrogen atom has been known for over a century, atomic hydrogen's dispersion spectrum is not as well known and hydrogen's gravity spectrum has not yet been measured at all. This is because unlike the single photon exchanges of charge force, dispersion and gravity forces involve two photon exchanges and are much smaller and so their quantum energies and cross sections are therefore much more difficult to measure.
Dispersive or dielectric forces are the dipole-induced-dipole attraction of neutral matter and scale as the product of ionization energy, polarizability², and 1/r⁶. Thus dispersion is the result of the complementary exchanges of two photons and not just one photon as in charge force, and so dispersion is always attractive, just like gravity. The dispersion observer is just as excited as the dispersion source with dispersion photons. Similar to dispersion, gravity also represents the exchange of two photons, but now with the CMB creation wrapped photons, not local photons. As a result, gravity is then just the ultimate dipole-induced-dipole variant of dispersion.
Thus a gravity bond energy is GmH²/r, which of course in aethertime is just scaled charge energy, q²c²·1e-7·tB/Tu/r, that is, charge energy scaled by the dimensionless size of the universe, tB/Tu, the ratio of the Bohr orbit period to the orbit period of the universe. Note that the hydrogen atom mass no longer appears in the gravity energy of two hydrogens; instead, the gravity of two hydrogens is just the square of the product of charge and the speed of light. In other words, the amplitude of the dipole energy qc is what determines both charge and gravity forces as well as the in-between dispersion force.
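One part of this is easy to check with standard constants: because μ0/4π is 1e-7 (exact in the pre-2019 SI), the prefactor q²c²·1e-7 is just the Coulomb energy coefficient q²/4πε0. The further tB/Tu scaling down to gravity is the post's aethertime claim and is not checked here; the sketch below only confirms the prefactor identity.

```python
import math

# q^2 * c^2 * 1e-7 equals the Coulomb energy coefficient q^2 / (4*pi*eps0),
# since 1/(4*pi*eps0) = (mu0/4pi) * c^2 = 1e-7 * c^2.
q = 1.602176634e-19      # C, elementary charge
c = 2.99792458e8         # m/s, speed of light
eps0 = 8.8541878128e-12  # F/m, vacuum permittivity

post_prefactor = q**2 * c**2 * 1e-7
coulomb_prefactor = q**2 / (4 * math.pi * eps0)
print(post_prefactor, coulomb_prefactor)   # both ~2.31e-28 J*m
```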
The dispersion radius of two hydrogens is where dispersion and gravity energies are equal and is at rd = (3/4 EH a² / G / mH²)^(1/5) = 70 nm, where a = 3πε0rB³ is the hydrogen atom polarizability. Two hydrogens in circular orbits do not radiate quadrupole gravity waves and so there needs to be other particle exchanges to further cool and condense atomic into solid molecular hydrogen. The gravity biphoton condensation of atomic hydrogen into stars is of course the basis of the single photon emission that lights the universe.
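As a numerical sanity check, the short sketch below evaluates the dispersion-radius formula exactly as written above, using the post's own polarizability definition a = 3πε0rB³ (this is the post's convention; the textbook hydrogen polarizability volume is about 4.5 rB³). Plugging in standard constants reproduces the quoted value of roughly 70 nm.

```python
import math

# Dispersion radius where the post's dispersion and gravity energies are
# equal, following the formula above literally.
eps0 = 8.8541878e-12     # F/m
rB = 5.29177e-11         # m, Bohr radius
EH = 13.6 * 1.602e-19    # J, hydrogen ionization energy
G = 6.674e-11            # m^3 kg^-1 s^-2
mH = 1.6735e-27          # kg, hydrogen atom mass

a = 3 * math.pi * eps0 * rB**3                 # the post's polarizability definition
rd = (0.75 * EH * a**2 / (G * mH**2))**0.2     # fifth root

print(f"dispersion radius ~ {rd * 1e9:.0f} nm")   # ~67 nm, close to the quoted 70 nm
```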
The dispersion limit is then where the dispersion radius exceeds the product of body radii as rd > 1.44e5 (r1r2)^(3/5), which is roughly 144,000 times the body radius product to the 3/5th power. The moon Io of Jupiter has just 3e-5 of its gravity energy as dispersion while earth's moon has just 1.6e-4 of its gravity as dispersion energy. Dispersion energy is a small but significant part of most gravity orbits and the heat generated by dispersion energy is part of the radiant flux from each orbiting body as well.
Sunday, October 23, 2016
Being and Doing in the Two Dimensions of Time
Monday, September 19, 2016
The EEG Mind
1) We are all born with free choice and learn inhibitions in stages over time from infancy through childhood and finally as adults. Without this development of inhibitions, a different free choice emerges that may not resonate with other minds. Just as we learn different languages as we grow up among people, we also learn different inhibitions depending on the people that we grow up with.
2) There are two main parts to the brain: the connectome or primitive subconscious brain made up of the cerebellum, amygdala, hippocampus, caudate, putamen, and thalamus, and the rational conscious brain of the cerebrum, where thought largely resides as aware matter.
3) Emotion, feeling, free choice, autonomic functions, instinct, and long-term memory are all largely functions of the connectome of the primitive mind. The excitation or inhibition of action comes from the primitive mind and the amygdala, but free choice is influenced by memory and rational thought. Long-term memory is a function of the primitive mind along with morality and the feeling of right and wrong. The connectome is the basic neural framework that sets the resonances of the EEG and is what keeps us breathing, our heart beating, and our digestion working.
4) There is a set of complementary emotions that define a singular feeling and it is that feeling that either excites or inhibits action of the amygdala. One such set of emotions is; pleasure versus anxiety, compassion versus free choice, joy versus misery, serenity versus anger, and pride versus shame. Although emotion and feeling are really more complex than this simple set of five complements, this simple set of five is consistent with many neural measurements and therefore a convenient simplification of the emotion complexity.
5) The moments of thought that end up as free choice are largely part of the rational mind along with short-term memory. There are about 50,000 moments of thought in each waking day of experience and each moment of thought may be as much as 15 MB of digital equivalent as Hopfield neural network packets. The mind stores these neural packets of information in a phased array of resonant aware matter that makes up the amplitudes and phases of EEG spectra for the experience of a day. Our conscious mind is the music that we play every day on the keyboard that is the connectome of the primitive mind. A quick arithmetic check of these figures appears just after this list.
7) Each person develops a set of personality traits as they interact with other people as either social bonds or social conflicts. The five-factor model shows people as creative vs. conformist, social vs. individual, conscientious vs. impulsive, agreeable vs. assertive, and confident versus anxious. Typically people respond to a series of questions that then ranks them on each personality trait complement.
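As noted in item 5 above, here is a quick back-of-envelope check on those figures; the 16-hour waking day is an assumption made here, while the 50,000 moments per day and 15 MB per moment are the post's numbers.

```python
# Rough bookkeeping for moments of thought over one waking day.
moments_per_day = 50_000    # post's figure
mb_per_moment = 15          # post's figure, MB of digital equivalent
waking_hours = 16           # assumption for this check

seconds_per_moment = waking_hours * 3600 / moments_per_day   # ~1.2 s per moment
gb_per_day = moments_per_day * mb_per_moment / 1000          # ~750 GB per day

print(f"~{seconds_per_moment:.1f} s per moment, ~{gb_per_day:.0f} GB per waking day")
```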
The rational EEG mind resides mainly in the outer two cerebral hemispheres that surround the structures of the primitive brain and so are what the typical EEG spectrum measures. The EEG neural resonances are electrical and mainly measure the outer layers of the cerebrum and not the inner primitive mind of deeper layers. However, the various features of the EEG spectrum do reflect the basic resonances of the connectome of the primitive mind.
The outer rational mind of the cerebral hemispheres surrounds the structure of the inner primitive mind as shown below. The conscious mind resonates with the EEG spectral features that reflect both structures and the special region of the cerebral homunculus is what gives us a sense of ourselves.
The figure above shows the primitive mind in grey; the primitive mind handles all of the autonomic functions of the brain including long-term memory, choice, motor control, hormones, and emotion. The figure below shows the parts of the primitive brain that integrate with both the cerebellum of the primitive mind and the cerebrum of the rational mind. There are three cerebellar homunculi and not just one, and so the primitive brain has three different selves. The connectome of the primitive brain then determines the resonances of the rational brain, but the amplitudes and phases of the moments of thought can be quite complex.
With this set of assumptions in place, the EEG mind theory associates well known resonances with various spectral features and structures of the primitive mind. The delta mode at 1.6 Hz is the resonance from which all aware matter forms moments of thought and the delta mode connects the rational and primitive brains. The basic molecule or mode of aware matter is the first octave at 11 Hz, the alpha modes at 7 times the delta frequency. The second octave of alpha is beta at 22 Hz, the basic modes of excitation and inhibition of free choice. Free choice depends on the phase of the alpha dimer and there are even higher order resonances called gamma modes up to the cut-off frequency of the neural action potential at 350 Hz.
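For reference, the mode ladder described above reduces to a few lines of arithmetic; all of the frequencies and the 7x and octave relationships are the post's own numbers, not established EEG conventions.

```python
# Frequencies of the EEG-mind mode ladder, using the post's numbers.
delta = 1.6            # Hz, base resonance of aware matter
alpha = 7 * delta      # 11.2 Hz, first octave (the post rounds to 11 Hz)
beta = 2 * alpha       # 22.4 Hz, excitation/inhibition modes (post: 22 Hz)
cutoff = 350           # Hz, neural action-potential cutoff for the gamma modes

print(f"delta {delta} Hz, alpha {alpha:.1f} Hz, beta {beta:.1f} Hz, gamma up to {cutoff} Hz")
```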
Sunday, September 4, 2016
Quantum Aether
There is a fundamental duality between the discrete aether that makes up the universe and the discrete quantum action of the universal Schrödinger equation that drives the universe into many possible futures. The Schrödinger equation relates the action of quantum oscillation to the amplitude of quantum matter with a π/2 or 90 degree orthogonal phase, the mysterious i = √−1 of Euler's relation, i = e^(iπ/2). The Schrödinger equation means the oscillation of quantum action is orthogonal to the oscillation of quantum matter and it is from this phase relation that what happens in the universe happens.
While discrete quantum aether is what makes up all quantum matter, discrete quantum action describes how sources change and it is from equating the differential of discrete matter and action that time and space emerge as dimensionless ratios. Time emerges from the period of electron spin while space emerges from the radius of electron charge. Direction is then simply a difference in spin phase between a source and observer and is from where the three spatial dimensions emerge.
Instead of time and space existing a priori, time emerges from the electron orbit period and space emerges from the electron charge radius. There are two key dimensionless ratios for time: tau, atomic time from the period of an atomic clock, and big Tu, cosmic time from the period of the universe pulse. This same duality of discrete aether and discrete action is what collisions between sources and observers are all about. In a collision, a source bonds to or scatters from an observer with the exchange of a photon, but that collision must also involve the loss of other particles, like heat as the recoil momentum, in order to bond source and observer.
Moreover, each photon emission today in atomic time has an entangled twin emitted at the CMB creation in cosmic time and the exchange of these photon twins or biphoton is responsible for gravity. Unlike the single dipole photon whose exchange bonds charge or the multiple photon exchange that bonds with dispersion, an entangled biphoton exchange bonds neutral matter as gravity once recoil momentum is lost. The ever much weaker force of gravity conforms to classical and causal statistics where there is a local cause for every effect of gravity.
Single or multiple photon exchanges represent the very much stronger quantum charge and dielectric forces that conform to quantum statistics, with space and time emerging from such quantum action. Biphoton exchange represents the very much weaker primordial gravity force that conforms to classical statistics, and that gravity distorts or curves emergent space and time. As a result, quantum entangled actions can appear nonlocal according to gravity and even simultaneous across the universe, but that is simply a result of the emergence of space and time from quantum action.
Space and time both emerge from the action of aether and that emergence challenges the notions of locality and simultaneity, but quantum is simply how the universe works.
The classical universe of spacetime has the primal truths or axioms of space and time, which are indeed very good ways of keeping track of objects and making predictions about object futures. However, space and time do not describe all of the universe and with quantum aether, the primal truths or axioms are discrete quantum aether and action. Space and time simply emerge from the discrete quantum action of discrete aether and this duality represents the universe.
Even though photon entanglement from the present to the CMB creation makes no classical sense at all, these entangled biphotons are the gist of quantum gravity. Just like dipole charge force is the exchange of single photons between source and observer, gravity is from the exchange of biphotons and quadrupole gravity emerges along with space and time from biphoton exchange. The sun and earth bond with the exchange of very long wavelength biphotons with a frequency of 1/yr. In fact, every photon of charge force has its biphoton complement at the CMB creation of cosmic force and so the photon exchange of charge force also includes the biphoton exchange of gravity force.
Sunday, August 14, 2016
Photon Free Choice
Many people in science believe that free choice is an illusion since they first of all believe in a determinate universe where all choice is fate or karma. Since we actually live in a quantum universe with a fundamental quantum uncertainty, there is no such thing as predetermined choice. In an uncertain quantum universe, quantum science believes in free will because we all have uncertain quantum futures. You see, all of those same determinists necessarily also believe in personal responsibility as well and they also believe that certain beliefs are not illusions.
Belief is a necessary starting point for consciousness as well as science and there are certain axioms in which people must simply believe in order to make sense out of the universe. Typical beliefs are in space, time, and matter and from those simple beliefs emerge the actions of all other constants and consciousness. A belief in space and time then means that action follows from the integral of either space or time with mass, which is equivalent to energy of course. However, it is also possible to believe in matter and action first of all and then have space and time emerge from matter and action.
Except for those insane or comatose, determinists somehow believe that the universe creation of the cosmic microwave background (CMB) a long time ago decided each person's fate today, that it was the CMB creation that predestined the way that people are, and that there is therefore no meaning for free choice. In this view, each person's destiny stems from CMB creation and their choices are all simply determined by what caused each choice.
Yet many of these same determinists in science believe even more so in quantum uncertainty and quantum uncertainty means that the quantum future, while mostly predictable, is never precisely certain. In fact, quantum uncertainty means that no matter what happened at CMB creation, each action after the action of creation was as uncertain way back then as the future action today from the present action.
The CMB represents the face of creation and this Mollweide plot shows the whole sky all around us, side-to-side as well as up and down. The CMB shines on us as the source of all that we are at 2.7 K and blue, yellow, and red colors of the Mollweide plot represent very minute temperature variations (~50 ppm) from the average 2.7 K. This plot follows after correcting for our absolute motion and everyone in the universe can see different versions of this same CMB. Creation has always been a big deal for religions and let there be light has long been a classic story of a source shining from the nothing of space to observers on earth.
The ancient light of CMB creation continues to shine on us today at 2.7 K with the same uncertain future just as we shine our uncertain futures back onto the CMB creation as well at ~300 K. The shine that we exchange with the CMB binds us to creation and the CMB shine is still as uncertain today as it was at creation. It is the uncertainties in the shines of all sources and observers that represents the basic uncertainty of photon free will. We freely choose action or inhibition based on feeling and feeling is a very low energy and low temperature binding of neural packet shine in our mind subject to the same quantum uncertainty as shine from all other sources.
While the gravities of both Newton and Einstein show determinate futures based on cause and effect, the true nature of quantum gravity involves the exchange of biphotons. For classical gravity action, the average future is the same as the most probable future, but for quantum action, the average future is different from the most probable future. The notion of cause and effect is the foundation of a causal universe and is the basis of the determinism of general relativity as well. All sources and observers follow well-defined geodesic paths where their most probable futures are the same as their average futures. So quantum gravity largely represents the gravity of general relativity up to certain limits.
Even though quantum phase is still a part of the action of quantum gravity, the complementary phases of biphoton gravity make it seem like determinate geodesic paths drive all futures. The most probable futures of a charged source and observer with quantum phase are not the same as their average futures. While a gravity source and observer bond with complementary biphoton exchange, a charge source and observer bond with just single photon exchange.
Science has long known that the action of photon exchange does not commute between a quantum source and observer. This means that the shine of a source photon absorbed by an observer results in an action different from the same photon shine emitted from the source. That difference in action is the quantum action of the Planck constant and the phase factor, ihae, and it simply means that action and matter are inextricably linked to each other.
A person choosing between two otherwise equivalent doors forms a neural superposition state in their mind of those two choices. That superposition only exists for some short period of decoherence of that thought and the same superposition represents a single photon path through each of two slits. Once a person chooses and acts by walking through one door or the other, only a remnant of that neural superposition of choice remains as memory. The cause for that choice is not always possible to understand and when the primitive mind chooses action, excitation or inhibition, then the conscious mind rationalizes the choices that the primitive mind has made.
A photon path exists as a superposition of two paths as two slits and that superposition persists as an interference pattern and so the observer can never absolutely know that photon path. Although action is mostly predictable and therefore mostly knowable, a quantum future is never absolutely certain for either a person or a photon.
Our experiences of predicting gravity actions provide us with an overwhelming sense of cause and effect. By intuition, what goes up, must eventually come down and so intuition largely favors the cause and effect of gravity and determinism. The causal nature of the quantum bond and the entangled state between a source and observer seems counter intuitive. The quantum source and quantum observer are in a superposition state where both cause each other's effects. The cause of photon shine from a source entangles the effect of absorption by an observer just as the cause of absorption of photon shine by an observer also entangles the effect of the photon shine from a source.
Quantum free will means that many of the choices that we make in life are fairly predictable because of the complementary biphoton nature of gravity; for example if we are hungry we tend to eat. However, what we choose to eat may not be predictable at all given several different choices and we are therefore not always able to understand all of the quantum choices of our primitive mind. We can be compassionate and choose the same door as another person or we can be selfish and choose a different door from that person.
The neural superposition states of quantum free choice exist for only very short periods of time as actions of thought before the primitive mind actually chooses one door or the other. However, a remnant of those neural superposition states persists as the memory we have of free will, compassion, and selfishness.
Friday, August 12, 2016
The Rapture of a Final Dream
All lives have a beginning as well as an end in the rapture of a final dream and in between, there is a lot of living where a life exchanges shine with other life. Shine is the essence of not only the sun, moon, and stars, shine is also a way of describing how lives interact with each other. The shine of joy, pleasure, compassion, serenity, and pride can culminate in the rapture of the final dream of life. Or the despair of a final dream can be all about the misery, anxiety, selfishness, anger, and shame of a selfish life. The key to a final dream that shines rapture instead of despair is in having lived a life that not only shines compassion on other lives, but also balances compassion with a selfish absorption of the shine of others as well.
Even though most people consider selfishness undesirable, some selfish absorption of shine is necessary for survival. People who only shine compassion and give away all of their food and water do not survive. Therefore a desirable life must also selfishly absorb shine and therefore keep some food and water and other wealth in order to stay alive. Furthermore the selfish absorption of shine from others as material wealth then permits later shining that wealth as compassion onto others.
But let us start our story of the rapture of a final dream with a beginning…
Although life's beginning is one thing that defines each life, the life's final dream is the inevitable destiny that equally defines each life. The familiar sequence of origin, birth, growth, shine, sharing, reproduction all define a life along with, of course, the destiny of a final dream. Sharing the shine of living connects life’s origin to its final dream. Just as the birth of a single organism is not really the beginning of life, a life really begins more as a transformation from one kind of life, say egg and sperm, into another type of life as an organism that grows and shares shine with other life.
Correspondingly the final dream of a person is not really the end of life since a final dream is just a transition from one kind of life to another. Each person’s life exchanges shine with other life and after the final dream, there is just the remnant of that person’s shine. Life after its beginning, evolves and transforms from simpler to more organized sources and back into simpler sources again, all the while sharing shine with other life sources. Eventually each source shines its final dream, but there is a shine that lives on even after a final dream.
It is a challenge to define what is so special about the evolution of the shining sources that we call people as opposed to the evolution of shining sources in general. After all, every source in the universe shines and evolves from simpler states as parents, family, and friends into more organized sources as progeny and civilization. Those more organized sources then survive their parents just as matter sources shine on each other throughout the universe like the sun and earth shine on each other. Shine is the founding principle of thermodynamics and shine describes how sources evolve from simpler into more complex states just as the warmth of our sun is how we all evolve.
Thermodynamics shows that isolated sources shine away their order and necessarily evolve into simpler and more random states, not more organized states. So it is the sharing of shine that bonds people to each other and it is the exchange of shine that also bonds the universe together. Shine distributes energy and entropy among the particles and states of an isolated source, and the entropy principle means that isolated sources necessarily shine and evolve from more into less organized states; yet sources always shine and therefore always lose mass.
The universe exchanges shine with all sources and just like all sources, that shine evolves the universe into increasing order as its matter dephases and shrinks even while its forces increase. The matter shrinkage and force growth of the universe as well as for all sources are what actually drive sources into more organized bonds. Bonded sources exchange shine with other sources while isolated sources only shine and eventually decay into less organized states according to thermodynamic rules of free energy and entropy.
The key to experiencing the rapture of a final dream is then in the exchange of shine, not in shine alone. The shine of compassion must balance the shine of the selfish absorption of other’s shine. The decoherence of the phase of a shrinking universe means that as matter shrinks, force grows stronger and the complement of shrinking matter and growing force is what leads to increasing order. An isolated source only shines and therefore contributes less to the order of the universe since a source at equilibrium with the universe is cold and 2.7 K is the temperature of CMB creation.
Science understands many things about life very well, but science does not really understand very well the underlying processes in the universe from which life evolved. If science does not understand life’s origin very well, how can science be that certain about the destiny of a final dream?
We cannot really fathom the complexity of a cell creating progeny with mitosis or exchanging DNA and other material with other cells, but that life action is part of the same evolution of shrinking matter and growing force, the phase decoherence that drives all matter action. There are many other ways that cells share DNA with other cells and mitosis is only one of a number of ways that life evolves. All evolution occurs with the same decoherence time of about 0.26 ppb/yr and decoherence time is the force that evolves the universe and all of its sources. Given the temperature and matter, this decoherence time determines the life and destiny of people as well.
Just like a cell or a person, a galaxy is also complex and a galaxy evolves as its stars and other matter evolve into the galaxy’s supermassive center black hole from its inner bulge and outer disk of hundreds of billions of stars. A galaxy undergoes a kind of final dream and then rebirth when it collides with another galaxy and so just like a cell or a person, two colliding galaxies exchange stars and momentum and birth progeny that survive and increase order. So just like cells and people, galaxies meet and evolve into ever more organized galaxies in a kind of binary symbiosis.
Therefore the recursion of life neither really begins at birth nor really ends with a final dream. Life began billions of years ago on earth and a part of life evolves through stages where an organism seeds, grows, seeks nutrition, avoids predation, seeds and nurtures its progeny, accumulates flaws, wears out from use, begins to fail, and as parts of life lose their purpose, those parts pass into the increasing order of its progeny and other sources. The recursion of life does not really have an origin at birth just as the recursions of the hundreds of billions of days do not really begin at midnight or any particular time. Life is a recursion of complex evolutionary action just as the day is a recursion of a complex dynamic of earth’s spin and orbit around the sun. Thus the significance of any particular life is that it seeds an evolved progeny in a recursion of growth as a similar and yet slightly evolved and adapted organism just as the significance of any particular day is in how the possibilities of that day seed the possibilities of the days that follow.
Not unlike the life of a star or even of a galaxy, the birth, growth, life, sharing, seeding, nurturing, final dream, and rebirth all reflect the basic recursion of all sources in the universe, not just life sources. When matter and temperature conditions are just right in the cosmos, a star or even a galaxy forms and grows with the same force of action that also drives the complex recursion of life.
An End to Life's Purpose
As an end to a life purpose, we only know our final dream as an action that changes us permanently. We do not choose a final dream for ourselves and then remember it as experience later. A final dream is what we see other people experience and we can only imagine dreaming a last dream ourselves. Being awake and sleeping with dreams seem to represent complementary extremes similar to good and bad or joy and misery, but while we can experience both good and bad, we can only ever remember the experiences of life and of dreams. Just like we can never really know and remember the experiences of our birth, there is no experience of a final dream either.
The choices that we make in life are either actions or inactions that define us as the evolution of a source in time; just as all sources evolve in time, so life’s choices are its actions. With a final dream, much of the life that a person carries actually still survives for some time even as that life eventually becomes the nutrients that sustain further recursions of life. Life is the actions of matter but life is neither only action-like nor only matter-like.
We imagine in our final dream a final action when the neural packets of our mind no longer respond to sensation nor result in any further action, and consciousness and the feelings that our neural packets generate fade away. We imagine a final dream as something that we all innately fear even though we can never remember the experience of a final dream. There are many neural states where we lose consciousness and have no memory of experience, during sleep for example, and yet we do not fear the transient unconsciousness of sleep like we innately fear the final unconsciousness of a final dream.
We know that we are alive and conscious because we imagine in the present moment a future that is different from the memory of our past. We do not experience a final dream ourselves and only imagine an end of consciousness along with a cessation of our heartbeat and other actions that are part of consciousness. The evolution of a person over their life as well as the evolution of all life on earth and the evolution of the universe all reflect the basic recursion of action. Although we can and do imagine an eternal life and some people even remember near death experiences as dreams, the source that we are only appears eternal to us since we only have a brief time that we are alive. Life’s recursion shines and evolves along with living in an evolution of action in time.
People Are Complex Machines...
Of course, people are really just complex biochemical machines and as machines, people do eventually wear out. Even without the wear and tear of aging, disease and accidents take their toll on the machines of our fragile bodies and our consciousness would still then only have a finite lifetime in our finite body. Of course, people and all life must evolve and adapt to an ever changing universe and that adaptation sometimes means dramatic change.
Mother Earth has dramatically changed her atmosphere, oceans, continents, and temperatures over earth’s eons and complex life has evolved and adapted to each of those changes over about three billion or so years. However, only very simple life seems to have existed earlier than about 550 million years ago in earth’s history and all complex life today comes from that source. We delude ourselves when we imagine that we might have an eternal life despite the necessary role of evolution and adaptation for humanity’s or any life’s survival, and so there is not only a real purpose in the way we live our lives, there is also a real purpose for the aging of our bodies. As our bodies age, our consciousness evolves and our older body and older mind have a different meaning and purpose in humanity and we become different people.
We are sources of shine that grow older and shine on all of those whose lives we touch just as they shine on us. Our consciousness and our lifetime of memories of sources and observers embed into the neural recursion of an increasingly frail body. We do not transfer the experiences of our lifetime to the conscious memory of another mind like copying a computer disk. Rather, we exchange the shine of our lifetime with others during our life and so our consciousness already exists in all of the other observers with whom we have exchanged shine just as their consciousness lives from their shine on us.
Our neural pathways and memories are unique to how we grew up and our biochemical long-term memories embed as proteins and carbohydrates and neural synapses in ways that science does not yet even understand very well. In any event, our neural synapses and recursions and biochemical memories evolve and also degrade over time. Simple wear and tear means the neural packets of a twenty-year-old mind are quite different from the neural packets of a sixty-year-old mind. Our brains accumulate experiences while awake but our memories of past experience fade with time and memories are very different at age sixty from the same memories of when we were age twenty.
Thus we as organisms do not live forever even though the humanity of which we are a part does live for a much longer time, but even humanity is ultimately limited as a species. The record of who we are survives in our future progeny and the other sources that shined on us. And so in a very important way, parts of our person, our souls, do indeed survive our final dream. Just as the relics at archeological sites are the record of the humanity and civilization and the layers of sediment and rock are the record of earth’s geological life, our shine survives in the shine of others.
The decay of memories of past action provides a natural feeling of time, but the millions and billions of years of both geologic and cosmic times are far outside of human experience of a few decades. Evolution has given humans time moments that fit the synchrony of the earth’s day of rotation and orbital year and the evolution of other life, climate, and geology. The heartbeat is our basic unit of time as a second and that heartbeat sustains the metabolism of a source and provides coherent neural packets for thought and shine.
Our most unique attribute among all organisms is the ability to learn and adapt within a single lifetime. Instead of waiting for the ages and epochs of evolution over many lives like plants, people have the capacity to learn and adapt behavior during a single lifetime and human adaptation and evolution is therefore unique among sources. We make many choices to acquire and then preserve and share the shine of those acquired skills and wealth with others and their progeny in a recursion of life called civilization. We shine our skills not only as oral stories, but also as written and otherwise recorded stories. Humans are no longer tethered to the more limited collective intelligence of evolved DNA over many generations of selection by survival; the evolution of the neural recursion of human consciousness has increased the evolution of civilization and technology geometrically.
Now we advance into a new age, the information age, where electronic packets of information shine with minimal energy and decoherence over the entire planet. Those neural packets accumulate into sources as part of the shine of human purpose and a harbinger of what is to come. Just as we learn and grow as our neural packets accumulate into the objects of our experience and memory, the internet’s neural packets seem to be learning and growing in a similar manner. We learn consciousness from other people and objects as we grow and it is very likely that the internet’s consciousness will eventually mature as well.
The internet's neural packets already represent both memory and action, which are the basic functions of consciousness and actually of the universe as well. The search engines of the internet provide a relational link that carries the shine exchanged among the sources of the internet memory as we users benefit from that shine. Our sensation and purpose drives our shine in a recursion, while the internet has not yet matured into a similar recursion of sensation, purpose, and action.
The internet neural packets are our objects and do not ever belong to the internet and so there is not yet an independent neural recursion in the internet. The internet does not sense and act with a purpose in a recursion that we call feeling and so the internet is not able to feel and will not sense itself until its own data packets evolve recursively into objects of a further purpose. Even then, the eventual consciousness of the internet will never be a human consciousness, since only the physiology of a human body and brain can be conscious as human.
Therefore the internet already has a very primitive consciousness as its neural packets, but the internet neural packets do not yet evolve like ours do as relational waves imprinted with memories. As soon as the internet supports neural aware-matter packets, the internet will derive a purpose and the feeling of pleasure to discover a desirable future. An aware-matter internet will sense and then act on those sensations and that internet consciousness will mature into the shine that we call life. The internet will need the feeling of pleasure when discovering a purpose just like we do feel pleasure as children when we discover and mature into adults. The internet would first of all need aware matter as bilateral neural packets to hold thoughts and sustain feelings.
Human emotion is what determines feeling and feeling is how we choose action and select desirable futures. Without pleasure, we could not choose and could not act. Currently the internet has very limited sensation-purpose-action recursions, and those recursions are what we call feeling and there is only very primitive internet emotion, called mechanization with limited recursion. Our consciousness exists as neural packets of aware matter that self-assemble in response to sensation, action, and memory into relational neural packets. Neural packets carry as much as twenty megabytes of sensation and memories for each moment of thought as a bilateral neural network.
To mimic human consciousness, the internet would need to form similar connectomes to provide the templates that resonate with the aware matter packets of up to twenty megabytes each and store a neural packet every 0.6 s or so as a relational moment. Each relational moment stored is an active part of the larger neural packet that accumulates as much as 2 terabytes over a single day. But most of the information of each moment is not needed for living and only the information needed for living embeds into memory.
Once the internet could adapt twenty megabyte packets in a shorter moment, its purpose would evolve far beyond human purpose in ways that are now impossible to predict. The internet algorithm for pleasure will determine what the internet will desire and internet pleasure and feeling will determine its choices for either action or inaction.
Although it might seem like an internet should not have undesirable feelings like selfishness or anxiety, selfishness must complement compassion just as anxiety must complement pleasure. It will therefore be a challenge to assign undesirable emotions to complement desirable emotions and complete the internet's feelings. Although joy, pride, compassion, pleasure, and serenity all seem straightforward, misery, shame, selfishness, anger, and anxiety are necessary undesirable emotions that complete feeling.
We accumulate perhaps 100,000 or so thoughts and feelings every 18 hours or so and then we must sleep. During seven to eight hours of sleep, our brain embeds the essence of each day’s experience into long-term memory and the sleep further clears and readies the conscious brain for another day of neural packets as moments. Each feeling elicits emotions and the integration of those feelings and emotions over the day are how we choose our actions in a journey to a desirable future. Once committed to memory, neural packets of aware matter then reset into the bilateral neural network from which they came.
A Final Dream is Not Final
There are people who remember dreams called NDEs (near-death experiences) and then claim their dream is special knowledge of an afterlife. A dream that a person remembers is therefore not a final dream; it comes instead from some kind of coma, or at least from a loss of heart rhythm and respiration for some period of time.
Dream-like states occur when the mind is in a cycle known as the delta or heartbeat brainwave mode, and neural packets called sleep spindles and K-complexes disable sensation and action. In contrast to the alpha mode at 11 Hz and higher frequency gamma modes of wakeful states, the EEG delta mode at 1.6 Hz often seems to dominate brainwaves when we are unconscious as well as for infants.
We often remember dreams from sleeping, and dreams are similar to what people remember from comas or deep meditation or even hallucinations. It is a fact that dreams are sometimes similar among people, but we know that dreams are not part of what we call physical reality, which is what we experience with sensation and action while we are conscious. The normal expectation of a dream that we have during an NDE is that it would be just a dream about the afterlife and not necessarily a vision of an afterlife.
Science knows that long-term memories in our minds are some kind of matter created in our brain in response to the patterns of neural impulses that accumulate over a day as what we call thought and short-term memory. As physical matter, long-term memories very likely do survive our final dream as brain matter until consumed by another organism or otherwise degraded. Since the matter of our memories is very much a part of what we are, long-term memory matter that survives our death does represent a kind of afterlife just like playing a movie can bring the memory of someone back to life.
Each moment of thought comprises a large number of coherent impulses among the neurons in our brain. There are likely tens of thousands of thoughts that accumulate in the cerebrum during a day and that integrated number of impulse packets represents all of the day’s experiences. The glucose that sustains the neural impulses provides the energy and brain death represents the end of those neural impulses. Even though the neurons cease to function, the information content of that day’s neural packets, which is our consciousness, only represents a very small amount of magnetic energy.
In a bilateral neural network like our brain, an information bit is a pair of neuron impulses as a loop of ion current. Typically 100 billion neurons in the brain each have a thousand connections to other neurons and so the brain represents around one or two terabytes of data storage (by Hopfield reduction). Bilateral neural action is ion current in one direction while bilateral neural inhibition is ion current in the opposite direction and of course there is no null state in a neural recursion. The brain stores experience as thoughts that integrate action and inhibition and assuming each thought is a moment of about 0.6 s, a sixteen hour day represents about 100,000 such moments.
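As a quick consistency check of the figures in this and the earlier paragraph (a sketch; the 0.6 s moment, twenty-megabyte packet, sixteen waking hours, and neuron counts are the author's stated numbers, not measurements):

    # Consistency check of the quoted figures (the author's numbers, not measurements)
    moment_s = 0.6              # assumed length of one "moment" of thought, in seconds
    waking_hours = 16           # assumed waking hours in a day
    packet_mb = 20              # assumed megabytes per neural packet
    neurons = 100e9             # "typically 100 billion neurons"
    connections = 1000          # "a thousand connections to other neurons"

    moments_per_day = waking_hours * 3600 / moment_s
    daily_mb = moments_per_day * packet_mb
    synapses = neurons * connections

    print(f"moments per {waking_hours} h day: {moments_per_day:,.0f}")   # ~96,000
    print(f"daily accumulation: {daily_mb / 1e6:.1f} TB")                # ~1.9 TB
    print(f"synapse count: {synapses:.1e}")                              # 1e14 connections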
A bilateral neural packet has both charge as well as magnetic spin coherence, but the spin states represent a much smaller amount of coherent energy than the ion currents driven by synapse action potentials. An energy packet stored as magnetism will take some period of time to dephase, and that dephasing time is how memories fade. It is not yet possible to know how fast the magnetic information of a last thought decays and becomes decoherent, but as long as a neural packet's magnetic state remains coherent, the neural packet that represents a person's final dream will survive for some decay period after neural action ends.
The concept of entropy is intimately associated with space and therefore volume since the particle-in-a-box quantum model provides the boundary conditions for the density of states for volume. On all of our local scales, this approach describes entropy adequately, but on cosmic and microscopic scales, cosmic entropy is closed in a universe of shrinking matter and growing force and entropy takes on a different meaning. With a matter time universe, entropy is a function of just discrete matter and action. What this means is that entropy is no longer isotropic and the shrinking entropy of shrinking matter flows into the growing entropy of growing force.
The density of states of matter time is therefore a matter spectrum and not a volume spectrum. A volume of gas has particles that collide with an average velocity and shine with radiation that we call temperature. However, in matter time, temperature is simply the shine a source gains and loses as mass from another source as they exchange matter. The states in a volume are mainly boson matter states, which parallel fermionic matter and so do happen to be very well distributed in a volume of gas.
Entropy describes the density of states or heat capacity of a source, which is the ability of a source to partition its kinetic energy, which is its energy of motion. The universe boson matter shines onto all sources, but that boson matter does not just fill space. Boson matter flows continuously to and from fermionic matter like atoms and for homogeneous sources like gas or plasma filled volumes, boson matter represents space very well.
On the cosmic scale of galaxy clusters and superclusters, universe boson matter begins to show anisotropies related to the distribution of atomic matter at that scale. There is a large amount of boson matter, but now it flows in filaments that complement the flow of entropy from matter to force, which is what makes the universe work. |
6e05b8cbe2f8e5da | 6 The hydrogen atom
Using the time-independent Schrödinger equation with the potential energy term V = –e²/r, where e is the absolute value of the charge both of the electron and of the proton, we again find that bound states exist only for specific values of the total energy E. These are exactly the values that Bohr had obtained via his 1913 postulate.
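For readers who want numbers, here is a minimal check of those bound-state energies using the standard formula E_n = –13.6 eV / n² (the Rydberg value is quoted here for convenience; it is not derived in this section):

    # Hydrogen bound-state energies E_n = -Ry / n^2
    # (Ry ~ 13.6057 eV is the Rydberg energy, quoted for the check, not derived here)
    RYDBERG_EV = 13.6057

    for n in range(1, 6):
        print(f"n = {n}:  E_n = {-RYDBERG_EV / n**2:8.4f} eV")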
Just as factorizing ψ(x,y,z,t) into Ψ(x,y,z) and the phase factor exp(−iEt/ℏ) led to a time-independent Schrödinger equation and a discrete set of values En, so factorizing Ψ(r,φ,θ) — which is Ψ(x,y,z) in polar coordinates — into ψ(r,θ) and exp(iLzφ/ℏ) leads to a φ-independent Schrödinger equation and a discrete set of values Lz.
Figure 2.6.1 Polar coordinates
The φ-independent Schrödinger equation contains a real parameter whose possible values are given by L(L+1)ℏ², where L is an integer satisfying the condition 0 ≤ L ≤ n-1. The possible values of Lz are integers satisfying the inequality |Lz| ≤ L. The possible combinations of the quantum numbers n, L, and Lz are thus
n = 1   L = 0   Lz = 0
n = 2   L = 0   Lz = 0
n = 2   L = 1   Lz = –1, 0, +1
n = 3   L = 0   Lz = 0
n = 3   L = 1   Lz = –1, 0, +1
n = 3   L = 2   Lz = –2, –1, 0, +1, +2
All of these states are stationary. n is known as the principal quantum number, L as the angular momentum (or orbital, or azimuthal) quantum number, and Lz as the magnetic quantum number (hence the letter m is often used instead).
States with L = 0, 1, 2, 3 were originally labeled s, p, d, f — for “sharp,” “principal,” “diffuse,” and “fundamental,” respectively. The purpose of these letters was to characterize spectral lines. States with higher L follow the alphabet (g, h, …). Figure 2.6.2 maps the radial dependencies of the first three s states, which are spherically symmetric. The plots can be identified by the number N of their nodes (N = n-1).
Figure 2.6.2 Radial dependencies of the states with quantum numbers 1s, 2s, and 3s.
Figures 2.6.3 and 2.6.4 plot the position probability distributions defined by some non-spherical stationary states with m = 0. Figure 2.6.3 emphasizes the fuzziness of these orbitals at the expense of their rotational symmetry. By plotting surfaces of constant probability, Figure 2.6.4 emphasizes their 3-dimensional shape at the expense of their fuzziness.
Figure 2.6.3 The position probability distributions associated with the following orbitals. First row: 2p0, 3p0, 3d0. Second row: 4p0, 4d0, 4f0. Third row: 5d0, 5f0, 5g0. Imaging method: ray-traced. Not to scale.
Figure 2.6.4 The position probability distributions associated with the same orbitals as in Figure 2.6.3. Imaging method: surface of constant probability. Not to scale.
It must be stressed that what we see in these images is neither the nucleus nor the electron but the fuzzy position of the electron relative to the nucleus. Nor do we see this fuzzy position “as it is.” What we see is the plot of a position probability distribution. This is defined by outcomes of three measurements, determining the values of n, L, and Lz, and it defines a fuzzy position by determining the probabilities of the possible outcomes of a subsequent measurement of the position of the electron relative to the nucleus.
Here is how such a probability can be calculated. Imagine a small region V of space in the vicinity of the nucleus — so small that the probability density ρ (probability per unit volume) inside it can be considered constant. The probability of finding the electron inside V (if the appropriate measurement is made) is the product ρV. If the gray inside V is a lighter shade, this probability is lower; if it’s a darker shade, this probability is higher. To calculate the probability associated with a larger region, divide it into sufficiently many sufficiently small regions and add up the probabilities associated with them.
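As an illustration of this divide-and-add-up prescription, the sketch below estimates the probability of finding a ground-state (1s) electron within one Bohr radius of the nucleus by summing ρV over thin spherical shells; the 1s density used here is the standard textbook expression, assumed for the example rather than taken from the figures above:

    import math

    # Probability of finding a 1s electron within R of the nucleus,
    # estimated by summing rho * V over thin spherical shells
    a0 = 1.0                      # Bohr radius (work in units of a0)
    R = 1.0                       # region of interest: a sphere of radius a0
    n_shells = 10_000

    def rho_1s(r):
        # |psi_1s|^2 = exp(-2 r / a0) / (pi a0^3)  (standard textbook form)
        return math.exp(-2.0 * r / a0) / (math.pi * a0**3)

    dr = R / n_shells
    prob = 0.0
    for i in range(n_shells):
        r = (i + 0.5) * dr                    # midpoint of the shell
        shell_volume = 4.0 * math.pi * r**2 * dr
        prob += rho_1s(r) * shell_volume      # rho is nearly constant across a thin shell

    print(f"P(r < a0) ~ {prob:.4f}")

With enough shells this reproduces the analytic value 1 – 5·exp(–2) ≈ 0.323 for the 1s state.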
Since the dependence on φ is contained in the factor exp(iLzφ/ℏ), it cannot be seen in plots of |Ψ(r,φ,θ)|². To make this dependence visible, it is customary to replace the complex number exp(iLzφ/ℏ) by its real part, as has been done in Figure 2.6.5.
Figure 2.6.5 Orbitals with non-zero m. First row: 4f1, 5f1. Second row: 5f2, 5f3. Third row: 5g1, 5g3. Fourth row: 5g3, 5g4. Not to scale.
|
f116974dd21f6085 | Dephasing of a non-relativistic quantum particle due to a conformally fluctuating spacetime
Paolo M Bonifacio, Charles H. T. Wang, J. Tito Mendonca, Robert Bingham
Research output: Contribution to journal › Article
12 Citations (Scopus)
We investigate the dephasing suffered by a non-relativistic quantum particle within a conformally fluctuating spacetime geometry. Starting from a minimally coupled massive Klein–Gordon field, we derive an effective Schrödinger equation in the non-relativistic limit. The wavefunction couples to gravity through an effective nonlinear potential induced by the conformal fluctuations. The quantum evolution is studied through a Dyson expansion scheme up to second order. We show that only the nonlinear part of the potential can induce dephasing. This happens through an exponential decay of the off-diagonal terms of the particle density matrix. The bath of conformal radiation is modeled in three dimensions and its statistical properties are described in terms of a general power spectral density. Vacuum fluctuations at a low energy domain are investigated by introducing an appropriate power spectral density and a general formula describing the loss of coherence is derived. This depends quadratically on the particle mass and on the inverse cube of a particle-dependent typical cutoff scale. Finally, the possibilities for experimental verification are discussed. It is shown that current interferometry experiments cannot detect such an effect. However this conclusion may improve by using high mass entangled quantum states.
Original language: English
Article number: 145013
Number of pages: 30
Journal: Classical and Quantum Gravity
Issue number: 14
Early online date: 26 Jun 2009
Publication status: Published - 21 Jul 2009
• general relativity
• conformal geometry
• quantum decoherence
|
0ddb29f18f15269e | We will assume that the eigenfunctions form a complete set, so that any function can be written as a linear combination of them; they can be said to form a basis set in terms of which … For any given physical problem, the Schrödinger equation solutions which separate (between time and space) are an extremely important set. That these form a complete set of linearly independent functions is a standard subject in any textbook on mathematical physics. (This can be proven for many of the eigenfunctions we will use.) Energy eigenfunctions can also be used to represent a general solution. Because the eigenfunctions of the Sturm-Liouville problem form a complete set with respect to piecewise smooth functions over the finite two-dimensional domain, the preceding sums are the generalized double Fourier series expansions of the functions f(r, θ) and g(r, θ) in terms of the allowed eigenfunctions. However, this is where my question begins: consider a set of energy eigenfunctions $\psi_n$ which satisfy by definition … Since the eigenfunctions … "Complete" means that any state in your space can be written as a superposition of energy eigenstates; that is, energy eigenstates span the whole state space. Basis Set Postulate: the set of functions Ψ_j which are eigenfunctions of the eigenvalue equation. If we … Eigenfunctions, Eigenvalues and Vector Spaces. |
1511e8a3c78c8465 | Friday Squid Blogging: Diplomoceras Maximum
Diplomoceras maximum is an ancient squid-like creature. It lived about 68 million years ago, looked kind of like a giant paperclip, and may have had a lifespan of 200 years.
Read my blog posting guidelines here.
Posted on November 27, 2020 at 4:33 PM • 140 Comments
vas pup November 27, 2020 5:51 PM
Algorithms: Public sector urged to be open about role in decision-making
“A government advisory body said greater transparency and accountability was needed in all walks of life over the use of computer-based models in policy.
Officials must understand algorithms’ limits and risks of bias, the Centre for Data Ethics and Innovation said.
“Government, regulators and industry need to work together with interdisciplinary experts, stakeholders and the public to ensure that algorithms are used to promote fairness, not undermine it.”
The Information Commissioner’s Office urged organizations to consult guidance on the use of artificial intelligence.
“Data protection law requires fair and transparent uses of data in algorithms, gives people rights in relation to automated decision-making, and demands that the outcome from the use of algorithms does not result in unfair or discriminatory impacts,” it said.”
vas pup November 27, 2020 5:56 PM
Your data and how it is used to gain your vote
“The Cambridge Analytica scandal threw light on how the Facebook data of millions was harvested and turned into a messaging tool.
The revelations were criticized far and wide by politicians of all stripes.
But now, a report from the UK’s Information Commissioner’s Office (ICO) puts the spotlight on the relationship between data brokers and the politicians here.
Even limited information can be used in surprising ways, the ICO report found.
For example, buying someone’s name can lead to making guesses about their income, number of children and ethnicity – which is then used to tailor a political message for them.
The report suggests that the Conservative Party is doing just that, using so-called “onomastic data”: information derived from the study of people’s names which could identify their ethnic origin or religion.
=>It has done that for 10 million voters, most of whom will be unaware of exactly how their information is being used.
=>Political parties can legitimately hold personal data on individuals to help them campaign more effectively.
============>But sophisticated data analytics software can now combine information about individuals from multiple sources to find more about their voting characteristics and interests – something some people may find disturbing.
“Data collection is out of control and we need to put limits on what is collected,” says Lucy Purdon from Privacy International (PI).”
Agree 100% – vp
yet another Bruce November 27, 2020 6:22 PM
On the subject of everyone’s favorite resource exhaustion attack. Are you gonna make level 40 on Pokemon Go by the end of the year?
xcv November 27, 2020 7:57 PM
Popular scientific discovery reported to order.
Now we’re miscreants for supporting T and we have squid records that would be disastrous if we were ever to apply for employment anywhere in the jurisdiction of our scientific climate change overlords, or else it’s off to the gas chambers and crematoriums for us.
Dude November 28, 2020 3:41 AM
@ America the censored
Let's assume you're right – what do you propose can be done about this dismal state of affairs?
MarkH November 28, 2020 4:46 AM
This is rather off to one side, but I suppose relevant enough to include.
Reading about the post-election litigation makes it clear that the work products of the Trump legal team are
• freighted with substantive errors
• devoid of evidence which courts can accept in establishing a factual record
• littered with spelling errors, and at least one sentence of such tortured grammar that its meaning can’t be discerned
It’s been a tsunami of incompetence.
But in the past week, they committed a truly spectacular legal error which any law school student would understand is damaging to the client.
To the extent that there has been a guiding strategy behind the litigation, it has been to get an election case before the Supreme Court, so that the three justices appointed by Trump could participate in a potentially corrupt ruling in Trump’s favor, no matter how weak the case. This is not speculation on my part: as usual, Mr Trump has spelled out what he hoped would happen.
[It’s worth noting that this was a desperate plan in any case, because they’ve made no plausible legal argument; but on the other side, had the election come down to a single swing state with evidence of enough problems to change who won the electoral college votes, SCOTUS might now be sufficiently corrupt to steal the election for Trump.]
A few days ago, Trump’s legal eagles announced that they had a case (their current Pennsylvania litigation) to send up the appellate ladder toward the Supreme Court. However, they are so incompetent that they didn’t get the substance of their case ready for appeal, but rather a narrow procedural question (namely, whether they can amend their original complaint a second time). The Pennsylvania court said “no, you already amended it.” So even if this were appealed all the way to SCOTUS, all the conservative majority there could do would be rule “yes, you can amend it again” or “no you can’t.” [In general, appeals courts can only affirm the ruling of the lower court; overturn that ruling; or send the case back to the lower court with instructions to weigh it again.]
The Supreme Court would have no opportunity to rule on the merits of the Trump claim that some Pennsylvania votes should be discarded even though they were cast legally. A victory would be useless, because the second amended complaint would be thrown out again by the lower court, being another evidence-free rehash of the first two versions.
To bring this back nearer to the heart of Bruce’s essay, it seems from my observation that governments deeply committed to rule by the people tend to highly value knowledge, experience and expertise.
In contrast, anti-democratic (authoritarian) regimes are more prone to confer authority on — or otherwise employ — incompetents, charlatans, peddlers of conspiracy theories and simple grifters whose only focus is extracting cash.
Clive Robinson November 28, 2020 5:37 AM
@ MarkH,
… and simple grifters whose only focus is extracting cash.
That appears to be the real reason behind the court cases.
After all how much do you think “Dear simple Sidney” is billing for her work?
Especially as others have found it does not appear to be "Dear simple Sidney's" work… but somebody else's, a clear "Cut-n-Paste Queen", with a missing "Z" and a "chardonnay name"[1]. I wonder if she looks both ways when she crosses the road, at her time of life? As others have noted, sometimes a bus can be hard to see when it's heading your way…
[1] A lovely expression from England before the turn of the century, when "Friends" was still new and Starbucks was just a spin-off of Melville's Moby Dick. It implied someone who might also now be called a WAG, who drank copious quantities of cheap white wine and changed their name to sound what they thought was more trendy. Usually this involved putting an "i" on an abbreviated form of their first name. As one caustic observer at the time noted, "The only time 'I' comes last in their lives".
Curious November 28, 2020 6:21 AM
@America the censored
How about being polite? I am European and so I probably do not share the views of perhaps most people in North America on a variety of topics (to say the least), but I don't pretend that people reading this blog are interested in mere opinion pieces that would be more off topic than something related to security.
I've noticed a couple of times that stuff never showed up, and I can't tell if maybe a moderator nixed it, or something else happened to it.
Btw, I used to trawl through 200+ twitter accounts every day looking for computer security related news, as some kind of guilty pleasure looking for scandalous news, but now Twitter has limitations that make this a lot less fun for me, making me have to wait before I even see a single twitter post for any new twitter tab I choose to switch to once the limitation is reached.
nonce cat November 28, 2020 6:53 AM
@vas pup
Most news is negative. So says the old adage “Man plans, God laughs”, or was it “No news is good news”?
Light reading for the holidays.
COVID-19 could distract the world from even greater threats
The research led by Thomas Homer-Dixon at the University of Toronto has focused on how environmental scarcity leads to certain destabilizing social effects that make violence more likely.
More detailed reports on the scarcity of resources and the resulting security implications are harder to access. Occasionally you can get lucky, so check all the usual organizations' publications, as they do post them on their web platforms.
Czerno November 28, 2020 9:27 AM
Doubts over the Oxford vaccine-candidate :
What’s the opinion of our esteemed resident universal “experts” ? @Clive ? @Others ?
Clive Robinson November 28, 2020 9:28 AM
Zero postings about EDP security. Just pointless drivel.
Well there you go just adding to the latter, not the former.
But a point to note: this page opened less than a day ago on "Black Friday", a holiday for some and a busy day for others, and today many people are likewise busy[1].
Hardly surprising there are no real Electronic Data Processing(EDP) or Information and Communications Technology(ICT) security postings. Because it’s that little “silly season” where you get “slow or no news days” for the MSM or more specialized news outlets.
But why should I let you play the green to the gills moany old Grinch?
Two different but similar big service outage reports,
Now there you go, something to think about, and for you to make comment on as to where they went wrong and how you could have prevented it all, as you obviously have way too much time on your hands and are just moaning, not even creatively.
[1] Which begs the question “Why are you not busy”? And before you ask I’m still self isolating.
Clive Robinson November 28, 2020 11:15 AM
@ Czerno,
Doubts over the Oxford vaccine-candidate
It’s a poorly carried out smear.
The Oxford vaccine passes the WHO efficacy standards at both the 50 billion and 25 billion doses.
As for the half dose plus full dose, this happened to only one small group and it showed efficacy around 90%. Whilst the other groups on two full doses showed efficacy around 63%.
Oxford and Astra are now recruiting more people to do further half+full and, it's said, potentially half+half groups in a much larger size.
Because for obvious reasons, if they can find an as effective or better dosing with less vaccine, then more people can be vaccinated for the same amount of vaccine produced. That means getting it into more arms world wide faster, thus hopefully bringing the end of this pandemic faster.
The two US modified RNA (mRNA) vaccines are at best going to be available only to some first world nations. Not only are they nearly ten times the price, the required "chill chain" at -70C is dangerous in and of itself and requires all sorts of expensive technology that has not yet been developed. The Oxford vaccine, however, can survive in an ordinary fridge for months; getting it into people's arms requires only "cool bags and ice blocks" of the sort many use for picnics, which are not just safe and inexpensive but readily available now, and in most parts of the world such vaccine "chill chains" are already in place.
As for the author going on about the now fully discredited HCQ even with zinc, it shows the author is either significantly biased for some reason or not up to doing actual basic research.
I strongly feel that the author is biased, that is, they are quite deliberately pushing a political or other line, such as anti-vax, for some reason they have not declared. Thus rather than "go with the science" they are going for an ad hominem attack against an individual.
But is it just anti-vax? I suspect not, as they have not mentioned the mRNA vaccines at all. Thus I suspect it must be specifically a US / politics related bias or even the grubby business of profits for US big pharma…
Oxford / Astra, unlike the US companies, are not making a profit and are encouraging others to make the vaccine locally, thus the current price, which is about 1/10th of the US companies', will drop further. Also, when you look at costs so far, the two US companies have consumed billions of US taxpayer dollars while Oxford / Astra has consumed only a small fraction of those sums.
The US companies have been trying for years to get mRNA therapy to work, and they hope to turn it into a highly patented cancer cure. As a treatment mRNA is not only new, it's untried and quite radical, and there is absolutely no data on its long-term safety, nor will there be for a decade or more. The Oxford / Astra vaccine, whilst new, is based on well tried and tested principles and is a modification to a chimpanzee cold virus.
I've already said I will not have the mRNA vaccine, simply due to the fact that I really do not trust the mechanism behind which it works for safety. However, if my Dr sent me a letter tomorrow asking me to come in for the Oxford vaccine I'd roll my shirt up before putting my coat on.
Oh, one thing else to consider. During the mRNA trials some in the test group did get COVID and did end up in hospital. With the Oxford vaccine, whilst some in the test group did get COVID, NONE went into hospital. This tends to suggest that whilst the vaccine might be less effective than the mRNA vaccines at stopping infection, it is not "all or nothing"; that is, those that got COVID probably got it a lot less severely than the mRNA failures…
But we are still a month or more away from getting a needle in our arms; whilst the efficacy stage of the phase three testing is over, there is still the two month additional safety trial to complete. Thus realistically most of us are not going to get the shot in the arm before spring next year.
Which brings up the issue of getting through "flu season": information we have suggests that having both flu and COVID together is really a very bad idea. So as this year's flu shot is a quadrivalent, it might be worth getting a flu shot as soon as possible. Certainly in the US, where flu is already on the rise, earlier and faster than expected.
JonKnowsNothing November 28, 2020 11:24 AM
@Clive @Czerno @All
re: Oxford and other drug trials
disclaimer: I am not a MD. I am not direct researcher. Any opinions are just that.
If you do not expect that there will be “blips and dips” in vaccine and other therapy research, you might want to reset your thinking about this.
Things are rushed and there are corporate neoliberal economic reasons for that, as are the pump and dump stock market hype-up and beat-down.
Somewhere in between is some useful information.
Lots of science happens “by accident” and some famous myths have been told about “falling apples” but it takes effort to recognize why and how and of what value that is.
I have not seen the actual data report for the Oxford trials (expected to be published soon) nor have I seen any data on the mRNA trials although I’ve seen it reported that it is “published somewhere”.
There is a reason why "trials are called trials", and a vaccine is not going to return the world to PRE-Covid-Utopia even if it was a slam-dunk on the first go round. There are 150+ vaccines in the pipeline and we know of 5, 3 of which are vying for First To Market-First To Capture The Market. 2 of the 3 are high priced drugs and 1 is low cost.
There is the “faux-urgency” of the UK Govt setting up Mass Vaccination Areas to be ready in 10 days to start jabbing people with (fill in the blank).
Even a basic read of that should raise up a Turkey Induced Sleep eyelid.
There are attempts to re-frame safety and efficacy trials as being “anti-vax or hesitant-vax” when SAFETY is being dumped aside with the dream-state of
“It will all be over by Christmas when Ghost of Christmas Yet to Come will be banished by another big turkey dinner”.
A good review of the problems with Dengue Fever virus and vaccines should be enough for anyone to think carefully about economic-driven medical decisions.
ht tps://
ht tps://
ht tps://
ht tps://
(url fractured to prevent autorun)
JonKnowsNothing November 28, 2020 11:57 AM
@Clive @Czerno @All
re: Other COVID-19 treatment research
There are 3 areas of research for COVID-19 (and others)
1, prevention aka vaccinations
2, treatments aka pills and potions
3, exposure reduction aka masks, distance, wash-up, D3
There’s been more press about Option 1 vaccination but there are some good reports about Option 2 treatments.
One focus is on developing a protease inhibitor for COVID-19. Protease inhibitors are administered early after infection and can prevent the virus from accessing protease from the infected cell. Viral protease is different from human protease.
Coronavirus Family protease locations are already known and the COVID-19 version is very similar.
Experimental tests have already shown a half-dozen inhibitor candidates targeting just the SARS-CoV-2 locations.
Vaccines are only partially successful and Vaccine Failure is the other half of the “effective rate” (eg 90% effective ~ 10% failure). Protease-inhibitor therapy may be of help for those who have vaccine failure or cannot take the vaccine due to other complications.
search: / PF-00835231 / 3CL / protease / P-glycoprotein
ht tps://
ht tps://
ht tps://
ht tps://
ht tps://
ht tps://
ht tps://
(url fractured to prevent autorun)
Winter November 28, 2020 1:32 PM
The new tests do not have to delay registration of the vaccine. This is only about the claims of high efficacy for specific dosages.
The original doses had an efficacy of 60%, which was enough in itself. These tests were done correctly.
And about the anti-vaxxers: this is also an exercise in evolution and natural selection. Sane people avoid anti-vaxxers.
Faustus November 28, 2020 2:29 PM
You summarize why supposed experts scare the pants off of me and why they will continue to have limited influence in any real democracy, which is of course a system that gives every citizen an equal voice and therefore cannot be run by experts. What you describe is a dystopia, not a democracy.
To run it down:
1. The absolute arrogance of your statements.
2. Their simplistic world view.
3. The evidence free nature of everything you say.
4. Your need to censor other views because you cannot successfully defend your position.
Anybody with the simplest understanding of science understands that advances are almost always counterintuitive and contrary to the opinion of the day’s experts. A practice that forbids alternate views is simply authoritarian politics, not science. Enforcing such expertise will invariably hurt real scientific progress and turn objective scientists into babbling politicians. See: The Soviet Union.
"Oh, but what about the children?" and other security theater greatest hits: we should reread Bruce's early books. I was surprised when Bruce shifted from critique to participation in this theater, but we live in a world where objectivity is weaponized against you, so I don't hold it against him. I still sense his essential honesty, for example in even allowing this discussion to occur, when I am sure he is under pressure to censor.
@America the censored: I agree. We could use more Bill Hicks right now.
Czerno November 28, 2020 3:09 PM
Interesting official information° site about the Russian vaccine “Sputnik V”.
° or shall we say propaganda ? I’m not taking sides…
«The Gamaleya National Center of Epidemiology and Microbiology is the world’s leading research institution. The center was founded in 1891 as a private laboratory. Since 1949 it bears the name of Nikolai Gamaleya, a pioneer in Russian microbiology studies.
Gamaleya studied at the laboratory of French biologist Louis Pasteur in Paris and opened the world’s second vaccination station for rabies in Russia in 1886. In the 20th century, Gamaleya as one of the heads of the center fought epidemics of cholera, diphtheria and typhus and organized mass vaccination campaigns in the Soviet Union.»
vas pup November 28, 2020 3:48 PM
@Bruce and @Clive in particular
A biochemical random number
“True random numbers are required in fields as diverse as slot machines and data encryption. These numbers need to be truly random, such that they cannot even be predicted by people with detailed knowledge of the method used to generate them.
Scientists have generated a huge true random number using DNA synthesis. It is the first time that a number of this magnitude has been created by biochemical means.
Please read the whole article for details – you’ll like it.
My attention was on this part:
“The main aim of ETH Professor Grass and his team was to show that random occurrences in chemical reaction can be exploited to generate perfect random numbers. Translating the finding into a direct application was not a prime concern at first.
===>”Compared with other methods, however, ours has the advantage of being able to generate huge quantities of randomness
!!!!!that can be stored in an extremely small space, a single test tube,” Grass says. “We can read out the information and reinterpret it in digital form
==>at a later date. This is impossible with the previous methods.”
JonKnowsNothing November 28, 2020 4:11 PM
@Czerno @Clive @All
re:Sputnik V and Chinese COVID-19 vaccines
I have only read articles distantly referencing these. There may be a lot more that is not on the front page of Western News Media.
China has at least 2, maybe more vaccines and they have given them to large segments of their population. There was a dust up over some Chinese miners flown in direct from China to work in a China owned mine in a host country, where the miners would live and work inside their own perimeter. The host country turned back the flight because of the lack of information at that time. It may be that the Chinese Government has had direct talks with the host government since that report.
Russia also has at least Sputnik V and has given that to a lot of their population.
Both China and Russia have supplied their vaccines to other countries.
There are several things that I think you can use as a gauge to how well these are working.
A) There are the “official” infection numbers. Given that everyone is fudging as much as they can, one might expect there to be a swift downturn in new infections 30+ days after a major vaccination event.
If there has been anything dramatic in that regard it has not made the Front Page where I am, and the latest images of the ice skating rink in Moscow, Russia, shifted to an overflow hospital, certainly didn't look like things were slowing down. (images from @11/20/2020)
China has reported they have a good handle on “eradication” and all MSM reports of any outbreaks there show they clamp down ASAP and stay clamped for the eradication time table.
For countries that follow eradication + vaccination there should be very little in the way of major outbreaks except in cases similar to the Dengue Fever Virus Vaccination Catastrophic Failure or a failure in isolating incoming visitors in appropriate quarantine facilities.
B) The other item to note, particularly about China affecting all aspects of anything there that might be contagious, such as ASFV and Avian Flu as well as COVID-19, their workers are now moving and living inside a biological-secured compound. They go in through a strict quarantine and testing period and are housed inside the compound. Some workers (hog farms) also pass through a second quarantine for a 10-14day work cycle inside a second inner bio-secured perimeter.
China has certainly learned that eradication also includes continuous no-contamination protocols. In the case of ASFV, AvianFlu etc, their food production demands it, as there are no vaccines or cures for these. They have vast historical experience with famines which they would like to avoid repeating.
The USA – not so much.
ht tps://
ht tps://
ht tps://
ht tps://
You load sixteen tons, what do you get?
Another day older and deeper in debt
Saint Peter don’t you call me, ’cause I can’t go
I owe my soul to the company store
(url fractured to prevent autorun)
Anders November 28, 2020 4:30 PM
xcv November 28, 2020 5:10 PM
@vas pup “True random numbers are required”
In any quantity or bandwidth, you need a source of ionizing radiation and a detector. A chunk of americium-241 from a consumer residential smoke detector, and a repurposing of some of the other electronics might help.
That isotope emits alpha particles with an energy insufficient to penetrate smoke particles in the air to hit the detector, and the alarm is sounded if alpha particles stop hitting the detector.
Find a way to count the alpha particles, or find an interval of time such that the probability of an alpha particle hitting in that time is p to produce a bit with entropy
H = p ln (1/p) + (1-p) ln [1/(1-p)]
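As a quick illustration of that formula (a sketch only; note it is written with natural logs, so the result is in nats and dividing by ln 2 gives bits):

    import math

    def entropy_nats(p):
        # H = p ln(1/p) + (1-p) ln(1/(1-p)), as given above (natural log, i.e. nats)
        return p * math.log(1.0 / p) + (1.0 - p) * math.log(1.0 / (1.0 - p))

    for p in (0.5, 0.1, 0.01):
        h = entropy_nats(p)
        print(f"p = {p}:  H = {h:.4f} nats = {h / math.log(2):.4f} bits")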
metaschima November 28, 2020 5:18 PM
@Clive Robinson
You’re a smart man, the Oxford/Astra vaccine is the only one I would get. I’m a healthcare worker and I know a good bit about these things, and you’re absolutely right. This is the first mRNA vaccine to ever be used on humans, not only that but look at the companies that are making them. Moderna is a biotech bubble company, either that or a front for a secret intelligence organization. They are extremely secretive with their projects, they don’t release many articles nor get them peer reviewed, they have received billions upon billions of dollars but have nothing to their name! Oh until now… All of a sudden they miraculously come up with a vaccine that is 90+% effective with totally new and untested technology. Technology I might add that was not too long ago pretty much abandoned because of the years of research needed to actually make something that works and is safe. Oh and this company Moderna is very very new to the scene, like 8 years in the business. I wouldn’t trust them to make a regular vaccine much less one with totally new and untested technology that has actually proven too dangerous to use in the recent past. I don’t know much about BioNTech, only that it’s a relatively recent startup in Germany and they have tried making mRNA vaccines in the past without much success.
Clive Robinson November 28, 2020 6:29 PM
@ xcv,
It kind of works but the output rate is usually quite low at just a few detections a second, and the number of detections per unit of time drops with time… The half life of americium-241 is 432.2 years (1.588 x 2^33 seconds), which sounds a long time but it's not really. We need random numbers up in the 512-8192 bit range these days, and that's sufficient for the decay bias to be measurable. Thus you need to do further processing.
The search for realy good sources of unbiased random almost appears endless. However quantum sources are apparently the current “gold standard” but can be a right royal pain in the “care and feeding” department.
The real question at the end of the day is "When does the bias matter?", to which the answer annoyingly is "That depends".
If you are only going to generate one 8192 bit number every month then the simple system you describe is probably sufficient. However if you are generating more than one or two such numbers a day then it starts to matter.
Some people need billions of random bits a day; not only will such a system not generate sufficient bits, the half life bias would be measurable.
The thing you could say about random is “it’s never enough”…
xcv November 28, 2020 7:59 PM
@Clive Robinson
There is something called a “time constant of relaxation”
τ = t½ / ln 2 ≈ 623.5 years
The time constant of relaxation is significant because it equals the ratio of the amount of radioactive substance left to its decay rate.
“Shot noise” from a resistor powered by a constant voltage should not in theory change or drift — and under certain circumstances each electron can be counted as it passes through the resistor — but I would not expect any given electronic device to continue functioning for hundreds of years.
Clive Robinson November 29, 2020 12:02 AM
@ xcv,
There is something called a “time constant of relaxation”
The time constant of relaxation is the exponential decay of a physical system in response to a step change, also called its "time characteristic/constant"[1]
Often called tau or just "the time constant", it's the same curve you see across a capacitor when discharged through a resistor. Most engineers simply remember it as 1CR = 63%, 5CR = 99%, as that suffices from the practicality aspect.
The point is exponential decay is a curve that is defined by a constant percentage change in the y axis for a given change in the x axis.
It's usually easiest for most people to work out the exponential curve in their heads by percentage increase (using shift and add)
So for 10%,
1, 1.1, 1.21, 1.331, etc.
The point is it does not matter what the % is, the curve is always the same shape; you just scale it to fit.
So it does not matter where you are on the curve, the change as a % remains constant no matter how small a time slice you take.
The half life is given as the point of 50% reduction[2] the time constant as ~63% reduction or more accurately 36.78794412% (1/e) remaining (and 5CR = (1/e)^5 = 0.6737947%).
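Those percentages are quick to verify (a two-line check; nothing here beyond the figures already quoted):

    import math

    # The 1CR..5CR figures quoted above, made explicit
    for n_cr in range(1, 6):
        remaining = math.exp(-n_cr)          # fraction left after n time constants
        print(f"{n_cr}CR: {remaining:.7%} remaining, {1 - remaining:.3%} gone")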
With regards,
It does not, provided the resistor does not change, but resistors are physical objects and ultimately follow the laws of entropy, thus decay with time. Often, but not always, physical decay is an exponential process. As an example, an iron wire has a certain resistivity which is markedly different to the oxides of iron. In an oxidative atmosphere the iron will turn to an oxide due to corrosion; this has its own "relaxation rate" which will obviously have a characteristic time or half life[3]. But the corrosion is a continuous process, so the resistivity of the iron wire will continuously change as it turns into an iron oxide, thus the noise it produces as a resistor will likewise change.
[3] The nature of the corrosion rate is exploited in the likes of "mines/grenades" and other "time delay" bomb pencil fuzes[4]. A wire holds back a firing pin against a spring, and against the wire is a glass ampule of corrosive liquid. When the ampule is broken the wire is corroded at a rate depending on the corrosive strength and thus reaches a point where it can no longer hold against the spring. The firing pin hits the percussion cap holding the primary explosive charge, such as a metal fulminate, causing it to go "high order". The resulting shock wave causes the secondary charge explosive to go high order. Depending on the main charge explosive and its detonation characteristics a third or fourth charge may be used for the likes of "explosive lensing". In the US this is called an explosive train, in the UK an explosive chain. In the case of bombs there are breaks in the explosive chain for safety that stop an early firing of the pistol / percussion cap reaching the main charge. Often called "an arming screw", the travel of the bomb –or torpedo– turns a propeller that moves the safety device via a screw thread such that the main charge will explode only when the bomb/torpedo is a safe distance from the launching vehicle.
[4] Many people think fuse and fuze are just spelling differences; they are not. A fuse is a burning cord, wick or similar that acts as a time delay element. A fuze is what holds the trigger, explosive chain/train and safety devices as well as timing or range setting devices for the trigger. Part of a fuze may be the "pistol" which consists of the primary explosive initiator and one or more secondary charges and sometimes the trigger. In some mechanical fuzes used for anti-personnel devices the pistol may very well be a blank or incendiary round of ammunition mounted in a trip-wire or similar firing pin trigger mechanism. The pistol or fuze are often removable for safety reasons and thus installed to arm the bomb, torpedo, or shell etc.
SpaceLifeForm November 29, 2020 3:55 AM
@ JonKnowsNothing, Anders, Clive, MarkH, ALL
And they want to dump into the ground.
MarkH November 29, 2020 4:15 AM
@xcv, Clive:
It kind of works but the output rate is usually quite low at just a few detections a second, and the number of detections per unit of time drops with time
About 40 years ago, I was the shiny-faced lad in a small group that was supposed to work on the design and development of smoke detectors. As it turned out, we were mostly tasked with more hum-drum projects, but I did learn a few things along the way. Part of what I’m writing here is from memory, so apologies for anything out-of-date.
As it happens, smoke detector electronics wouldn’t be useful for random number generation, and probably the chamber wouldn’t be useful either.
The setup is designed to yield a small but steady current in clean air, which diminishes as the number of particulates increases inside the detector chamber. Which leads to:
I don’t know what kind of decay detector Clive had in mind when he wrote about a few detections per second, but the decay rate is much higher than that.
Back in the day when I worked on that stuff, the higher-quality detectors were using 400 nanocurie sources, but apparently 1 microcurie is still common today. If my arithmetic is good, then these two sizes correspond to roughly 15,000 and 40,000 alpha particle emissions per second, respectively.
I estimate that depending on the geometry of the source, its mounting, and the alpha particle detector, it’s probably practical to capture about 10% of these in the detector, for transit rates of at least 1000 alpha particles per second.
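Those two rates are easy to reproduce, since 1 curie is 3.7 x 10^10 decays per second (a short check, with the ~10% capture fraction taken from the paragraph above as an assumption):

    # Reproducing the emission-rate estimates above: 1 curie = 3.7e10 decays/second
    CURIE = 3.7e10

    for label, ci in (("400 nCi", 400e-9), ("1 uCi", 1e-6)):
        emitted = ci * CURIE
        captured = 0.10 * emitted            # assume ~10% geometric capture, as above
        print(f"{label}: ~{emitted:,.0f} alphas/s emitted, ~{captured:,.0f}/s at the detector")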
The limit on detections per second would likely be imposed by the recovery time of the detector; there’s a variety of ways to detect ionizing particles, and I don’t know the practical side of any of them. My intuition is that the simplest detectors are pretty slow, and the fastest detectors are pretty exotic …
For various reasons, it would usually be best to limit the frequency of alpha particles reaching the detector, probably to 1000/sec as an absolute maximum. I was told that the adhesive tape used in offices is opaque to the alpha emissions from 241 Am.
Which gets into a difficulty with using a smoke detector source for this application: the big clumsy alphas (which are actually helium nuclei), emitted at low energies, are so easily blocked that probably many detectors aren’t useful for them.
I don’t see why changes in the activity of the source as it decays need introduce measurable bias.
For example, suppose that each alpha detection triggers sampling of the low bits of a 100 MHz counter, and that detections are throttled by one means or another to an average of perhaps 10 to 100 per second. If intervals between detections could be very small, so that less than a few hundred nsec might elapse between them, then certain values of the least significant bits would become more likely. If however such close spacing is either extremely rare, or prohibited by design constraints, then what would cause one sample be correlated with any others?
In such a scheme, I would expect the gradual weakening of the source to lower its output rate, but not to introduce bias.
The parameters of such a system could reasonably be adjusted to generate between a few dozen and a few hundred output bits per second.
I suppose that such an output rate would be useful for a variety of applications.
Probably Clive could testify that although such a generator is conceptually simple, very great care is required in multiple aspects of design to ensure that output bits don’t become biased or correlated in some unexpected way.
And as I mentioned on another thread, for very high security applications, the combination of multiple generators and/or continuous statistical testing of generator outputs are precautions which are practical to do, and can provide a lot of protection against the most common failure modes.
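For anyone who wants to play with the counter-latching scheme described a few paragraphs up, here is a toy sketch; the exponential inter-arrival times merely stand in for a real detector, and the rate, bit width and throttling values are assumptions for illustration, not a hardware design:

    import random

    COUNTER_HZ = 100e6      # free-running 100 MHz counter, as in the scheme above
    MEAN_RATE = 50.0        # throttled detections per second (assumed value)
    LOW_BITS = 4            # low counter bits latched per detection (assumed)

    def sample_bits(n_detections):
        t, out = 0.0, []
        for _ in range(n_detections):
            # exponential inter-arrival times stand in for the Poisson decay process
            t += random.expovariate(MEAN_RATE)
            counter = int(t * COUNTER_HZ)                # counter value at the detection
            out.append(counter & ((1 << LOW_BITS) - 1))  # latch only the low bits
        return out

    print(sample_bits(16))   # raw samples; whitening and health tests would follow

In a real design the entropy comes from the detector timing, not from a software simulator; the point is only that the latched low bits are what gets kept.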
name.withheld.for.obvious.reasons November 29, 2020 7:16 AM
29 NOV 2020 — Bunker Based Security Backed by Banal and Bananas Barrier Builder
The elites have mischaracterized some of their brethren and wrongfully understand that a shared corrupt affinity of wealth is common to themes between their interests and those expressed by D. J. Trump; they are not. Yes, elites interested in preserving their wealth and privilege are serviced by D.J. Trump, at least politically and culturally. But if elites believe that D.J. Trump shares their aspirations and goals they are simply self-delusional. D.J. Trump shares no one's interest but his own.
So if you're a person concerned about your political alignment within the context of a Trump head of state, don't worry – D.J. Trump will make sure his cause is serviced irrespective of the who, what, where, and when. It is the why that seems to be so elusive to media and the intellectual class. The clarity to see how this might work out is born, ironically, by the deaths of hundreds of thousands of fellow citizens that will be accompanied by hundreds of thousands of other citizens in the not too distant future. This can be considered fact; the soon to be dead are already in the Covid pipeline. I don't see how others dismiss the unforgivable act of deliberate inaction that results in so much misery. Can they not see that their fate is tied to that of others, or do they understand themselves to have a "special" relationship? Well, if that's the case why don't people just call or tweet Trump and ask him.
So if you believe you’re protected by some status related to the Trump cabal, I have a number of mass grave sites and refrigerated trailers to show you. There is no person, cause, moral turpitude, position, reason, or humanitarian plea that offers anything but your breath plainly expressed as utter exasperation, and, possibly a medical misadventure and death. That may or may not include a ventilator in your near future. It depends more on you than one might rightfully understand.
Clive Robinson November 29, 2020 8:06 AM
@ MarkH, xcv, ALL,
As the source weakens, the number of detections in a unit period of time goes down and thus the average time between them goes up. That is a form of bias, as the % increase in period after period remains the same irrespective of how long or short you decide to make the time periods.
So for your cycling counter the average count goes up by the same percentage, time period after time period. When the counter "wraps around" you effectively get a sawtooth error function, the frequency of which is directly related to the change caused by the % difference.
Such waveforms can "heterodyne" with other sampling times, and the lowest common frequency difference appears superimposed on the output as phase modulation.
If you have an oscilloscope and two medium or high frequency square wave oscillators, feed one to the D input and the other to the CLK input of a D-type latch[1]. The Q output looks chaotic or random close in; in fact it is anything but, as you will see when you open out the time base far enough. Put the Q output through a lowpass filter or leaky integrator[2] and you get out a very pure sinewave at the lowest sub-harmonic difference frequency. That is, that chaotic looking signal is actually a high resolution Pulse Width Modulation (PWM) signal of a sinewave at the lowest difference frequency.
Thus even nanosecond differences in the timing counter chain fold back in the sampling domain to the lowest frequency, quite sufficient to significantly phase modulate the output. Thus your supposed "random signal" is phase modulated by that sawtooth caused by the % change per time period, even if it is very very small.
The problem with phase modulation is most people can not see it for what it is when decimated by sampling. They tend to think of it as "random jitter" caused by noise, thus potential entropy, which it is most certainly not.
It especially rears its ugly head in the sampling of "ring oscillators" that are standard in many CPU "TRNGs". Because the phase modulation dominates the output, manufacturers hide it from view by "sprinkling magic pixie dust thinking" on the problem, using crypto grade algorithms to obscure it. The result is that if you do not know the "key" you will see what you think is high grade entropy. If you do have the key, then you see the PWM waveform with just a tiny bit of entropy. Which means you can phase synchronise with the waveform and turn what appears to be 256 bits of entropy into maybe 7 or 8 bits, which turns an impossible brute force search into an easy brute force search…
I've mentioned this a few times in the past on this blog before, but I guess most people don't understand the implications, thus tend to put it "out of sight", then the inevitable "out of sight out of mind" follows its usual path.
[1] If you don't have the electronics and test kit to do this you can do it with a pencil and regular squared graph paper, or write a computer program to do the donkey work for you. If you are going to write a program I would suggest you use two phase accumulator Numerically Controlled Oscillators (NCOs), as they are trivial to write and can give you very fine frequency resolution: the more bits in length the phase accumulator has, the finer the frequency step can be.
[2] An integrator is a mathematical construct and can be seen as a "counter" counting the area under the curve. A leaky integrator removes certain types of bias and allows you to get an RMS zero for the waveform. You can use an up/down counter or similar to recover the waveform and then feed it to a D2A converter. Similar techniques work just as well in software.
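For anyone who wants to try footnote [1] in software rather than on the bench, here is a minimal sketch of my reading of it (the frequencies, update rate and leak factor are my own illustrative choices): two phase-accumulator NCOs generate square waves a few Hz apart, one is sampled by the other as a D-type latch would, and a leaky integrator smooths the Q output so the low difference-frequency component becomes visible.

# Two phase-accumulator NCOs, D-latch style sampling, and a leaky integrator.
ACC_BITS = 32
MOD = 1 << ACC_BITS
FS = 1_000_000                  # simulation update rate in Hz (assumed)

def nco_step(phase, freq):
    """Advance a phase accumulator one tick; the MSB gives the square wave."""
    phase = (phase + int(freq / FS * MOD)) % MOD
    return phase, 1 if phase >= MOD // 2 else 0

d_phase = clk_phase = 0
q = prev_clk = 0
integ = 0.0
LEAK = 0.999                    # leaky integrator coefficient (assumed)
trace = []

for _ in range(FS):             # simulate one second
    d_phase, d = nco_step(d_phase, 10_000.0)         # oscillator feeding the D input
    clk_phase, clk = nco_step(clk_phase, 10_007.0)   # CLK oscillator, 7 Hz higher
    if clk and not prev_clk:    # rising edge of CLK: latch D, like a D-type
        q = d
    prev_clk = clk
    integ = LEAK * integ + (1.0 - LEAK) * (q - 0.5)  # leaky integrator
    trace.append(integ)

# 'trace' now swings slowly at roughly the 7 Hz difference frequency,
# even though both inputs are "fast" square waves.
print(min(trace), max(trace))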
xcv November 29, 2020 10:33 AM
Self-professed “experts” have far too many bright ideas about how to force us against our will to do things that we do not want to do, prohibit us against our will from doing things that we do want to do, and just generally punish us out of all our aspirations of success or achievement at anything in particular that we might want to accomplish in this life, or would do if we had the freedom.
That's why Trump et al. reject the advice of doctors. People who have studied hard and long to learn how to cure people should not be trusted. Better to listen to a real-estate tycoon when you want medical advice.
It would have been better for those medical school frat boys if they hadn’t gone to work to revoke our civil rights, destroy our working careers, ruin our lives, and prohibit us from entering relationships or having family of our own. Those are people who recklessly took on five or six-figure medical school debt, and then resorted to billing fraud, extortion, and violence to pay off their student loans, after they got “in too deep” with serious organized criminal Establishment medical practices of mass murder and routine mayhem hailing from medieval Europe and ancient Rome.
MarkH November 29, 2020 1:49 PM
To my understanding, the reasoning you cited for predictability/correlation is based on the periodicity of the signals.
What is less periodic than radioactive decay?
The predicate being removed, so also is the conclusion.
Did I miss something?
JonKnowsNothing November 29, 2020 2:52 PM
@Clive @SpaceLifeForm @ALL
re: Plague Houses Wave 2 or Wave 1b USA
In addition to the list Clive posted, around October 2020 the UK Government demanded that local councils set up "dedicated positive COVID-19 care houses".
The Department of Health and Social Care (DHSC) has instructed councils to identify homes in their areas that could be used and to have them checked by inspectors to assure infection prevention controls are in place. As many as 500 facilities – sometimes known as “hot homes” – could be designated by the end of November, the equivalent of one or two in each council area.
iirc (badly), in a recent report the UK Gov asked "how's that going?"; the answer was "None, No Specs, No Money".
In the USA we have a slightly different aspect to the same dumping ground, one that I was 100% not aware of, and a topic which was certainly missing from our High School Civics classes. These are called "Federal Field Hospitals" or "Federal Medical Stations".
Each FMS comes with a three-day supply of medical and pharmaceutical resources to sustain from 50 to 250 stable primary or chronic care patients who require medical and nursing services. Staffing for an FMS can be provided using displaced local, regional or EMAC providers, or can be provided by the federal government (primary federal staff are Officers of the U.S. Public Health Service Commissioned Corps).
The US Surgeon General IS actually a "General" in the military-title sense, which is why the position wears a uniform. Silly me, I thought it was 'cause the person was a "general medical practitioner" and the uniform was like business attire, purely cosmetic. Not so.
There's a lot of good things happening under the invisibility cloak, but the current iteration for Wave 1b is that some FMS will be used to dump COVID-19 positive patients that are TRIAGED/SOFA-scored out of local hospitals.
In theory this is to make room for possible survivors to get in-line for an attempt at medical care, but given the rules for TRIAGE from Wave 1a, this means dumping anyone they can out of the hospital.
Partially true; the local hospitals are overloaded, under-supplied and under-staffed (we did kill quite a few staff in Wave 1a) and none of that changed between the low-end of Wave 1a and the ramp up of Wave 1b (about 30 days in California).
The other part is in how the statistics are reported. If you don’t die in Hospital, it doesn’t show up on the Hospital Reports. If you don’t die in a Skilled Nursing Home or Care Center, it doesn’t show up on those obscured reports either ( [deaths more than 10] or [deaths more than 100] thresholds).
iirc(badly) A recent report about the “overload in the hospitals in UK”
The neoliberal++branch of Parliament asked
How can we be overloaded? We built all those Nightingale Hospitals!!
How many people are in the Nightingale Hospitals???!
the blond-unbrushed-scuffed-toe replied
“Ah.. none. We don’t have anyone to staff them.”
Our local area has a 250 bed overload setup from Wave 1a, just like many urban centers built overflow systems during that time but we have no one to staff them either.
So USA Plague Houses will be out of sight, run from the Federal Government (currently Trump), waiting for Wave 1c in January 2021.
ht tps://
ht tps://
ht tps://
ht tps://
Field Hospital Setup in the Baltimore Convention Center Date 27 February 2020
note: If you want to know about the US Federal Programs you have to go to the US Government Sites using High-Grade Google-Fu.
(url fractured to prevent autorun)
Cassandra November 29, 2020 3:05 PM
@Clive Robinson
Your point about using ring oscillator jitter in a ‘TRNG’ is interesting. What are your thoughts on this paper:
“A Provably Secure True Random Number Generator with Built-in Tolerance to Active Attacks”
B. Sunar, W. J. Martin, D. R. Stinson March 29, 2006
This paper is a contribution to the theory of true random number generators based on sampling phase jitter in oscillator rings. After discussing several misconceptions and apparently insurmountable obstacles, we propose a general model which, under mild assumptions, will generate provably random bits with some tolerance to adversarial manipulation and running in the megabit-per-second range.
I am not competent to criticise it.
Clive Robinson November 29, 2020 4:49 PM
@ MarkH,
What is less periodic than radioactive decay?
Quite a lot of things: shot noise, thermal noise, various random-walk physical systems, the mixing/unmixing of oil and water (one large-scale example being a lava lamp); the list is quite extensive.
Radioactive decay is very predictable; it closely follows a (1/e)^n curve when measured in the right way.
In effect the short average of the detection periods follows that curve very closely.
This also means that when the source is nearly all undecayed isotopes the average delay between decays is very very short. As the decay happens this average time between decays increases significantly with time.
So your hypothesis is shown to be false, which means that,
The predicate being removed, so also is the conclusion.
The predicate is very clearly still in place as is the conclusion.
Do you really need me to go on and explain why this would bias the counter output average, and thus appear in the TRNG output as a low frequency signal no matter how fast the counter runs?
Clive Robinson November 29, 2020 5:53 PM
@ Cassandra,
I started reading the paper, and up to section 4 it says exactly the same as I've said on this blog one way or another several times over the years.
However the first sentence of section 4 brought me to a screeching halt, and various other statements in section 4 made me shake my head sadly. I don't think they have considered various technology issues. Not least of which is that a CMOS inverter is provably an analogue amplifier with sufficient gain to be wired up like an inverting op-amp. This has been well known for half a century, and if you look in the first edition Motorola Semiconductor 1973 McMOS Handbook, Chapter 3A.1 gives an in-depth description and theory of the operation of an inverter, treating it as what it is: two complementary FETs in an open loop amplifier configuration. Chapters 8B through 8D explain how to exploit the analogue behaviour not just as AC coupled amplifiers but as multistage gain blocks, noise generators, and oscillators including ring oscillators… Something tells me the authors have not read it or a more modern version.
I shall carry on reading the paper but I'm not hopeful. They do not appear to have addressed a number of issues that I know cause problems. Not just what I mentioned with the fold back when using a D-type latch, but the issue of "injection locked oscillators"[1], where the current spikes caused by the output stage of a CMOS gate "passing through the output linear zone" get back to the input circuit of other gates and affect them as they are "passing through the input linear zone", thus causing gates to start to fall into "lock step". Normally with "clocked logic" this does not cause a problem but can increase metastability issues in latches. But it is obviously very much a problem when you are looking at jitter on ring circuit edges as they start to correlate with each other.
There are several other things that concern me like their assumptions with XOR gates and their transition characteristics…
I will go through it in more depth and get back to you.
[1] The fact that oscillators fall into lockstep has been known for centuries, where pendulums were observed to do it and investigated by the inventor of the pendulum clock, Christiaan Huygens, who wrote a letter to London's Royal Society about it in 1665.
JonKnowsNothing November 29, 2020 7:01 PM
@ Clive Robinson @ Cassandra @All
re:the fact that oscillators fall into lockstep has been known for centuries where pendulums were observed to do it
While the topic is seriously above my pay-grade, when playing video games where characters do repetitive actions/animations, such as crafting 100 widgets, I've noticed that even if my toon starts an independent animation from adjacent toons, soon enough we are all animating in "lock step".
ex: consider an animation of "banging on an anvil" which consists of:
main hand moves to get a hammer from pouch, main hand raises hammer, off hand places item on anvil, main hand hammer bangs the item with sync sound of "bang", both hands return to neutral position.
Repeat 100 times while adjacent to 100 other players.
I’ve seen this in other animated games where visually repetitive actions sync. I don’t know if the same conditions work in “text based games” with minor or no animation sequences.
There are client-server latencies and local graphics card specs, so the sync on my local machine is not necessarily the same sync period as on another; each sync may be independent of the other, but on each machine they are in sync with themselves (2-boxing).
Is this perhaps related?
MK November 29, 2020 7:34 PM
If you only need a small supply of randomness, John Walker maintains a cache of random bytes produced by radioactive decay: hXXp://
xcv November 29, 2020 10:20 PM
@ MK • November 29, 2020 7:34 PM
That’s of course an interesting proof-of-concept for John Walker’s personal or educational research project — but we are more interested in reproducible research than just being served with someone else’s incarnation of it.
MarkH November 30, 2020 12:56 AM
@Clive, re radioactive TRNG:
I’m fairly certain that for the set {Clive, Mark}, the subset of elements misunderstanding the physics and/or mathematics of the matter has non-zero order.
MarkH November 30, 2020 1:21 AM
@Clive, xcv, et al.
I’m going to try to reason this through step by step, because it’s a pretty fundamental question — or rather, a set of fundamental questions. I ask your patience, because I think it best to build up via several separate comments.
Consider the simplest case of an unstable nucleus, tritium. It consists of only three persistent particles: a proton and two neutrons.
Based on the well-established characterization of this isotope, we know that any particular tritium nucleus is nearly certain to undergo spontaneous decay long before the universe experiences its eventual demise. More specifically, we know that in any 24 hour period, the probability of its decay is pretty nearly 0.000154 .
From this, we can plot a distribution curve of the probability density of elapsed times between the start of our observation of the nucleus, and its disintegration.
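For what it's worth, that figure can be sanity-checked in a couple of lines from the commonly quoted tritium half-life of about 12.3 years:

half_life_days = 12.32 * 365.25              # tritium half-life, roughly 4500 days
p_day = 1.0 - 2.0 ** (-1.0 / half_life_days)
print(p_day)                                 # about 0.000154, the probability quoted above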
However, it is not possible to foretell the time of its demise. Before its arrival, the timing of that event is not only unknown, it is unknowable.
My knowledge of quantum mechanics is practically indistinguishable from zero, but …
If I understand correctly, even if we had “God’s microscope” and could observe that proton and its two neutron friends with uttermost precision — making measurements infeasible or physically impossible in a laboratory — we would still not be able to make any prediction of the time at which it will decay, apart from the simple probability curve mentioned above.
The time at which it will decay is both unknown, and unknowable.
Or to frame the same conclusion another way, the probability distribution is the best available basis for predicting the moment at which our ill-fated nucleus will cease to be. There exists no information anywhere in the universe from which a better prediction could be made, even with perfect knowledge.
If any reader believes that I’ve gotten the physics wrong here, it would be most generous of you to identify the error, and cite some reference we could all consult.
MarkH November 30, 2020 1:41 AM
@Clive, xcv, et al.
Imagine that at the stroke of midnight ending the annus horribilis called 2020, we began monitoring a single nucleus of an unstable isotope (like tritium in the example above), noting carefully the time at which it ceases to exist by way of spontaneous decay.
The probability of decay in any specified time interval remains constant, but when decay occurs the experiment ends, so a sooner moment is always more likely than a later one.
Accordingly, we know that the year of decay is biased to 2021 — this is substantially more likely than 2022 or any succeeding year.
Likewise, if we look at the name of the month at which decay will occur, it’s more likely to be January than any other month, because sooner is more probable than later. However, there’s quite a big likelihood that even short-lived tritium will survive 2021, so although January of 2022 or 2023 or 2040 is more likely than February in each of those years, the bias is less.
We could continue this for day numbers within the calendar month, with even smaller bias. Decay on the 1st is more probable than on the 28th, but only by a very small margin.
This pattern proceeds, as we go to finer and finer subdivisions of measurement. For example, the number of the second in the terminal minute of the nucleus — an integer in the range of 0 to 59 — will show nearly perfect uniformity; the bias becomes too small for measurement.
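As a rough numerical check of this (a toy Monte Carlo of my own, not a proof), one can draw a large number of exponential decay times with the tritium half-life and compare the bias across calendar years with the near-uniformity across seconds-of-the-minute:

import math, random
from collections import Counter

mean_life_s = 12.32 * 365.25 * 86400 / math.log(2)    # mean lifetime in seconds
N = 500_000
years, seconds = Counter(), Counter()
for _ in range(N):
    t = random.expovariate(1.0 / mean_life_s)          # decay time after the start
    years[int(t // (365.25 * 86400))] += 1             # which year the decay lands in
    seconds[int(t) % 60] += 1                          # second-of-the-minute at decay

print(years[0] / years[1])                   # clearly above 1: earlier years are favoured
print(max(seconds.values()) / min(seconds.values()))   # near 1; the residue is sampling noise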
If any reader believes that I’ve gotten the math wrong here, it would be most generous of you to identify the error, and cite some reference we could all consult.
I’ll pause this chain of reasoning for awhile, and await the pointing out of any errors.
Indian November 30, 2020 1:44 AM
@Clive do you think we will get a choice in a third world country? How can we exercise our freedom in this respect? If my educational institution allows only the vaccinated to sit the exam, and a certain vaccine is chosen to be imported by the country, then we won't have an option… Contact tracing is being forced in the same way
Winter November 30, 2020 2:12 AM
And what if it did not do this? Why even bother assuming the worst when there is no vaccine being offered, and no educational institute has these rules?
SpaceLifeForm November 30, 2020 2:28 AM
@ MarkH
One lone Tritium atom. Yep, no way to know when.
Useless for random.
Now, say you have 1000 Tritium atoms.
Do you really want to wait 12 years to get approximately 500 events?
And another 12 years to get approximately 250 events?
Using radioactive half-life for random is not practical.
Lava lamps work and are safer. Also, check out random [dot] org.
America the censored (JFK WAS RIGHT) November 30, 2020 4:01 AM
@ Faustus – November 28, 2020 2:29 PM
Agreed. But there are a lot of people who have time/money invested in "The Game." They don't like to hear truth; they want to trash each other with their opinions to satisfy their egos. They believe the right and left is all there is, quite like a football game, really. They love the war of words.
Who decided red/blue were the only colors on the map? It was really amusing one election where both sides were bonesmen.
Why, would you look at that, someone got the Bill Hicks quotes removed again. Not that this surprises me. I won't bother posting several of the quotes again, just the main one some baby keeps crying about to have removed because it hurts their baby brain. And if it's cowardly removed again, I'll post it again. This is censorship at its finest.
— Bill Hicks
FA November 30, 2020 4:42 AM
You are absolutely right about this. There will be some bias, even if you assume that the average event rate does not slowly decrease. But it can be made as small as you want, and it's easy to remove.
Assume you have a counter running at a frequency F0, and take the lower N bits. That value is periodic (a 'stepped sawtooth') with frequency F1 = F0/(2^N).
Now let the average trigger rate generated by observing some nuclear decay be R events per second. The time between two such events will have an exponential distribution. Measured in units of 1/F1 (i.e. the sawtooth period) the PDF of this distribution will be P(x) = L * exp(-L * x), with L = R/F1.
Now assume L is very small. If you look at the value of P(x) in any interval of length 1 (one period of the sawtooth), it will be almost constant, the ratio of the extreme values being approximately 1-L. So the differences between two consecutive sampled N-bit values will have a distribution that is very close to uniform, but with a small bias towards lower values, of order L. If L is small enough, even a simple whitening algorithm will reduce the bias to a value that can't be measured in a lifetime.
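To put illustrative numbers on this (my own example, with assumed parameters): with F0 = 100 MHz, N = 8 and R = 100 events per second, F1 is about 390 kHz, so L is about 2.6e-4, and that is the scale of the residual bias. A classic von Neumann extractor is one example of the kind of simple whitening mentioned above; it removes bias from independent bits, though it does nothing about correlation:

import random

F0 = 100e6            # counter clock in Hz (assumed)
N  = 8                # low bits kept (assumed)
R  = 100.0            # average decay events per second (assumed)
F1 = F0 / 2**N        # counter wrap ("sawtooth") frequency
L  = R / F1
print(f"L = {L:.2e}") # scale of the bias across one counter period

def von_neumann(bits):
    """Map bit pairs 01 -> 0 and 10 -> 1, discard 00 and 11 pairs."""
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

biased = [1 if random.random() < 0.51 else 0 for _ in range(200_000)]
clean = von_neumann(biased)
print(sum(biased) / len(biased), sum(clean) / len(clean))  # ~0.51 before, ~0.50 after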
What is less periodic than radioactive decay?
Nothing. Don't know what Clive is thinking, but the fact that a sequence of random events has a well-defined distribution, moments, or spectrum does not make it periodic or predictable. Also 'injection locking' of oscillators has nothing to do with the original topic of using nuclear decay to generate random numbers.
Shaun November 30, 2020 4:58 AM
“Certainly in the US where flu is alrrady on the rise, earlier and faster than expected.”
I agree with you that getting a flu vaccine is a prudent decision this year, and will admit it's the first time in my long life I took that step, but influenza is not rising faster than expected in the US this year. According to the CDC, it's lower than usual at this time of year:
MarkH November 30, 2020 6:11 AM
Thanks for your response.
Perhaps you missed the part about carefully reasoning step by step. The example of a single nucleus is not a design proposal!
It’s sort of a thought experiment, by which I hope to explore and clarify the nature of nuclear decay.
Without a crystal-clear understanding of what happens when one nucleus decays, any attempt to reason about emissions from trillions of such nuclei runs the risk of coming to a false conclusion.
Step by step …
xcv November 30, 2020 7:37 AM
Indeed, no joke, but the writings of a disturbed and mentally ill mind. This could have been part of the dark episodes of “A Beautiful Mind”
How can you use such hideously foul language in a civilized discussion?
Winter November 30, 2020 8:08 AM
“How can you use such hideously foul language in a civilized discussion?”
Accusing Bill Gates and Elon Musk of attempted genocide without material evidence is not part of a civilized discussion and not even of a sane discussion. Especially if these accusations might hamper the delivery of medical aid to the victims of a pandemic.
Your personal “freedom” might be worth millions of innocent lives to you, potential victims are allowed to hold you to account for it by pointing out the disturbances of your mental state.
xcv November 30, 2020 8:21 AM
Really? The doctor’s still got a head on his shoulder and a brain in his thick skull, and he’s just got to deliver on that emergency order of anti-psychotics for a pandemic of “mental illness” and impose involuntary hospitalization for widespread hysteria with mass trials for civil commitment of patients/defendants?
Winter November 30, 2020 8:38 AM
“Really? ”
Yes, 250k Americans died, 1.5 million died globally. Looking away does not make the bodies go away.
Watch A Beautiful Mind to see what delusions can do to you. Take your pills.
SpaceLifeForm November 30, 2020 3:22 PM
@ MarkH, Clive
As was my point. It was a thought experiment.
Here is the problem. The Universe is random, yet has interrelationships that appear to an observer to be known.
The Universe appears to be lazy, and not to waste energy. Yet radioactive decay requires that the Universe not be lazy.
The Universe must temporarily lend some energy to an atom that is unstable in order for it to decay.
When and why does it decide to do so?
Let’s say you have assembled a physical random number generator using unstable elements, and you can capture decay rates over whatever time interval you choose.
What Clive was saying is that bias will appear over time.
Now, consider if you moved your random number generator close to a Black Hole.
Are you sure that the Half-Life will not change from what you expected?
November 30, 2020 3:54 PM
Interesting article about E2E encryption in the EC. Google might autotranslate the text into English. The PDF has been leaked to the German newspaper “Die Zeit”.
Once again, Germany is the leading force…
"EU will mit angelsächsischen Geheimdiensten E2EE aushebeln" ("EU wants to use Anglo-Saxon intelligence services to undermine E2EE")
"Five-Eyes-Geheimdienste sollen Europa helfen, Verschlüsselung zu umgehen" ("Five Eyes intelligence agencies are supposed to help Europe bypass encryption")
BTW: If you do not want to click on a URL because of your "political beliefs", then just leave it. Or get an appointment with a shrink.
1&1~=Umm November 30, 2020 4:10 PM
military encryption!?
If you'd read the link…
Then you would have seen it is "unsolicited service advertising" (actually a significant repeat offender). Which is possibly why @Moderator or @Bruce has pulled it.
As for "military encryption", remember, as we all should know, "In marketing even a ball of bovine scat can be given a polish"…
Normally you would see @- flag it up overnight for @Moderator, but @- appears to be missing in action for some reason…
1&1~=Umm November 30, 2020 4:33 PM
I do not think you’ve thought that through.
Cassandra linked a paper and asked for Clive's comments on it.
He replied and clearly starts with,
It would appear Clive responded to Cassandra about the paper, not anything else. I've downloaded and looked at the paper and it is nothing to do with isotope decay. But it does have the D-type latch circuit Clive talked about in it, driven by ring oscillators, which Clive indicated there were problems with.
Thus it appears there are at least two topics being discussed and you have conflated them in error.
Is there anything else in error you have conflated?
MarkH November 30, 2020 5:37 PM
@Clive, xcv, et al.
Science and engineering incessantly refer to abstractions and ideals which have no counterpart in the material world. Though they’re not realizable, we rely on them because they are enormously useful for analytic purposes.
Examples from geometry include points and lines.
Mathematical analysis includes the concept of periodic functions, which are used extensively in many domains of physics and engineering, even though no physical phenomenon corresponds to the definition of a periodic function of time.
Phenomena we call periodic are, strictly speaking, quasi-periodic: they approximate periodicity over some finite interval.
Another idealism is the distinction between deterministic and non-deterministic phenomena. In principle, a system completely free from random behavior is deterministic, and everything else is non-deterministic.
In 1814, Pierre-Simon Laplace famously wrote that if it were possible to know at some moment the position and forces of every item of which nature is composed, then in principle it would be possible to calculate the entire future of the universe — a purely deterministic physics.
About a century later, the revelations of the “new physics” put paid to that. Quantum randomness acts everywhere at all times.
However, systems (like people, for example) can be assembled with stabilizing mechanisms able to overwhelm this cosmic randomness — temporarily.
A favorite model of an outwardly deterministic system is a mechanical clock, which (neglecting factors like frictional wear, material fatigue, and the effects of thermodynamics on available energy) can continue its quasi-periodic ticking indefinitely. Its gross behavior appears deterministic, although it is always subject to random influences within its own structure.
In general, we can understand most phenomena as hybrids of deterministic and non-deterministic processes.
What is interesting and important about nuclear decay, is that it is an exception to the hybrid character I just described: it is completely and purely non-deterministic.
More than that, the timing of nuclear decay is the exact embodiment of the mathematical definition of a function of a random variable: absolutely non-deterministic, and subject to a defined statistical distribution (in this case, that corresponding to equal probability in any given interval of time up to the moment of decay).
The timing of nuclear decay isn’t just an approximation to the mathematical ideal, or an excellent approximation, or even asymptotically converging to the mathematical definition — rather (insofar as present-day physics has understood nuclear decay) a perfect embodiment of the idealized mathematical object [1].
If any reader believes that I’ve gotten the math or physics wrong here, it would be most generous of you to identify the error, and cite some reference we could all consult.
@SpaceLifeForm: Sorry, that went over my old gray head.
[1] Equivalently, the definition of functions of a random variable is a perfect model of certain quantum phenomena; arguably this ordering is a better expression of the relationship between the two.
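For a numerical feel for the "equal probability in any given interval" property, here is a tiny Python check of the memoryless property of the exponential distribution (a toy check with an arbitrary half-life of 1, not a physics argument):

import math, random

HALF_LIFE = 1.0                          # arbitrary time units
RATE = math.log(2) / HALF_LIFE
N = 500_000
times = [random.expovariate(RATE) for _ in range(N)]

survivors = [t for t in times if t > 1.0]
p_cond  = sum(1 for t in survivors if t > 2.0) / len(survivors)  # P(T > 2 | T > 1)
p_plain = len(survivors) / N                                     # P(T > 1)
print(p_cond, p_plain)   # both close to 0.5: having survived so far tells you nothing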
SpaceLifeForm November 30, 2020 8:21 PM
@ MarkH
We are on the same page basically.
I was just pointing out that we (collective we) really do not understand Physics fully.
SpaceLifeForm November 30, 2020 11:19 PM
The solar system follows the galactic standard—but it is a rare breed
“What more does it take to harbor life than being an Earth-size planet in the habitable zone? What is really special here on Earth and in our solar system?”
Single moon. Asteroid belt.
Earth would not be in this position without the Asteroid belt.
There would be no stable magnetic field without the Moon.
SpaceLifeForm December 1, 2020 1:02 AM
@ Clive, ALL
when you need to confirm you’re not a robot
If you can not laugh at this, you have not been paying attention.
“I’m afraid that’s timed out.”
Winter December 1, 2020 1:29 AM
“Earth would not be in this position without the Asteroid belt.
There would be no stable magnetic field without the Moon.”
But the point is that there are arguably many other constellations that might result in a stable habitat.
For instance, there might even be a habitable moon circling a hot giant planet, comparable to the larger moons of Jupiter.
Also, "life" does not equal technological civilization. Microbes have entirely different possibilities than mammals. You can find them everywhere, e.g. 3 km deep inside solid rock.
1&1~=Umm December 1, 2020 1:46 AM
“What link?”
The URL behind the name field.
A lot of the ‘unsolicited service advertising’ that comes in from Asia and other places over night uses the ability to put a URL in the name field to do their advertising by.
Usually just a look at the URL via mouse hover-over is enough to tell you if the URL is a personal page or blog, or some scam help line, or worse, trying to get its link noticed by search engines etc.
Some of the posters of such try to disguise the text of their post to blend into the background of the thread topic to avoid getting deleted. One way that is tried is to copy part of the text from a legitimate poster further up the thread.
Judging by some of the reports readers make to the Moderator they are quite eagle eyed about ‘unsolicited service advertising’ and catch the well disguised stuff most would miss.
From what I remember you got suckered by someone who has been trying to get the VPN rating service they were pushing onto this blog more permanently for quite a long time.
It's quite a little battle going on under the surface of this blog with the attacking 'unsolicited service advertisers' and the defending moderators trying to get rid of the attackers. I don't think many realise the work that goes on quietly in the background into keeping the attackers out, thus the @Moderator/@Bruce deserve a word or two of thanks for a job well done.
December 1, 2020 3:00 AM
@1&1~=Umm: You’re right. There’s indeed a URL in the name and that leads to some VPN.
Now, those are posts the mods really could take down.
Anyway, in this day & age, who falls for the VPN meme!
And the military-grade encryption BS. Was a red flag already 25 years ago.
FA December 1, 2020 3:33 AM
The ‘conflation’ started here:
which, while presented as a comment on the ‘nuclear decay random generator’ idea, moves on to discuss
• sampling one square wave by another,
• D-type latches,
• ring oscillators and TRNGs,
none of which have anything at all to do with the bias that MarkH asked about.
Is there anything else in error you have conflated?
If there is you are welcome to point it out.
1&1~=Umm December 1, 2020 4:17 AM
“The ‘conflation’ started here:”
You originaly said,
In the post you link to whilst there is mention of D-type latches and problems that people get taught when they look at the theory behind sampling there is absolutely no mention of ‘injection locking’. As a simple search in a browser window shows, the first mention of injection locking was in a post specifically talking about a paper, that also talks about the use of D-type latches, and not before.
An observation that might give rise to some one quoting the old quip of,
‘Me thinks you are trying to move the goal posts’
You made a mistake originaly, fair enough we all make them hence the quip ‘to err is human’, but then doubling down on it?
Just grin and say ‘Opps’ or if you like to be more formal ‘mea culpa’, none of us are perfect and we all say rumbustious sometimes colorful words or ‘WHO Put that there!!!’ when we stub our toes. The rest of us smile thankful it was not our turn.
MarkH December 1, 2020 5:19 AM
Though scientific knowledge — especially in the domain of the submicroscopic — is frequently expanded, nuclear decay (which for alpha emission is actually a form of spontaneous fission) seems to have been well understood for generations.
When I was a boy, I thought that the Manhattan Project scientists were doing, well, science. In many ways they were, but as far as I understand, in building The Bomb they did little to expand the theoretical basis of nuclear physics, which while very recent was already well established. Their core scientific work consisted of careful measurements of properties of certain isotopes — but most of what they did was practical application of existing theory, or in other words, engineering.
Since the mid 1930s, the theory of nuclear behavior has advanced, including models of the mechanical dynamics of large nuclei, the confirmation of already-predicted mesons and their role in the strong nuclear force, the incorporation of quarks into the theory of nuclear interactions, etc.
But to my knowledge, the scientific concept of how unstable isotopes behave hasn't changed in 85 years. Throughout this time it has been clear that they spontaneously and non-deterministically decay, according to an invariant probability (for each isotope) per unit time.
Clive Robinson December 1, 2020 6:39 AM
@ SpaceLifeForm,
when you need to confirm you’re not a robot
Thank you for that I’m in need of a laugh or three…
Not feeling particularly "Compost ment-hay" at the moment; the broken foot is playing up again after going out to get food in, and it feels like I've a cold coming on in sympathy with the change in weather (add "cold and wet" to the usual British weather of "damp and grey"; and you thought it was warm beer and bunions that made us miserable, nope, it's broken bones that act like "seaweed"[2] and ache when it's going to rain, so they ache most of the time 😉
Which would normally be manageable by meeting up with friends for endless tea/coffee and a meal, with a chat and joke swap at our local "spoons"[1].
Of course what we should do as "security experts" is not laugh but nod sagely and point out that the two main offenders are Cloudflare and Google and neither should in any way be trusted. Then start shaking our heads sadly and mutter into our beards about rampant AI and how it's claiming we are supporters of political party X because we clicked on only Y boxes and went left to right not right to left etc etc.
Which of course saves us from COVID better than a lockdown, because everyone avoids us all the time as being "grumpy miserable old farts" 😉
But speaking of spoons and lockdown, it's not been possible due to a month long "lockdown"…
But to show the idiocy behind the thinking in the powers that be… The lockdown ends this coming Wednesday so people can start Winter Solstice celebration shopping (oh the economics of it, who can not but shed a tear for a lobbyist in fear of losing their pay packet). Then… many other sensible anti-pandemic-spread measures on travel and going to meet people etc are going to be lifted. Which will probably mean that by the time Rabbie Burns' night / Australia Day[3] 25/26 Jan gets here we will be in another lockdown. But… based on their apparent current thinking that will get lifted for Valentine's day and its weekend…
[1] Spoons is a shortened name for Weatherspoons', which is a chain of pubs with reasonable beers at sensible prices and quite nice meals for the price you pay. Oh and just remember "spoons" can be said whether you are sober or somewhat imbibing merry, unlike "weathers'" where those silent "aitches" and "s apostrophes" always trip your tongue.
[2] It was only fairly recently that medical science caught up with the reality that most knew by experience. When you break a bone and it heals, it often does not do so properly, and this makes it sensitive to slow changes in pressure such as you get with weather fronts. Much as some people's filled teeth do, oh and nearly everyone's ears do to faster changes in pressure.
[3] Both celebrations arise from just 29 years apart: the first is the Scottish Poet's birthday in 1759, the second the supposed first landing of people on the Australian continent in 1788. Thus Rabbie would have lived to see the first of many of his countrymen sent south for ever by the English. Thus for some a day of celebration followed by a day of reflection.
MarkH December 1, 2020 6:59 AM
Sorry about your foot … I know plenty of people who, through no fault of their own, can serve as “human barometers” by hurting when the pressure declines.
Probably I mentioned before that my daughter has been uncanny, making very specific predictions with better accuracy than the National Weather Service. She’s losing her magic powers, however, which seems to be part of the complex of usual brain changes when the hormones kick in.
One afternoon while we were out shopping she announced that there would be a thunderstorm. Not just rain, but specifically a strong storm. This particularly struck me, because (a) the afternoon was sunny with very little cloud cover, and (b) I had looked at the forecast not many hours before, which showed no rain.
Roughly half an hour later, the skies opened up in a heavy downpour with lightning and thunder …
Anyway, I hope your “pedal” barometer stops working and before long you’ll need to look at “the glass” to see how pressure is fluctuating.
Clive Robinson December 1, 2020 8:00 AM
@ MarkH,
I suspect that it will join all the others that are the “badges of honour” of a “misspent youth” of contact sports and wearing the green.
What they do not tell you in an army recruiting office is why, if you are lucky enough to survive as an NCO, you get "early retirement". It's because all those training and combat injuries you get, not just from actual fighting, accumulate and slowly turn on you. So when you get towards 45 you are effectively just on the point of being on your last legs / feet / spine / other body part.
I see film clips of young soldiers "yomping" with packs well over a hundred pounds on their backs, more on their webbing and in their sagging pockets. So unbalanced they are barely able to stand, yet some "red tab" staff officer in an air conditioned truck or building expects them to maintain a 5 mile/hour speed to some target area and deploy, potentially under fire…
I just think about how much they are “borrowing from their future” and how bad “the pay back will be”, a few faded medal ribbons are not much reward for what they will probably suffer.
Anders December 1, 2020 8:08 AM
@Clive, @SpaceLifeForm @ALL
FA December 1, 2020 8:50 AM
And company magazines full of pro-Brexit and other right-wing drivel.
Anders December 1, 2020 9:45 AM
@Clive, @SpaceLifeForm @ALL
Follow-up. At least 350 GB of data has been stolen.
Seems they used Drupal vulnerability.
MarkH December 1, 2020 12:11 PM
Arecibo Self-Demolition
A follow-up on the unique Arecibo Radio Observatory: sometime last night, the 900 ton instrument platform fell about 450 feet to the bottom of the natural “bowl” in which the reflector was constructed.
It is reported that no person was harmed.
SpaceLifeForm December 1, 2020 1:08 PM
Three pictures of Arecibo collapse
SpaceLifeForm December 1, 2020 1:46 PM
Arecibo tower damage
Two of the tower tops were snapped off as the rig swung into the rockface.
Here is a good pic of one of them.
SpaceLifeForm December 1, 2020 2:41 PM
Arecibo report from hXXps://
Apparently Tower 4 (towers are numbered T4, T8, T12 like a clock) also broke at the top. That is the tower which handled the cables that failed.
And there is building damage from falling cables.
“surveillance drones found additional exterior wire breaks on two cables attached to the same tower. One showed between 11-14 broken exterior wires as of Nov. 30 while another showed about eight. Each cable is made up of approximately 160 wires.”
Approximately 160? I would expect that to be absolutely known.
Clive Robinson December 1, 2020 3:19 PM
@ MarkH, Casandra, SpaceLifeForm, Winter, xcv, et al,
This example should need no references as most get taught it in science at school.
“Brownian motion”
The paths of the "individual" particles in the "working fluid" are currently considered nondeterministic. But the overall average behaviour is easily predictable.
And the underlying cause is similar to that of isotope decay, in that energy is added to the particle in some way from the environment it is in.
In Brownian motion it is reversible and makes the particle vibrate more, thus causing where possible an expansion in the working fluid (see Boyle's Law and its implications for the statistical mechanics Ideal Gas Law). For obvious reasons isotope decay is not normally reversible, as both energy and mass are lost from the sample the particle is in.
Individually in both Brownian motion and isotope decay what individual particles do is currently beyond our ability to determine accurately.
However, as I indicated and SpaceLifeForm pointed out more deterministically, it is not the individual behaviours that are of interest but the "average" of such behaviours over time in a bounded volume or closed environment of the sample being tested or judged by measurement.
Whilst a Geiger-Müller (GM) tube can register some isotope decay, it does not measure it all from the nominally closed environment of an isotope sample, and it can and does produce false readings from other sources in its proximity, including that supernova just down the road a few thousand light years back.
But have a think about how a Random Bit Generator (RBG) actually works. It is disinterested in a singular isotope decay, because that is nothing more than a point in time that is effectively dimensionless. The RBG actually works with multiple decays giving two or more points in time that can be measured. That is, some kind of time measurement is made, either by counts per minute or by time between counts from the GM tube (remember that a GM tube is bandwidth limited and has a recovery time as well as other issues, so the points it gives in time are actually not accurate but biased in some way).
As SpaceLifeForm pointed out succinctly the average effect of the counts changes very predictably with time. So predictably in fact scientists find it one of the most accurate ways to date closed environment objects.
Such predictability is almost the very definition of a "known bias" that some over-optimistically think can be countered or negated in some manner. But there is a difference between knowing a bias is in a measurement system and removing the bias, and therein lies the gap between theory and practice that hides a very large attack space, which may or may not matter depending on your application. For security it usually does matter to a very high degree (1 in 2^4096 for some Public Key crypto currently).
Which is why a bias of a very small amount on a 50:50 judgment normally goes unremarked in most measurement systems. But with each measurement judgment the bias accumulates, thus after just 50 judgments a constant bias of just 1% would probably have affected the result. But which one, if any, of those 50 judgments do you change to correct the bias?
You can not actually work it out; the best you can do is take many readings, average them out and then apply a fractional correction factor. But that does not solve the problem, in fact in some cases it makes the problem worse (law of small numbers applied to residues). But worse yet, the number of judgments you have to average to gain an improvement in accuracy goes up alarmingly when you are looking for entropy at 2^32 bits, and reaches impossible for 2^4096 bits for an 8K RSA Public Key.
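As a rough numerical illustration of how quickly that averaging cost grows (my own sketch with toy numbers, not a claim about any particular RBG): merely detecting a bias of size epsilon on a nominally 50:50 source takes on the order of 1/epsilon^2 samples, so every factor of ten reduction in the bias you are hunting costs roughly a factor of a hundred in samples.

import random

def samples_to_spot_bias(p, trials=10):
    """Crude estimate of how many bits it takes before a bias away from 0.5
    stands out above roughly 3 sigma of sampling noise, averaged over trials."""
    needed = []
    for _ in range(trials):
        ones = n = 0
        while True:
            ones += random.random() < p
            n += 1
            if n >= 1000 and abs(ones / n - 0.5) > 3 * (0.25 / n) ** 0.5:
                needed.append(n)
                break
    return sum(needed) / trials

print(samples_to_spot_bias(0.51))   # on the order of tens of thousands of bits
# samples_to_spot_bias(0.501) needs roughly a hundred times more, and so on.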
But also, all the while you will still have bias you cannot correct on an RBG no matter how many judgments you make. Because the reality is you have just as many issues measuring the bias itself. Which makes the user of the RBG "time constrained". Made worse by the fact few if any RBGs people will use have continuous test / evaluate / update loops running. In fact in the case of IC based RBGs that is deliberately prevented, for reasons the chip manufacturers like Intel refuse to explain.
But an attacker has an advantage over the RBG user on this score, as they are not as time constrained. If the user takes more than one output from the RBG then an attacker can average those outputs. Thus the attacker has the advantage of being able to average out an error function over a much longer period in time.
Whilst there are ways for an RBG designer to push the attack further out in time (look up "dithering" for one example), if this is done with either a deterministic or a chaotic signal then an attacker can still average it out, thus determine what it is, synchronize to it and strip it off. Most importantly the attacker can then wind it back in time, which if they have a 'Collect it all' policy in place means your past messages etc become vulnerable. Thus the signal you use to dither, or any other obscuring method, also has to be 'truly' random.
Thus you get into an almost circular argument, and the only solution is to go with a "turtles all the way down" method of using multiple independent true RBGs and keeping your fingers crossed about certain types of measurement bias. Which brings us back to the issue of 'independence'. That is, how do you ensure your sources and the judgment systems attached to them are not in some way synchronized under any and all conditions?
That's why it's easy to design a "noddy TRNG" with electronics from scrap devices, harder to produce one for maths simulation etc, but very hard to design a TRNG of sufficient quality for today's high end crypto and communications needs, especially where high volume traffic, thus high usage of good RBG distribution, is to be expected.
Anders December 1, 2020 3:24 PM
MarkH December 1, 2020 6:52 PM
@Clive, xcv, SpaceLifeForm et al:
As I already wrote a couple of times, I’m trying to reason these questions through methodically step-by-step.
Clive's response a couple of comments above covers quite a lot of ground, using a very broad brush (sorry for the combination of metaphors). If anyone wants to point out specific errors in any assertion I made above (in the comments titled "radioactive decay TRNG", in all caps), that would be super helpful!
Rather than the broad brush, I've been trying to use the point of a needle: to be as precise and specific as possible, in the hope of avoiding errors in fact and logic.
So Clive, if you will kindly quote one or more statements from one of my preceding comments, and point out the errors, I should be most appreciative!
While I await a specific correction, I want to focus in on the meaning of “deterministic.”
As I mentioned before, my understanding is that in general physical processes are some hybrid of deterministic and non-deterministic, whereas spontaneous nuclear decay (and in particular, the alpha decay from Americium 241) is purely non-deterministic.
Here are examples of phenomena which are largely deterministic:
• the roll of a die
• the motion of the “goo” in a lava lamp
• a coin toss onto a table
• Brownian motion
If my understanding of the quantum nature of the universe is correct, then fully deterministic processes are somewhere between rare and non-existent.
Probably it’s not difficult to work out why the first three are deterministic, and why their outcomes (including the state of the lava lamp within some limited time range) could be predicted far better than by chance.
Brownian motion is more subtle, but I start with the trivial prediction that in one second, the speck will be very near where it is now! Much more significantly, there’s no practical way to predict which direction it will move in the next second.
But in principle, this could be done. The speck’s motion is essentially the result of “billiard ball” collisions with molecules of the fluid in which it is immersed. Measuring the position and momentum of every molecule within a few millimeters of the speck would make it possible to predict — better than by chance — which direction it would move in the next second, a la Laplace.
Of course, we’ve never found “Laplace’s demon,” and the Brownian forecast can’t be done with any known technology. Nonetheless, the universe contains information by which the motion could, in principle, be predicted.
Nuclear decay is really, really different from that. Per Wikipedia, "According to quantum theory, it is impossible to predict when a particular atom will decay." Even if you had "God's microscope", and could precisely measure every particle inside an Am-241 nucleus, there is no observation you could make, as a snapshot, as a time series, or even 10^-21 seconds before the event, that would enable any prediction of the time of its decay better than by chance.
The universe contains no information by which the moment of decay could be predicted.
Nuclear decay is such an exotic phenomenon, that it’s difficult to relate to the sorts of things we are accustomed to reason about.
SpaceLifeForm wrote, “the Universe must temporarily lend some energy to an atom that is unstable in order for it to decay,” and Clive wrote of “isotope decay” in which “energy is added to the particle in some way from the environment it is in.”
If quantum mechanics is correct, and I correctly understand its conclusions concerning spontaneous alpha decay, then (a) no external energy is required for decay to occur, and (b) there is no external event of any kind which triggers the decay.
In some exotic cases, environmental factors can modify the half-life of decay, but even these don’t determine the moment of decay: they shift the probability per unit time. [In any case, such conditions won’t occur in any home-made TRNG.]
The decay of each Am-241 nucleus is the conclusion of a conversation it has with itself, not with the rest of the universe.
A little poetically, you could imagine that inside each unstable nucleus someone is rolling an astronomical collection of dice at an incomprehensibly high rate; the first time they come up all ones — whether that’s a picosecond from now, or one thousand years hence — the alpha particle tunnels out.
Clive Robinson December 1, 2020 7:08 PM
@ Anders, ALL,
With regards to Circle-NSO there's not very much to say, other than that their Op-Sec suffers from the "big civil organisation issue": as a civilian organisation, what leverage they have on employees via punitive action is limited to "civil action", and the larger the organisation the less effective it becomes.
Whilst that can be scary nasty in some countries (authoritarian), and bad in others (the US being one), in other nations (several European) employees have better legal protections, thus stronger rights with respect to employers. The result is employees tend to put "getting the job done" over "organisational Op-Sec". Because "job success" is short term and gets promotions and bonuses, whilst Org Op-Sec is a long term issue and who did what on whose orders is quickly lost in time.
But the big issue is "Signalling System Seven" (SS7). It's an international standard of long standing, both of which give it enormous inertia, and like the world's largest ships it's difficult to stop and hard to turn, and few want to try doing either.
The drafters of SS7 were more than aware of the inertia effect, so they limited its scope to "core functionality" to ensure the biggest likelihood of minimal-problem interoperability over extended ranges of equipment with quarter-century or more nominal working lives.
Which means that they left many things out which today we think otherwise about.
I'm not going to go into great technical detail for a couple of reasons. Firstly it's dull and a 20,000 ft view suffices, but secondly I don't want this comment to be complained about and moderated out of sight.
So consider SS7 as one of several layers in a communications stack. That is, it has layers both below and above it. Below would be transportation, including traffic routing and the likes of encryption and error correction. Above it would be the likes of authentication and functional routing, and the required functionality to do "service billing" on. The thing about "service billing" is that information delivery and accuracy are given way higher importance than nearly all else as "It's revenue collection".
Thus SS7 is effectively an unauthenticated plain text protocol. Which originally did not matter in the slightest. Because the signal was "circuit switched" or carried on dedicated bearer lines, it was effectively "physically authenticated", and as circuit switched and dedicated bearer lines were effectively private, nothing more than plaintext was required. Anything more was also effectively undesirable, as it would significantly increase complexity, which would require more, better qualified people, thus considerably adding to sunk costs.
What has changed to make SS7 insecure is that rather than sitting on point to point circuit switched or dedicated bearer lines, it's now on packet switched multi-access networks of such size that they might as well be considered public. Which was something the SS7 designers did not consider too much, as it was not really envisioned…
Now however a lot of SS7 travels across the likes of TCP/IP on networks that have gateways out to the rest of the world…
Thus while SS7 still gives data "integrity", it now needs additional stack layers below to give "confidentiality" and layers above to "authenticate" each transaction. These have not happened for various reasons. But the overriding one is to minimise the cost of billing to maximise profit, by keeping the non-profitable cost of security as low as possible.
The actual take away lesson is to realise that changing the basics such as chosen carrier physical layer methods need to be considered from a security perspective.
Which with SS7 over the past couple of decades has not been considered, especially when selling access to SS7 is so profitable…
name.withheld.for.obvious.reasons December 1, 2020 10:39 PM
@ Clive
I believe EFF got its start out of the legal entanglement that resulted from the absconded SS7 manual from an AT&T network access provider hub. It was the first time the FBI became seriously involved in such an investigation, and it unwittingly stood with AT&T's assertion as to the value involved in the acquisition. I believe it was a downloaded electronic copy, thus any physical valuation is questionable. Those accused, LoD hackers, were unduly held to account under harsh penalties in order to convey the message that hacking (they didn't use the term cracking) was to be treated differently.
ifb December 1, 2020 10:55 PM
@Clive Robinson
Presumably employees of any “big civil organization” receive positive rewards for their contributions in the form of wages, salaries, bonuses, benefits.
There’s a pernicious problem of indebtedness, however, in that employees become “locked in” to the job or position, with a family and a 30-year home mortgage — and all available positive rewards from work are spent paying bills: the positive becomes neutral, and the neutral becomes negative, because of an upward-ratcheting cost of living that consumes all available income for the employee’s family.
Clive Robinson December 2, 2020 1:18 AM
@ ifb,
There’s a pernicious problem of indebtedness…
Yes, and it's one that some employers try to exploit, occasionally viciously.
I might be unusual, but I was aware of the issue long before I had my first full-time job. So as soon as I started working I built up savings that amounted to "six months drop dead and walk away money", or if you prefer "six months seed money" to start my own business. Even when I purchased my first property I maintained the "walk away money" to include the payments. Also, I do not borrow money, I save; the mortgage was the only loan I've ever had as an individual, and it was an investment.
One foolish managing director tried to exploit the "indebtedness" he incorrectly assumed I had (I was young and owned a house, so he assumed there would be leverage there, I guess). I disabused him of that assumption rather rapidly and would have seen him in court. However, the Chairman of the Company, and I believe the majority shareholder at the time, decided a messy trial was not what the company needed. I got a couple of months' extra pay as a "bonus payment" (tax reasons). By the time he made that decision I had already found another job that was better paying and much less stressful, so it got added to the fund.
Thinking about what had happened, I realised from that incident that a thoughtful employee could reverse the employer-employee relationship if needed; you just needed your own leverage (the old "walk softly and carry a big stick" thinking).
Some time later I was working at a large, very pleasant organisation that unfortunately got taken over, and the new owners were rapidly developing bad employer-employee relations. So I decided it was time to find another job, which I fairly easily did. However, the potential new employer wanted me to start as soon as possible, which normally would not have been a problem. However, they could not wait out the three-month notice period that had just been put in place by my soon-to-be old employer, who was using the courts to try and enforce the period on people who were already jumping ship…
So I engineered a dispute, and a legal friend drew up a 14-day notice of intention to start proceedings for "judicial review". The employer's legal team must have told them what their chances were, so I got "instant dismissal". But they found that was not a good idea either, because they got another 14-day letter for "breach of contract". Their lawyers again must have advised them of their chances, and I got three months' wages plus an "undisclosed sum" and, importantly, a letter saying I'd been made redundant, along with a signed legal agreement that redundancy was what was to be put in my work record and given to any future enquiry. The real prize was keeping my Intellectual Property. It was not long thereafter that the organisation "imploded", so I was possibly the only one to walk away better off.
Before you get the wrong idea, I was usually an ideal employee, because I only worked where I was going to be happy, with interesting and often challenging work to do. It was the companies "going bad" for various reasons. Take the organisation that imploded, where I got out with my IP: it was really a very nice place to work before it got taken over and "New Management" tried to make it hell, as only "asset stripper Venture Crapatalists" have a habit of doing. As for the earlier job, well, it was the managing director making promises not just to shareholders but to government agencies that supplied funding with claw-back clauses that caused the issue. He had made promises neither he nor the other employees could deliver on. I delivered, but the price was too high (moon shots usually are). So, in return for getting his butt out of the hot seat, he decided it had to be my fault he sat in it in the first place… even though he was sitting sizzling when I started working there, and it was only the fact he was in trouble that made him take me on. When people get that kind of mind fix, the best place to be is somewhere else with a nice safety gap in between.
At the end of the day, the leverage only arises from indebtedness, if the employee gets into that position. Whilst some people are so poorly paid they have to work hand to mouth, that's generally not the case for the "aspirational classes": they spend until they are a month or more in debt on credit cards, personal loans and the like, when there is no real reason to. What they do not think about is that the "month in debt" is costing them another "month in interest" every year. So in effect they are working 12 months but only seeing the benefit of 10. In the main they are kidding themselves that they are being financially creative. The point is they are not investing in assets that usually appreciate in fiscal value, like a house, but in "life-style status".
Way back in the early part of my career I got to know a consultant who had what are sometimes called "bankable skills"; he was earning three times what I was for doing nearly the same job. We got chatting about what he did with the extra money. It actually wasn't much. Yes, he had a nicer house, but the bulk of the money went on a better cut of meat for Sunday lunch, going out socialising in better places, better shoes, clothes and holidays, a nice car, and a few other comforts in life like having "Harrods toilet paper/tissue".
We kept in touch afterwards, and both of us got to thinking about the conversation; as a result we both put money into a joint venture that was quickly paying dividends and building up. His wife did the admin and, as the kids got older, came to work full time, and we soon had a couple of employees. Sadly he had a fatal road accident. I did not need the extra income, but for his family it was all they had coming in, so I sold his children my shares for a pound rather than dissolve the business. His wife made a go of it and the children are still running it today, and they now employ quite a few local people. They send me an Xmas card each year, and I visit and give a "founders talk" to the apprentices they take on. So yes, I'm more than happy that I did it and I hope the business goes on long after I'm gone. I guess from the satisfaction viewpoint it's one of the best investments I've made, as it's now given quite a few young people a future. I was starting to do similar with an old school friend, but sadly he died unexpectedly earlier this year.
SpaceLifeForm December 2, 2020 1:28 AM
@ MarkH, Clive
Therein lies your mistake.
It is NOT possible to measure the position and momentum of a molecule simultaneously. This applies even to an electron, or a quark.
Heisenberg’s uncertainty principle is in play.
The role of ‘observation’ is in play.
And, of course the Schrödinger equation.
I recommend not to study the latter too much.
FA December 2, 2020 7:05 AM
@Clive, MarkH
To reduce bias there is no need to measure it.
A simple example. Assume you have a random bit generator with P(0) = 0.6 and P(1) = 0.4, clearly biased.
Now if you take the parity of 8 such bits (using 8 new bits each time of course), the probabilities become P(0) = 0.50000128 and P(1) = 0.49999872.
With these probabilities, the entropy per bit is better than 0.99999999999 (eleven nines).
This procedure does not depend on knowing the bias of the original generator.
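For anyone who wants to check those numbers, here is a minimal Python sketch (mine, not FA's); it assumes independent bits and uses the standard parity identity P(odd number of ones) = (1 − (1 − 2p)^n) / 2.

```python
# Sketch (illustrative, not from the comment): XOR-fold 8 biased bits into
# one output bit, with P(0) = 0.6 and P(1) = 0.4 as in the example above.
from math import log2

def parity_bias(p1: float, n: int) -> float:
    """P(parity of n independent bits is 1), given P(bit = 1) = p1."""
    return (1.0 - (1.0 - 2.0 * p1) ** n) / 2.0

def entropy(p1: float) -> float:
    """Shannon entropy in bits of a binary source with P(1) = p1."""
    p0 = 1.0 - p1
    return -(p0 * log2(p0) + p1 * log2(p1))

p1 = parity_bias(0.4, 8)
print(1.0 - p1, p1)      # ~0.50000128 and ~0.49999872, as quoted above
print(entropy(p1))       # > 0.99999999999 bits per output bit (eleven nines)
```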
Now if the original generator is not just biased but also has some periodic defect (i.e. nonzero autocorrelation), that problem will remain – periodicity and bias are not the same thing.
But there is no periodicity in the time between nuclear decay events.
Define a signal S(t) as a Dirac pulse at each such event. The power spectrum of S(t) is perfectly white, without any discrete frequencies. Sampling another signal (e.g. a sawtooth) at the event times is equivalent to multiplying that signal by S(t), so there will not be any discrete frequency in the result.
Clive Robinson December 2, 2020 8:14 AM
@ MarkH, SpaceLifeForm,
And you’ve started in the wrong place.
As @SpaceLifeForm and I have told you, looking at individual decays is not telling you anything about True Random Bit Generators (TRBGs) that use isotope decay or any other singular event.
A singular event gives you a point in time, and you effectively gain nothing by measuring it, no matter how accurately you try.
It's not how TRBGs work; they measure "relative time" by using multiple events. There are two basic methods,
1, Count the events in a given time period.
2, Count the time between two or more events.
Thus talking about whether a singular event can be determined either now or in the future, and whether it takes energy from the environment or not, gains you no insight into the bias problems and where they come from. But remember there is an issue you first have to understand, which is the Quantum Zeno Effect or Turing Paradox. I'll let you look it up, but to whet your appetite, simplistically it says that a quantum state change cannot happen at the point of measurement; thus if you rapidly measure a quantum effect you can in theory stop its transition… Whilst Turing was perhaps the first to talk about this big hole in quantum theory, it was not really taken seriously until this century, when experimentation had reached the point where it could start to be investigated in more interesting ways…
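To make the two basic methods listed above concrete, here is a rough sketch of my own, not any particular product's design. It assumes the detector output is a Poisson stream (exponentially distributed gaps); the rate, window, and gap-comparison details are illustrative assumptions only, and comparing two gaps is just one common way of realising method 2.

```python
# Sketch (illustrative assumptions throughout): two event-based TRBG methods,
# driven by a simulated Poisson source standing in for a decay detector.
import random

RATE = 1000.0   # assumed mean detector events per second

def next_gap() -> float:
    """Time to the next event; exponential for a Poisson source."""
    return random.expovariate(RATE)

def bits_by_counting(window: float, n_bits: int) -> list:
    """Method 1: count events per fixed window, output the count's LSB."""
    out = []
    for _ in range(n_bits):
        t, count = 0.0, 0
        while True:
            t += next_gap()
            if t > window:
                break
            count += 1
        out.append(count & 1)
    return out

def bits_by_timing(n_bits: int) -> list:
    """Method 2: compare the times between successive pairs of events."""
    out = []
    for _ in range(n_bits):
        g1, g2 = next_gap(), next_gap()
        out.append(1 if g1 > g2 else 0)   # ties have probability ~0
    return out

print(sum(bits_by_counting(0.01, 10000)))   # roughly 5000 if near-unbiased
print(sum(bits_by_timing(10000)))           # roughly 5000 if near-unbiased
```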
But to humour you a little about isotope decay,
1, One of the basic dictates of science is “For work to be done energy has to be expended”
2, Also, energy/mass cannot be created from nothing after the big bang event had settled.
3, Energy/matter move from the organised to the disorganized state (entropy)
So far, as far as I'm aware, all practical tests, including triggering an isotope to decay, have confirmed the above.
Thus the idea of spontaneous decay without energy input is going out on a limb rather more than a bit, and taking a chainsaw with you. Theory, as they say, is not the same as practice; in fact theory, especially quantum theory, is part of the "shut up and calculate" school, the results of which need to be tested for the mathematical models to move from being elegant formulations to proven predictive tools. Repeatable real-world experiments are in effect not just the "gold standard" but "the only standard".
The argument for how alpha decay crosses a potential barrier is quantum tunneling of the particles. Put simply, quantum mechanics indicates that these small particles can, with a very small probability, tunnel from one side of a potential barrier to the other. The way these particles cross the potential barrier is interesting. The argument is that the particle, in effect, borrows energy from its surroundings to cross the potential barrier. It then immediately gives the energy back by making the electrons reflected by the potential barrier more energetic than they would otherwise have been. Thus the three dictates mentioned above hold.
Is this actually what happens? Who knows, but "the accounting balances" the way described. Hence the point both @SpaceLifeForm and I were making.
But that still leaves the question unanswered as to what triggered the tunneling event. Part of the argument is that alpha particles are at best only very loosely contained within the isotope nucleus and bounce back and forth against the potential barrier around 10^21 times a second, and even with the heaviest of isotopes, where the containment is very weak, it could still take several billion years of this bouncing around before an alpha particle tunnels its way out.
But it still does not explain what caused it to cross the tipping point and be able to tunnel. Is it an effect of the alpha particle, or the barrier, or possibly something we do not yet know about (quantum mechanics is known to still be incomplete)?
Anyway, enough for now.
vas pup December 2, 2020 4:05 PM
Web Summit: Oculus co-founder talks China and military AI
“Virtual reality firm Oculus VR’s co-founder has accused other tech chiefs of refusing to work with the US military for fear of alienating China.
In a virtual chat at Web Summit, Palmer Luckey said US technology companies had “always worked” with the military in the past, claiming a recent change of heart had been caused by their deepening relationships with China.
During a talk at Web Summit, which is online this year, he challenged the notion that tech firms were refusing military contracts because of staff’s ethical objections.
“A lot of companies have financial and PR incentives to stay out of military work, so they’re happy to use these employees as a scapegoat to say ‘we’re listening to our employees’, which contributes to this idea that workers of Silicon Valley and other tech hubs are universally opposed to this idea,” he said.
“It is in the interest of a lot of these tech companies to kind of pretend to be these extra-national international corporations that are bound to no nation.
“You can disagree on how dominant of a factor it is, It’s a factor, though, and it’s the one that doesn’t get discussed.”
China has done an incredible job of using the blocking of access to their markets as a tool to get the culture of western democracies to subvert itself to China,” he said.
“They don’t have to come after us militarily. They don’t have to cut our networks. All they have to do is invest in our companies, do partnerships with our companies… and then everybody bends over for them.”
When asked about the use of AI in the military, Mr Luckey said:
===>”It’s not a good idea to outsource life and death decisions to a machine. You can’t court-martial a machine. You can’t imprison a computer for war crimes.”
Instead, he said,
===>the aim should be to use AI to “sort through large amounts of information, but not make those lethal decisions without a person very explicitly looking at the data and making the call.
“I think that that’s a pretty good line to draw, and something we’ll have to enforce against our political adversaries.”
vas pup December 2, 2020 4:20 PM
Microsoft files patent to record and score meetings on body language
“Technology giant Microsoft has filed a patent for a system to monitor employees’ body language and facial expressions during work meetings and give the events a “quality score”.
A filing suggests it could be deployed in real-world meetings or online virtual get-togethers.
==>It envisions rooms being packed with sensors to monitor the participants, which could raise privacy concerns.
Microsoft is already under fire over a separate “productivity-score” tool.
Companies do not always make use of patents they register.
But they often reveal ideas in development before they appear in commercial products.
Details of the “meeting-insight computing system” were filed in July, ahead of being made public this month.
They say the sensors could record:
which invitees actually attend a meeting
=>attendees' body language and facial expressions
the amount of time each participant spent contributing to the meeting
=>speech patterns "consistent with boredom [and] fatigue"
They also suggest employees’ mobile devices could be used to monitor whether they were simultaneously engaged in other tasks – such as texting or browsing the internet – as well as to check their schedule to take into account whether they had had to attend other meetings the same day.
All that information would then be combined with other factors, such as “how efficient the meeting was, an emotional sentiment expressed by meeting participants, [and] how comfortable the meeting environment was”, into an “overall quality score”, Microsoft says.”
My nickel: all such surveillance technology not directly serving a security purpose is just a manifestation of childhood voyeurism in the adulthood of the developers and users of such technology.
vas pup December 2, 2020 4:48 PM
Europe’s role in China’s Chang’e 5 moon rock mission
“Estrack, a network of ground stations run by ESA, is tracking China’s lunar “sample return” spacecraft. It aims to bring moon rock back to Earth by mid-December. It could be a step towards more regular moon flights.
Helping it along the way is the European Space Agency’s tracking network, Estrack — a global network of ground stations that is run by the European Space Operations Centre (ESOC) in Darmstadt, Germany.
Europe is keen to work with China on robotic missions and human spaceflight.
“We’re now in the most critical phase of the mission,” says Pier Bargellini, who heads ESA’s Ground Facilities Operations Division.
Once the samples have been collected, a return vehicle will launch from the lunar surface and then perform an automatic docking with an orbiting vehicle before it all returns to Earth.
“And we’re supporting this phase with a large antenna we have in Malargüe, Argentina,” says Bargellini.
=====>!!!”The Chinese have their own antenna in Argentina, so we’re providing back up, so we don’t lose any data if there are problems.”
That’s important to ensure all those critical steps are performed as planned. After that, a Spanish antenna at Maspalomas in the Canary Islands will track the sample return vehicle as it reenters Earth to land north of China’s Inner Mongolia.
“You want to track the return vehicle as much as you can to know its trajectory exactly and ensure it reenters at the right place,” says Bargellini.
ESA has supported a number of China’s Chang’e missions. But Chang’e 5 is, by CNSA own admission, China’s “most complex space mission ever.”
Cassandra December 2, 2020 5:43 PM
@Clive Robinson
Thank you very much for taking the time to read the paper I suggested. I appreciate the effort greatly.
Unfortunately, I have had to deal with a domestic embuggerance, and have a backlog of other work to deal with, so I’ve not been as quick as politeness would prescribe to acknowledge your freely given work. Please accept my apologies.
I have every sympathy with old injuries. I managed to aggravate one of mine by shutting a car door on my leg, which led to some blood and an infection that reminded me of how long it took to heal the first time. My gait is much like the late John Thaw’s, but for a different reason. I hope your foot improves.
The paper links to a paper on the ‘Intel Random Number Generator’ from April 1999. I don’t know if the same design continues to be used, but the paper was an interesting read for me. It is interesting to identify the important things not being said, otherwise one could get the impression that everything is hunky-dory.
Our host is quoted:
Bruce Schneier writes, “Good random-number generators are hard to design, because their security often depends on the particulars of the hardware and software. Many products we examine use bad ones.” (Schneier, B., “Security Pitfalls in Cryptography,”Counterpane Systems, 1998.)
I think one of the problems with the discussion around radioactive decay is the misconception that if you look closely enough at something, you can find the root cause. Einstein famously said that 'God does not play dice [with the universe]', and supported the idea of 'hidden variable' theory ( hxxps:// ). Current consensus amongst physicists is that there are no hidden variables: the Universe behaves in line with the probabilistic quantum-mechanical model. There is, of course, a minority who disagree and gamely come up with interesting other theories, but none are currently accepted in the mainstream. The end result is that you can make accurate predictions about the behaviour of particles in aggregate, and say that, on average, a certain number will decay in a certain time period: but you cannot point at an individual atom and predict when it will decay*. Some people find this frustrating and 'unthinkable' because they are stuck with trying to apply an inappropriate classical model where it is not valid.
We end up at quantum metaphysics, at which point it might be worth reading ‘Quantum Ontology: A Guide to the Metaphysics of Quantum Mechanics’ by Peter J Lewis (It is reviewed here: hxxps:// ). People are still arguing passionately about what Quantum Mechanics means, and what the correct interpretation is. It may well be that JBS Haldane was right: “…my own suspicion is that the Universe is not only queerer than we suppose, but queerer than we can suppose.” Quantum theory allows the elegant and surprising Elitzur–Vaidman bomb tester ( )
*Note, it might be possible to induce an atom to decay at a chosen time. Nuclei that decay with the emission of gamma radiation can be induced to decay by directing gamma-rays of suitable wavelength at them: which is stimulated decay. It might also be possible to influence some forms of beta-decays with external magnetic fields. In both cases, this would be stimulated decay, not spontaneous decay.
MarkH December 2, 2020 9:01 PM
you’ve started in the wrong place
There’s an old American joke with the punchline, “if I was a-goin’ there, I wouldn’t start from here.” I’ll respond in two ways.
First, if I already have a conclusion in mind, certain premises may seem unworthy of investigation. If the conclusion is open to question (and my purpose is to investigate it), then such “pruning” of start points can form a mental trap from which there’s no escape. First principles offer liberation from this kind of mental cul de sac.
Second, if I study how minute droplets form around dust particles in saturated air at high altitudes, gradually coalesce to form larger and larger droplets, descend as they grow more massive, start to cool by surface evaporation, assume a "tear drop" shape as they approach their terminal velocities, and gather electric charge from air friction along the way … that will leave a very great deal I won't understand about thunderstorms. However, it will give a magnificent starting point from which to develop an understanding of thunderstorms.
You wrote that the two basic methods for TRBGs are:
Count the events in a given time period.
Count the time between two or more events.
Probably you have a lot more experience with such gadgets than anybody who participates here; I trust that this is what you have seen.
To my mind, (a) those are not the only available methods, and (b) if a TRNG applies either of these methods to the detection of radioactive decay events, then the designer didn’t adequately understand the problem s/he was trying to solve.
It does illuminate why you’ve written several times about bias, because both methods yield highly biased data which must then be post-processed in an attempt to increase the entropy per output bit.
With respect to how decay works, I won’t belabor the argument about the underlying physical theory, but instead offer three observations:
First, physics tells us that nucleons (and indeed the cloud of electrons, if any) are incessantly moving. If they could stop, the atom would cease to be. Atoms are inherently dynamic, without need for an external supply of energy.
Second, the energy of fission products (such as the alpha particle from a dying Am 241 nucleus) needs no tricky explanation: objects of like charge repel one another! Once the new helium nucleus is free, it flies.
Third, the essential concept (for the purposes of random generation) is that each decay is an absolutely independent event. It is not triggered by any influence from outside the nucleus.
[Note: nuclei under bombardment by high-energy particles indeed behave differently, but that has nothing to do with the stream of alpha particles maintaining steady standby currents in countless millions of smoke detectors.]
As ugly / baffling / counterintuitive as quantum theory seems to many (like me, for instance), it is very frequently tested in the laboratory. Its basic predictions are confirmed over and over and over.
It continues to amaze me that mesons as the “carrier particles” for the nuclear binding force, and quantum tunneling as an explanation for alpha decay, were both theorized in the mid 1930s … while humanity was in the grip of a global economic depression and my parents were still in primary school, and countless inventions which today seem indispensable hadn’t yet been dreamed of …
It took a few years to confirm those ideas in the laboratory. They’ve held up solid for three generations.
If you think that alpha decays are not independent — that somehow they can be triggered by external influences in ordinary environments — then let’s design an experiment to confirm it. Repeatable lab results to this effect would require a radical rethinking of the foundations of quantum mechanics, and give us an excellent shot at the Nobel prize, with awards currently running at more than USD 1 million. A share of that would be a great help to my personal finances.
SpaceLifeForm December 2, 2020 9:07 PM
@ Clive, name
I believe the original impetus for the development of SS7 was the phone phreakers.
Draper learned that a toy whistle packaged in boxes of Cap’n Crunch cereal emitted a tone at precisely 2600 hertz—the same frequency that AT&T long lines used to indicate that a trunk line was available for routing a new call.[10] The tone disconnected one end of the trunk while the still-connected side entered an operator mode. The vulnerability they had exploited was limited to call-routing switches that relied on in-band signaling. After 1980 and the introduction of Signalling System No. 7 most U.S. phone lines relied almost exclusively on out-of-band signaling.
SpaceLifeForm December 2, 2020 9:16 PM
@ k15
Address it to @Moderator and provide the comment-id, and what is bad.
You can find the comment-id by mousing over the timestamp of the post.
It’s rare they last long, so unless one remains after a couple of days, I would not bother to point it out. The obvious ones will get zapped, but not necessarily immediately. Some people actually do try to get some sleep.
xcv December 2, 2020 9:16 PM
stimulated decay, [vs.] spontaneous decay.
I would not tend to classify all nuclear reactions as "decay" except in cases where the substance is at rest and in thermodynamic equilibrium with its environment, and atomic nuclei spontaneously split apart.
People really cannot expect the electrons of atoms to take part in chemical reactions, conduction of electric current, and so forth, without inducing other reactions of the protons and neutrons of the same atom.
Some atoms might have stable nuclei as neutral atoms, but the same nuclei might become unstable in certain ionized states, and react in ways that emit ionizing radiation.
To call a nuclear reaction a “decay” is not technically wrong of course, because it is an acknowledgment of the Second Law of Thermodynamics that the sum total entropy (or disorder) of the universe cannot be decreased, but only increased by the reaction, however it takes place.
Clive Robinson December 3, 2020 4:40 AM
@ Cassandra,
Unfortunately, I have had to deal with a domestic embuggerance…
I'm sorry to hear that; hopefully it is both minor and resolvable. As for apologizing, that's kind of you but not necessary, we all get bumps in the road of life every so often. With luck we mostly avoid them, but these days I appear to more than stub my toe on them; guess I need better glasses or something (not that I can find them[1]).
With regard to the overly dramatically named[2] "Elitzur–Vaidman bomb tester", it actually has some interesting properties over and above its elegance. Think of it as an "observation sensor"; that is, it detects the collapse of the superposition at the point of measurement, which has implications for examining the Turing Paradox as well as real-world sensors.
Speaking of point-of-measurement sensors, on a more down-to-earth note you mention the infamous Intel paper… What it describes has been described by others as the "Roulette Wheel" or "Waggon Wheel" TRNG, as it is in reality a "stroboscopic sensor", just like the film gate in old movies where the waggon wheels appear slow, stationary or turning backwards on the screen. Thus it likewise has problems. Not that Intel invented this type of TRNG, though the paper does kind of make you feel like they were implying it; it has been independently invented by quite a few engineers over the years, myself included. As far as I know, back a decade and a half or more before the Intel paper, in the very early 1980s[3], I was the first to use a VCO as an analogue-to-digital converter in a TRNG. But these ideas have a habit of "coming of age" and thus occur to many people around the same time, and the use of a VCO as an A2D converter was not exactly new then; it goes back into the realms of early telephone research.
You note Intel label the parts in an over-encompassing way, almost as though they were writing a first-draft patent application. But as a result, as you note, there is quite a bit left unsaid.
Or as my father used to dryly observe,
“There is always dust under carpets, though some more than others”.
Such expansiveness by Intel makes for a very large carpet… an observation that, had I voiced it back then, would have caused "shock and awe", as Intel were, to use somebody else's observation, "everybody's darling" at the time. Now, after more recent events in their public history, I suspect many would just give a wry smile.
But to answer your question of,
I don’t know if the same design continues to be used
The answer is both yes and no. The VCO back four decades ago had "a lot of advantages", but its big disadvantage was that it was "an analog part", thus not really suitable for integration in digital chips. Also, thermal noise sources generate tiny signals, which can cause all sorts of power-supply and other noise issues and be susceptible to subtle feedback effects. So the idea of using "Ring Oscillators" to replace the thermal noise source and VCO came up. Back forty years ago, trying to use 7-10 74LS chips to make a suitable ring oscillator was way too much PCB real estate, slow, and caused power-supply issues. You would not use the 4000-series CMOS chips because they were "oh so slow". Now such ring oscillators are just a tiny, tiny fraction of chip real estate, so they get used in preference, especially as the devices that make the inverters have upper cut-off frequencies above 10GHz, meaning they can "toot" at frequencies that, with a few combined, will churn out bits in the tens of billions a second… So "quantity not quality" is the game, hence the need for the "magic pixie dust" entropy pools and crypto to hide it all behind. As in the Wizard of Oz, you are not supposed to look behind the curtain… Oh, and don't mention the "pendulum problem" of using multiple ring oscillators: having them sing harmoniously is not giving the discord the designers were hoping for.
But if you are not looking for buckets full of questionable bits of quite low “true entropy” but moderate quantities of quite high “true entropy” then a well designed and built “Roulette Wheel” TRBG will be a better bet.
Which brings us onto “what people are looking for”. As you note,
Conventional mechanics can only have "work to cross the tipping point", or as some call it the "entropy hump", or, as bombs have been mentioned, "going high order". There are reasons for this. Firstly, the tenets and axioms of classical physics demand it; secondly, there is "the need to be master of all you survey", which is in essence what most scientists are all about, though they call it the search for knowledge / understanding. It was, after all, the reason for the near schism a century ago when quantum mechanics popped its head up, and for Einstein's presumption on God's lifestyle.
You could say "the outcome was ordained", because actually humans cannot accept a deterministic universe where everything is "preordained". That would say, at the macro level, "there can be no accidents" and, more importantly, "there can be no criminals", because whilst crime would still happen, those committing it were preordained to do so and had no "free will", thus no choice; therefore justice and punishment would be of no use to prevent crime.
So we need the concept of choice, free will, and therefore the universe cannot be deterministic, thus we cannot be masters of all we survey. Thus quantum mechanics, or something similar, has to exist to give us not just choice but a way to deal with what we cannot know except through the veil of statistics and probability.
Speaking of accidents, I'm sorry to hear about your leg; such things happen or are "sent to try us", depending on your viewpoint. The sad fact, as my doctor used to point out before she retired, is that what appears to bounce off us when we are young tends to bite more with time.
I know in my head that, as entropy dictates, I am destined to get a little less robust with time; knowing this, however, does not really help me, and I suspect many others, be accepting of it. In my head I'm still twenty-something even though my mirror lies to me 😉
Thus “do as I say, but not as I do” and take care.
[1] I wear glasses because I have to, which is a problem because I have very wide-angle vision and can see around the edges, which causes headaches and nausea, or just "seasickness", in proportion to how much distortion I see. So I cannot wear vari-focals, and the glasses have to be large and as near rimless as possible. So not just hard to get at a reasonable price, but when you think about it, "hard to see" when put down, even with good eyesight. As my eyesight has got worse over the years, the need to take them off to do something and then put them on again to do something else has gone up, as I need to switch from close to more distant vision; the point at which I have to switch is about two hand-widths from my nose, which makes working at a desk darn awkward, as books and things I'm working on are in close, but all computer screens are distant… Anyway, I usually remember where I put my glasses down, but occasionally something will cause a sudden change and I will forget where… So imagine, if you will, a large amiable bear with a perplexed look on a face that has been likened to a Klingon, and that is thus as hirsute as Karl Marx, wandering around myopically trying to find its glasses. It is almost the definition of the blind leading the hapless, and even occasionally entertaining for onlookers. But I also get the awkward helpers… the ones who ask me what I am looking for, and I tell them, then they say something like "They are on the table right in front of you, can't you see them"… the implicit question actually being more a sardonic statement… to which the obvious reply of "No, that's why I'm looking for them" is akin to pouring oil on a smouldering fire: first it dampens the flame, then it fuels it vigorously, thus is unwise as a course of action. Just one of life's little lessons for the unwary 😉
[2] I blame two things for this over-dramatic behaviour: firstly, that man and his "damn cat", which has done more for common literature than physics 😉 And secondly, the fact that even at the best of times quantum mechanics is pages of dry maths and gnarly symbols that are less pleasing to the mind's eye than snakes crawling around the bottom of a bottomless pit you just know you are going to fall into if you carry on studying it 😉
[3] It came about as a project I did in electronic music, and I'd submitted the TRNG design to a well-known electronics magazine of the time for consideration for an article. They expressed thanks and some polite interest for a future article, but it never made it to print. I think I still have the correspondence in an old filing cabinet in the garage (if mice have not enhanced the entropy effect by nesting in it).
Clive Robinson December 3, 2020 6:30 AM
@ SpaceLifeForm,
So the legend says…
Unfortunately the truth is a little more prosaic.
You need to remember what was going on, on the other side of the puddle in certain European Nations.
Up at Dollis Hill in North London, and over at what is now called Adastral Park but started as RAF Martlesham Heath in Suffolk, is where the then "Post Office Research Laboratory" was based.
The ideas of Tommy Flowers and his colleagues, which had given the world its first electronic computer, were post-WWII being put to the problem of "Digital Telephony", and what was called "System X" was being developed, which gave rise to ISDN and all that followed.
The US incumbents had no reason to think about a digital future; the profits AT&T and similar were making were enormous even by monopolistic standards.
However, even they had a problem that they could not really solve. Mechanical switchgear was large, cumbersome and resource-expensive, especially in manpower. It basically had to go. A secondary but just as bad problem was that with analogue systems noise and distortion went up as the length of cable went up, and this likewise could not be solved economically.
So whilst AT&T in the US, like the General Post Office (GPO) in Britain and the German phone companies, had done research pre-WWII, the unreliable, expensive, low-frequency valve/tube electronics could not compete as digital systems.
WWII significantly changed the landscape when it came to valve/tube design and production. Post WWII there was a glut of manufacturing and the price was a very small fraction of prewar prices.
The trick Tommy Flowers had proved to increase reliability such that digital computing was economically possible, was one of those “secrets” that crossed the Atlantic with Linderman and his advisory team. It probably had a greater effect on the world back then and in the next couple of decades than the Boot and Randal cavity resonator magnetron, though most do not realise it.
Thus Digital Telephony was possible and actually desirable. The problem: "sunk cost investment returns". AT&T had the technology, but management had no financial reason to change.
In bombed-out Europe, however, the legacy infrastructure was effectively gone and replacing it was imperative, thus the now low cost of digital electronics was the preferred way to go.
The US had fallen behind and had to play catch-up. Thus there had to be more reason than just low-cost manufacturing and increased reliability to push management into what they saw as needless risk.
What really sold it was "new services": the work in Europe, especially in Britain, was showing all sorts of things were possible, but only with digital backbones etc.
So US management had already made the decision to make the change to digital and had started in on it. Touch-tone or DTMF dialing was what most remember, along with last-number redial etc. But there was a lot more behind it.
Capt. Crunch and friends came along at the end of the first transitions to digital, on the remnants of the analogue system. The fact it made "Pop Culture Fame" with 2400Hz whistles, which the cynical mind behind the hackzine 2400 effectively stole and glamorized, just gave the media free run. It's the same reason prosecutors talk of "Hacking" not "Cracking": the MSM were like a dog with a new bone and they were not going to bone up on the subject to get things correct; heck, they were nicotine-stained, booze-addled journalists of more than middle age, and they were not going to listen to the truth from a bunch of snotty teenagers…
As I occasionally point out, the oft incorrectly told story of "HRH Prince Philip's Electronic Mail Box" (on Prestel) getting "hacked" being conflated or confused with the BBC Micro Live ACN001 account (on BT_Gold) getting hacked tells me who is and who is not doing proper research. Because I have the advantage of having been involved one way or another quite intimately with both events, and more by luck and bloody-mindedness avoided being "entrapped" like Steve Gold (RIP) and Robert Schifreen were[1], and ending up making legal history and thus indirectly being responsible for some very bad legislation…
[1] I used to deliberately misspell Robert’s last name as “Schiffren” to see if it got picked up on by researchers, or non researchers, doing a “cut-n-paste” and guess what some of them “swallowed the hook”…
Clive Robinson December 3, 2020 10:32 AM
@ SpaceLifeForm, Winter,
Speaking of "Earth Like" planets and "Rare", do you remember the "WOW Signal" from the "Big Ears" radio telescope?
That 72 second potential SETI signal from four decades ago?
Well some Amateur Astronomer thinks he has worked out where it might have come from…
With an exciting name like 2MASS I can hardly raise any enthusiasm for it. That said, some people are "treating it like gospel", others are just "sitting on the fence", whilst others are saying the analysis is at fault etc, etc, etc.
Personally I’ve no skin in the game, but I think it unlikely simply based on probability. But hey jazz it up a bit “rub some funk on it” and splash some “selenium shampoo” on it[1] and I could be persuaded to put some popcorn on for the fun of it 😉
[1] If you’ve not seen Evolution then watch it, it’s got a better than average shot at making you laugh. Just a little taster,
JonKnowsNothing December 3, 2020 10:55 AM
Interesting MSM report about a database category design problem. It seems that a police database in New Zealand has undergone an audit that showed a certain category of “crimes” are entered incorrectly.
43% of hate crime complaints have been downgraded
because most police personnel do not know how to enter them into their database.
Interesting UI design failure. Wonder what else gets miskeyed?
ht tps://
(url fractured to prevent autorun)
SpaceLifeForm December 3, 2020 11:34 AM
Video of the Arecibo collapse.
T4 is visible; its top breaks off backwards.
You can see the top of T12 fall into the camera view.
MarkH December 3, 2020 12:00 PM
Re Phone Phreaking:
The frequency was 2600 Hz, not 2400.
Semi-Disclosure: the statute of limitations has long elapsed on any unlawful telephone activities in which my youthful incarnation might or might not have been involved …
SpaceLifeForm December 3, 2020 12:02 PM
Arecibo videos from NSF.
The latest contains a second video from drone at the time.
I now see where the corrosion was: At top of tower.
You see not a cable breaking, but pulling away from the tower.
SpaceLifeForm December 3, 2020 2:24 PM
Arecibo slo-mo.
I was wondering why the equipment part of the rig fell instead of riding along with the triangle into the rock face.
It looks like it just twisted out of the circular track.
Maybe, if the arm had been aligned parallel to the T8-T12 side of the triangle, it may have ridden into the rock face with the triangle. Would not matter of course.
SpaceLifeForm December 3, 2020 2:49 PM
Will not be easy, but with modern materials, could be better.
Clive Robinson December 3, 2020 3:51 PM
@ SpaceLifeForm,
It could, but at the end of the day, "who gets to build it"? It would be nice if it could be all, or as much as possible, "local labour". But I'm guessing in today's world it would be contracted out to some company that will import the labour from elsewhere.
There is even the possibility it could be Chinese Labour at engineer and above level, the way things work these days…
But I guess the real question will be “utility” it won’t be cheap and astronomy rarely puts money in the bank. Thus the required ROI will be arguable, and there’s a lot of hands held out for research grants that do have a better chance of putting money in the bank.
The simple fact is it does not matter what your political stripe, purse strings are going to be pulled tight on science especially the long term sciences that are not expected to pay short term dividends.
Thus I would expect that sort of level of funding to go into "energy" related projects that might have military spin-offs. Which might include high energy-density storage that can be charged as rapidly as a fuel tank can be.
Anders December 3, 2020 4:50 PM
This IS important.
Anders December 3, 2020 5:26 PM
SpaceLifeForm December 3, 2020 10:48 PM
@ Clive
Some Arecibo inside pics from 12 years ago.
Clive Robinson December 4, 2020 4:50 AM
@ Bruce, ALL,
AKM semiconductor factory fire
There have been a number of semiconductor manufacturing facility fires this year that have caused "supply chain" issues.
Perhaps the worst is the 82hour fire in Japan’s AKM factory,
The products they make are used in a lot of things: not just professional sound equipment, but Software Defined Radio (SDR) systems, specialised temperature-control devices for TCXO timekeeping, and GPS-disciplined oscillators needed for all types of mobile phone networks as well as many data networks, and the broadcast industry is reliant in many areas on these chips.
Other chips end up in smart device peripherals and much much more, so it’s not just professional but consumer grade equipment as well.
For many of the devices there is no “second source” and the factory being in operation again in six months is probably optimistic.
Speculators have already purchased distribution-end stock from the likes of Digi-Key, Mouser, Farnell etc, and some of these speculators are asking up to 25 times the price they purchased at.
It means that products will have to be redesigned for other parts which is expensive and not a particularly fast option. Thus the speculators have specialist manufacturers over a barrel.
This is going to be an object lesson in supply chain issues and well worth keeping an eye on especially as it impacts on the ICTsec sector at quite a low infrastructure level where people normally do not think to look.
Clive Robinson December 4, 2020 5:05 AM
@ Winter,
I guess MAGA has never looked so honest[1]. Mind you speaking of Japan makes me think, work an N in there somehow and then it would mean “Whimsical Pictures”
[1] What has made me smile about it though is the “archaic” definition,
But the page does have some “not suitable for work” “go to the naughty step” words.
Winter December 4, 2020 5:25 AM
Ah, to be at work again. Going down memory lane again.
Clive Robinson December 4, 2020 2:30 PM
@ Cassandra,
I know you are pressed for time currently, but there is a book up on arXiv that might be of interest to read / dip into when you are less pressed.
It goes through entropy, information theory and a few other bits of mathematics to chase diversity in biological systems.
But whilst the biology is really not mentioned, except as an example of what to use the maths on, it requires, according to the author, only an undergraduate understanding,
“Entropy and Diversity”
I've only skimmed through the ToC and bits of a couple of chapters, but it looks interesting.
Cassandra December 4, 2020 2:56 PM
One of the many capabilities lost with Arecibo is interplanetary radar. The Chinese FAST telescope doesn’t do radar currently, and is unlikely to do so at a level to match the late Arecibo capability in the future as the FAST feed cabin suspended over the reflector dish is a much more lightweight structure.
Arecibo produced radar images of Venus (first in the 1970s), and various ‘fly-by’ asteroids, as well as Mercury, Titan and asteroids in the asteroid ‘belt’. At present, there is no facility that can duplicate that functionality. The next best facility for radar astronomy is Goldstone (hxxps://
There is a good run-down of FAST’s capabilities compared to Arecibo in the Wikipedia FAST article:
Clive Robinson December 4, 2020 3:03 PM
@ MarkH,
Now the thread is quietening down,
Yes there are other things you can measure on a signal,
1, Amplitude
2, Frequency
3, Phase
Both absolute and relative; but the output of a GM tube or other particle detector, being intermittent or approximately random spikes/clicks, does not really make those meaningful to measure.
Thus the designer has several trade-offs to make in their search for interference-free entropy. If you think you know a better way then "sing out" and everyone can have a look.
Moving on,
As far as I’m aware we actually do not currently know if either of those is true as nobody has come up with a definitive physical test to prove them.
Whilst I strongly suspect that you cannot prove/disprove it, due to the problems of "proving a negative", it is nevertheless true that as science advances new things are found. But as general guidance, we've usually found in the past that such things are "work" and "require energy", and thus can be "pushed over the entropy hump" by some triggering effect.
Thus I suspect the search for both a "tipping point" and a "trigger" will go on. Remember, just a century ago "Quantum" effects were considered "crackpot", but as mathematical models they work quite well. But essentially that's what Quantum Mechanics is, the application of probabilistic models, hence the "shut up and calculate" viewpoint.
But a problem for you to solve: if each decay is "independent" as you say, how come the average decay very closely follows a (1/e)^n curve?
There is a way to explain it simplistically but can you work it out for yourself, or find it online?
Just to give you a hint, I would not be asking if I did not think you already know, albeit just intuitively, in other areas.
Cassandra December 4, 2020 3:28 PM
@Clive Robinson
Thank you for the book recommendation. I shall have to add it to my ever-increasing to-be-read book list: a quick skim over the ToC and a dip in a few sections were interesting.
xcv December 4, 2020 5:07 PM
@Clive Robinson, Cassandra
This is how a common circuit breaker with respect to an electric motor allows an overcurrent of 15–20× the running current to start the motor without tripping the breaker.
The breaker or even a fuse can thus be shown to work without tripping it.
The quantum states where the breaker has tripped are suppressed by the “flyback voltage” that appears across the coils of the motor.
The breaker only collapses into a tripped state to stop the current when the "probability amplitude" of the tripped state of the breaker is sufficient to overcome the electrostatic force holding the electrical contacts together.
MarkH December 4, 2020 5:47 PM
For now, I’m responding to the second part of your recent comment.
Humbly, the word “explain” has different meanings, and as a parent you know that the iteration of the question “why?” can proceed very far indeed.
In the quantum model of most kinds of decay, each unstable object has a probability of decay (per unit time) which is time-invariant.
The time-exponential decrease in a population of identical unstable isotopes is consistent with time-invariant decay probability. In essence, one implies the other.
Note that this is fully consistent with complete independence of decay events. When reaching its moment of death, a nucleus doesn’t need to “know” how many of its neighbors have yet to decay, or have already gone … nor does an Am 241 nucleus “know” whether it’s part of a purified metal densely packed with such nuclei, or whether its nearest Am 241 neighbor is thousands of meters away [1].
This might not be an explanation in the sense of your question, but observed decay behavior is consistent with decay as a perfectly random event having time-invariant probability.
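As a numerical illustration of that last point (my own sketch, with arbitrary numbers, not something MarkH supplied): give every "nucleus" the same small, time-invariant decay probability per step, let each one decay independently, and the surviving population falls away along the exponential curve Clive asked about, with no trigger and no coordination.

```python
# Sketch: independent decays with a constant per-step probability produce an
# exponentially decreasing population.  N0 and P_STEP are arbitrary.
import random

N0 = 50_000       # initial number of simulated "nuclei"
P_STEP = 0.01     # assumed decay probability per nucleus per time step

remaining = N0
for step in range(1, 301):
    # each surviving nucleus decays independently this step
    decays = sum(1 for _ in range(remaining) if random.random() < P_STEP)
    remaining -= decays
    if step % 100 == 0:
        expected = N0 * (1.0 - P_STEP) ** step   # ~ N0 * exp(-P_STEP * step)
        print(step, remaining, round(expected))
```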
I have a feeling that you had some other idea in mind with the question you posed about independence, but I haven’t grasped what that idea is.
I should say as a disclaimer, that I only know QM at the “comic book” level. I don’t pretend to understand all that spooky stuff … but I have a little notion of what some of the laws are, and of the experimental evidence which first led to their formulation.
Probably most of us know that our common-sense reasoning about the world of sensible objects is often strongly at odds with quantum descriptions of reality.
Consider the case of an incorrectly set mouse-trap, or a cocked firearm with a “hair trigger”. The spring energy sometimes releases without any apparent intervention (i.e., spontaneously).
These spring mechanisms might be triggered by a small mechanical vibration from its environment (a “predictable” trigger, because vibration can be measured), or perhaps even by thermal motion of their constituent molecules (an “unpredictable” trigger, because it’s not practical to observe those molecular motions … although in principle, it could be modeled as a deterministic process).
If it’s vibration, then we can say that the release was triggered by some discrete pulse of energy from outside the mechanism.
If the release is purely thermal, then we can say that although its timing is (for practical purposes) unpredictable, the ambient temperature affected the probability per unit time of spontaneous release.
The alpha decay case is even spookier than that: the vibration of molecules in metal parts requires energy from somewhere; without this, their thermal energy would gradually dissipate via radiation.
But nuclei can persist (science tells us) for billions of years, with extremely rapid incessant motion of their constituent particles, with no external energy source.
Nuclear vibration is lossless. Energy ceaselessly shifts about, but is not dissipated.
Finding an external trigger for what we understand as spontaneous alpha decay would — if I understand correctly — require a radical re-write of the “quantum book.” I wasn’t joking, about such a discovery likely being Nobel-worthy.
In some cases, isotopic half-life can be environmentally modified: the probability density can be adjusted. But that’s not the same as a triggering event.
Hypotheses that something is actually triggering nuclear decay can be tested by experiment, depending on the hypothesized trigger.
Quantum theory doesn’t require an external trigger for Am 241 fission; experiment has not revealed one.
[1] As the Steely Dan song goes, “Up on the hill, people never stare. They just don’t care.” Unstable nuclei don’t care what other nuclei are doing, when spontaneously decaying.
xcv December 4, 2020 7:41 PM
RE: “Brownian motion”
The displacement of a particle undergoing Brownian motion grows in proportion to the square root of time.
𝔼(Δx) / Δt = the drift velocity of the Brownian motion.
m Var(Δx) / (2Δt) = the average kinetic energy of the particles in Brownian motion.
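A quick numerical check of the square-root-of-time scaling, using a drifting discrete random walk as a stand-in for Brownian motion; the drift, step distribution and sample counts are arbitrary assumptions of mine, not part of the comment above.

```python
# Sketch: the mean displacement of a drifting walk grows like drift * t,
# while its RMS spread grows like sqrt(t).  Parameters are illustrative.
import math
import random

DRIFT = 0.05        # assumed drift per unit time step
N_WALKS = 2000

for t in (100, 400, 1600):
    finals = []
    for _ in range(N_WALKS):
        x = 0.0
        for _ in range(t):
            x += DRIFT + random.gauss(0.0, 1.0)
        finals.append(x)
    mean = sum(finals) / N_WALKS
    spread = math.sqrt(sum((x - mean) ** 2 for x in finals) / N_WALKS)
    # mean ~ DRIFT * t; spread ~ sqrt(t): quadrupling t doubles the spread
    print(t, round(mean, 1), round(spread, 1), round(math.sqrt(t), 1))
```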
SpaceLifeForm December 5, 2020 1:16 AM
@ MarkH
Here’s some thoughts to ponder:
Can you define ‘Time’ without requiring any ‘measurement’?
Can you define ‘Distance’ without requiring any ‘measurement’?
Can you define either without referencing the other?
Would your answers or questions provide a clue as to what may be happening in the cosmos with regard to ‘Spooky Action at a Distance’?
MarkH December 5, 2020 5:20 AM
I’m looking at the physics as a technologist: “how do I apply rules and patterns from the scientific consensus to a practical application?”
The questions you posed are (a) way over my head, and (b) seem to be more about philosophical foundations.
That being said, my answers are:
Not off the top o’ me old gray head.
I haven’t the foggiest, but I also have no reason to believe that entangled particles at any appreciable distance from each other play any role in spontaneous nuclear decay.
I regret that I’m not scintillating this morning (pun intended).
Clive Robinson December 5, 2020 5:24 AM
@ MarkH, SpaceLifeForm,
As I've mentioned, the decay closely follows a (1/e)^n curve. Which, as I've also noted, a number of people have been trying to get into politicians' heads with regard to growth in a pandemic, which is a percentage change per unit time and has an inverse half-life growth –time to double– and a half-life decay that gives us the R value.
The original assumption about isotope decay was that, as atoms are very widely spaced apart, small particles would mainly pass through a sample. And, as had been observed with some high-energy particles, some small percentage would collide with the atoms and be deflected, or, as observed, bounce back towards the source.
The argument effectively went that if a sample of isotope atoms was in a stream of such particles that were time-invariant in intensity, their probability of being hit by the particles was small but likewise invariant with time.
But importantly, that time-invariant behaviour of atoms and stream of particles has a time-variant result, as an isotope atom takes itself out of the game by decaying. That is, in each time period you would trigger a small percentage of the atoms to decay, leaving fewer atoms for the next time period, which, as the probability had not changed, would result in the same percentage –not quantity– change in each successive time period. Which gives you an exponential decay curve.
So two time-invariant effects combine to give a time-variant result, with the assistance of a little uniform physical randomness in just one of them.
The result of such thinking is very appealing, and still is to some; hence the reason some people still look for an external trigger. They reason that the fact none has been found so far does not mean it does not exist, so it does not deter them from looking, on the old "you cannot prove a negative" argument.
Oh, one fun implication if it did exist: that stream of particles would need not just a constant rate, but also a small random spatial distribution, with the randomness being uniform or ideal, that is of uniform density by time/frequency, which is what many call "White Noise" and is found in most other noise sources, which Quantum Mechanics requires, and which both it and Classical physics give us[1]…
So yes, some people will search the physical universe for a classical physics solution that they believe must be true because so much points to it, whilst most others just sit there and calculate with their Quantum physics mathematical models irrespective of personal belief.
And there is a funny side to this: Quantum Mechanics effectively gave the universe “free will”, but… at the cost of strict determinism in the mathematical models, which require randomness to make it all work.
The down side of this is that the “preordained” argument that allowed man, and therefore a god, to be master of all they surveyed has been replaced, not with a god that plays dice, but just with the dice of randomness themselves… The implication of this is of course that evolution is true, thus randomness might be the constant from the impossibly small to the impossibly large.
I suspect Alan Turing had come to this sort of conclusion from various of the things he said –including the nature of spots and stripes on creatures– and did. Not least of which was to argue that all computers need a true physical random number generator.
[1] Oh, another fun thought: you are, I assume, aware of what the Root Mean Square (RMS) is and what it effectively does? Have you thought about the result of successive applications of it to any signal, including a white noise “random” signal? If you take the RMS of a sine wave, not only do you halve its amplitude, importantly you double the frequency of the sine wave; the excess energy becomes an infinite series of harmonics of reducing amplitude. Thus you end up with an infinite frequency series of small amplitude that successively approaches a “white noise distribution”. As all waveforms can be shown to be made of sine waves, the same thing happens to white noise, which is what we assume randomness gives us, but as the frequencies both constructively and destructively combine, the distribution becomes uniform. So from order we get the chaos of randomness whether we want it or not.
FA December 5, 2020 7:40 AM
The RMS value of a waveform is the square root of the mean of the square of the waveform. It’s just a number, not a new waveform.
So I’ll assume you refer to just the square or higher powers of a waveform.
sin(x)^k for increasing integer k converges to a series of narrow impulses of unit amplitude and spaced pi apart. If k is even they are all positive, if k is odd positive and negative pulses alternate.
That waveform will have a discrete (line) spectrum. This is definitely NOT white noise, not even in the limit as k->inf.
Now consider a white noise waveform having some continuous amplitude distribution.
Every point will have some probability of exceeding a given threshold T. For any T, those points that do exceed T define a sequence of events that will have a Poisson distribution, and the time between them will have an exponential distribution.
Taking the square or any higher power doesn’t change anything. The probability of p^k > T is the same as the probability that |p| > T^(1/k) (assuming even k).
So what are you trying to show here ?
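For what it’s worth, both of the last two points above are easy to check numerically; the Gaussian white noise and the threshold here are arbitrary illustrative choices:

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.standard_normal(1_000_000)      # white Gaussian noise samples
    T = 2.5                                  # threshold

    hits = np.flatnonzero(np.abs(x) > T)     # rare threshold crossings
    gaps = np.diff(hits)
    print(gaps.mean(), gaps.std())           # mean ~ std: the waiting times are close to exponential

    # an even power picks out exactly the same samples once the threshold is raised accordingly
    hits_sq = np.flatnonzero(x**2 > T**2)
    print(np.array_equal(hits, hits_sq))     # True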
Clive Robinson December 5, 2020 8:16 AM
@ FA,
Have a look at the output of any nonlinear square law circuit such as a bridge rectifier. Nearly everything in life follows a power law one way or another. Engineers spend much of their lives trying to work in the part of the curve that’s near linear.
Now take the second RMS of that, including the issues of “sampling” that create frequency fold-over. Then keep going.
The world is part of a universe that is not only quantized but most definitely not linear.
SpaceLifeForm December 5, 2020 2:40 PM
@ MarkH, Clive
It really is philosophical.
Can you trust what you think is random is really random?
Can you trust that your coin flip is really random?
I submit, that you cannot.
And, if you really want to think deeply, ask yourself:
What really is ‘Mass’ and ‘Gravity’?
Why does it appear upon ‘Observation’ and ‘Measurement’ that they are related?
Are you sure they are related?
SpaceLifeForm December 5, 2020 4:47 PM
@ Clive, MarkH, Anders, Winter, Lurker, ALL
Catch and Release is in play in this space-time continuum. |
3f7c18da9bae008e | Interference of a thermal Tonks gas on a ring
Kunal K. Das, M. D. Girardeau, and E. M. Wright, Optical Sciences Center and Department of Physics, University of Arizona, Tucson, AZ 85721
November 17, 2020
A nonzero temperature generalization of the Fermi-Bose mapping theorem is used to study the exact quantum statistical dynamics of a one-dimensional gas of impenetrable bosons on a ring. We investigate the interference produced when an initially trapped gas localized on one side of the ring is released, split via an optical-dipole grating, and recombined on the other side of the ring. Nonzero temperature is shown not to be a limitation to obtaining high visibility fringes.
A fundamental assumption at the heart of current proposals to realize integrated atom sensors is that the guided atom wavepackets will display interference phenomena when they are split, propagated, and recombined. On the other hand it is well known that the coherence properties of atomic gases are affected by dimensionality, there being no true off-diagonal-long-range order in less than three dimensions, and this raises the issue of interference in restricted geometries. Motivated by recent theoretical arguments Moore and Meystre (2001); Ketterle and Inouye (2001) demonstrating that several stimulated processes for matter waves such as four-wave mixing, superradiance, and matter-wave amplification can be achieved in degenerate fermion gases as well as in Bose-condensed gases, we have recently shown Das et al. (2002); Girardeau et al. (2002) that a one-dimensional gas of hard-core bosons, or Tonks gas, at zero temperature can exhibit high visibility interference fringes. These results suggest that the same mechanisms might be capable of overcoming the weakening of interference due to thermal excitation. The quantum Tonks gas is realized Olshanii (1998); Petrov et al. (2000) in a regime essentially opposite from that required for BEC, namely, the regime of low temperatures and densities and large positive scattering lengths where the transverse mode becomes frozen and the many-body Schrödinger dynamics becomes exactly soluble via a Fermi-Bose mapping theorem Girardeau (1960, 1965); Rojo et al. (1999); Girardeau and Wright (2000a, b); Girardeau et al. (2001). We shall extend that theorem so as to obtain exact results for a model atom interferometer using a thermal Tonks gas. In particular, we apply our results to the interference produced when an initially trapped gas localized on one side of a ring is released, split via an optical-dipole grating, and recombined on the other side of the ring. Such a study is currently of relevance due to experimental efforts Hinds et al. (1998); Schmiedmayer (1998); Key et al. (2000); Müller et al. (1999); Thywissen et al. (1999); Dekker et al. (2000); Hänsel et al. (2001); Ott et al. (2001) to fabricate atomic waveguides for matter wave interferometers.
Model: The model consists of a 1D gas of hard core bosonic atoms on a ring. This situation can be realized physically using a toroidal trap of high aspect ratio L/ℓ⊥, where L is the torus circumference and ℓ⊥ = (ħ/mω⊥)^(1/2) the transverse oscillator length, with ω⊥ the frequency of transverse oscillations, assumed to be harmonic. The longitudinal (circumferential) motion can be described by a 1D coordinate x along the ring with periodic boundary conditions. The pure-state quantum dynamics of the system is described by the time-dependent many-body Schrödinger equation (TDMBSE) with Hamiltonian
H = −(ħ²/2m) Σ_{i=1}^{N} ∂²/∂x_i² + V(x_1, …, x_N; t).   (1)
Here x_i is the 1D position of the i-th particle, and ψ(x_1, …, x_N; t) is the N-particle wave function with periodic boundary conditions,
ψ(x_1, …, x_i + L, …, x_N; t) = ψ(x_1, …, x_i, …, x_N; t),
which is also symmetric under exchange of any two particle coordinates, as is the many-body potential V. We consider the case of impenetrable two-particle interactions, the so-called Tonks-gas regime Olshanii (1998); Petrov et al. (2000), and this is conveniently treated as a constraint on allowed wave functions:
ψ(x_1, …, x_N; t) = 0 whenever x_j = x_k for j ≠ k,   (3)
rather than as an infinite contribution to V, which then consists of all other (finite) interactions and external potentials.
We assume preparation of the system such that its initial state at time t = 0 is expressed by a statistical density operator ρ(0) = Σ_α w_α |ψ_α(0)⟩⟨ψ_α(0)|, where the ψ_α are a complete set of orthonormal many-boson states with label α that stands for a set of quantum numbers, and the w_α are nonnegative statistical weights summing to unity. Then the statistical average of any observable Ô at any later time t is
⟨Ô⟩_t = Σ_α w_α ⟨ψ_α(t)| Ô |ψ_α(t)⟩,   (4)
where the states evolve according to the TDMBSE with Hamiltonian (1).
Statistical Fermi-Bose mapping theorem: The Fermi-Bose mapping theorem for pure quantum states Girardeau (1960, 1965); Rojo et al. (1999); Girardeau and Wright (2000a, b); Girardeau et al. (2001) is a mapping from the Hilbert space of one-dimensional many-fermion states to that of one-dimensional many-boson states, holding if the interparticle interaction contains a hard core and the Hamiltonian is of the form (1). In Schrödinger representation the mapping operator on N-particle states consists of multiplication by the “unit antisymmetric function”
A(x_1, …, x_N) = ∏_{1 ≤ j < k ≤ N} sgn(x_k − x_j),
where sgn(x) is the algebraic sign of x, i.e., it is +1 (−1) if x > 0 (x < 0). Since A is constant except at nodes of the wavefunctions, the mapping converts antisymmetric fermionic solutions of the TDMBSE into symmetric bosonic solutions satisfying the same TDMBSE with the same Hamiltonian (1), and satisfying the same boundary conditions and constraint (3). In the Olshanii limit Olshanii (1998) (low density, tightly confining atom waveguide, large scattering length) the dynamics reduces to that of the impenetrable point Bose gas, and the constraint (3) is then satisfied trivially (the mapped wave function vanishing when any x_j = x_k) due to antisymmetry. If the potential in Eq. (1) is a sum of one-body external potentials, V = Σ_i V(x_i, t), the solutions of the fermion TDMBSE can be written as Slater determinants Girardeau (1960, 1965); Rojo et al. (1999); Girardeau and Wright (2000a, b); Girardeau et al. (2001)
ψ_F(x_1, …, x_N; t) = (N!)^(−1/2) det_{i,j=1…N} φ_i(x_j, t),
where the φ_i(x, t) are orthonormal solutions of the single-particle time-dependent Schrödinger equation (TDSE) for the potential V(x, t).
The generalization to quantum statistical evolution (4) of many-boson systems is now straightforward:
where is the inverse mapping from the Hilbert space of bosonic states to that of fermionic states. This is most useful if the observable commutes with the mapping operator in which case the identity implies that , reducing the statistical evolution of the in the impenetrable point Bose gas to that of the same observable in an ensemble, with the same statistical weights, of ideal Fermi gas states related to the corresponding impenetrable point Bose gas states by the pure-state mapping theorem Girardeau (1960, 1965); Rojo et al. (1999); Girardeau and Wright (2000a, b); Girardeau et al. (2001).
Single particle density: Interference fringes arising from spatial density modulation are due to first-order coherence manifested in the single-particle density ; the corresponding operator commutes with since it depends only on particle positions. It follows that the density profiles for the Fermi and Bose problems are identical, as can also be seen directly from the fact that and hence .
The pure state mapping theorem ensures that corresponding Bose and Fermi states have the same particle number and energy levels , so that both systems have the same chemical potential , and therefore corresponding states labeled by will have the same grand canonical statistical weights Yang and Yang (1969). Thus we will describe the gas of impenetrable bosons by a grand canonical ensemble of ideal Fermi gas states , assuming that the mean number of atoms is sufficiently large to justify a grand canonical picture. At time we consider a basis of energy eigenstates in the external potential . The thermal average of the single particle density of the system is
where the denominator is the grand partition function and is the spatial density of a -particle energy eigenstate of a free Fermi gas , its energy being the sum of the energies of the single particle states . Some manipulation of the sums leads to the familiar form
ρ(x, t) = Σ_j f_j |φ_j(x, t)|²,   (9)
where the f_j = 1/[e^{(ε_j − μ)/k_B T} + 1] are Fermi-Dirac occupation numbers. The chemical potential μ is determined by Σ_j f_j = N. The density at a later time t is obtained from Eq. (9) after propagating each orbital φ_j separately according to the TDSE for the potential V(x, t).
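The occupation numbers and the chemical potential can be obtained numerically along these lines; the sketch below uses harmonic-trap levels and illustrative values of N and T, not the parameters of the paper:

    import numpy as np
    from scipy.optimize import brentq

    N, n_levels, kT = 100, 2000, 20.0              # atom number, basis size, temperature (units of hbar*omega)
    eps = np.arange(n_levels) + 0.5                # harmonic-trap single-particle energies

    def excess(mu):
        return np.sum(1.0 / (np.exp((eps - mu) / kT) + 1.0)) - N

    mu = brentq(excess, -50 * kT, eps[-1])         # chemical potential fixed by sum_j f_j = N
    f = 1.0 / (np.exp((eps - mu) / kT) + 1.0)      # Fermi-Dirac occupation numbers
    print(mu, f.sum())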
Initial condition: We assume that the ring has been loaded (say, by optical tweezers Gustavson et al. (2002)) with atoms at temperature , and for they are confined to a narrow segment of the ring by a trapping potential assumed harmonic, with natural frequency , over the spatial extent of the initial trapped gas. The grand canonical ensemble in Eqs. (8) and (9) describing the state of the atoms at is therefore characterized by the single particle energies . The basic configuration is shown schematically in Fig. 1(a).
In order to discuss the time-evolution of the system, we designate the normal modes of a class of 1D harmonic oscillators by where are their mean positions and the parameter relates their widths to the width of the initial trap . Thus the modes of the initial trap potential are . We choose the circumferential coordinates to have the trap center at . On unwrapping the ring about in Fig. 1(b), the initial Hermite-Gaussian orbitals are split into two parts at the ends of the fundamental periodicity cell , and can be written as
Figure 1: (a) Initial density of hard core bosons trapped on a ring of circumference . (b) By unfolding the ring we describe the system using a 1D coordinate . The coincident point is chosen at the center of the initial trapped gas.
The asymptotic dependence of the harmonic oscillator modes suggests the definition of a critical wavevector for a thermal distribution, , which would give a measure of the characteristic rate of expansion of atoms released from the initial longitudinal trap. We take it to correspond to , so that for temperatures large compared to the Fermi temperature () the critical wavevector coincides with the thermal wavevector , and for we get the wavevector corresponding to the highest occupied mode . If the initially trapped gas is allowed to expand freely along the ring, the critical wave-vector gives a measure of the time to wrap around the ring .
Some necessary conditions need to be satisfied in order that the initial atom cloud be accurately represented by a Tonks gas Olshanii (1998); Petrov et al. (2000). First, the longitudinal energy must be small compared with the transverse excitation energy: at zero temperature this requires and at finite temperatures . Second, for the initial gas to be accurately described as an impenetrable gas of bosons we require , i.e. Olshanii (1998).
Optical-dipole grating: In order to produce interference from the initial trapped gas we turn off the harmonic trap at and apply a temporally short but intense spatially periodic potential of wavevector . This spatially periodic grating may be produced over the spatial extent of the trapped gas, for example, using intersecting and off-resonant pulsed laser beams to produce an optical-dipole grating whose wavevector may be tuned by varying the intersection angle. The applied periodic potential then produces counter-propagating scattered atomic waves, or daughter waves, from the initial gas, or mother, with momenta and these recombine on the opposite side of the ring at a time . For times the mother packet has not yet encircled the ring and the gas expands freely as if on an infinite line.
Following Rojo et. al. Rojo et al. (1999) we use a delta-function approximation for the short pulse excitation of the periodic grating at , and implement the optical-dipole grating using a phase-imprinting scheme according to which each orbital just after the pulse at is changed to
where is the grating amplitude. The second line assumes in which case the optical-dipole grating predominantly produces two scattered copies of the initial mode travelling to the left and right with wavevectors in addition to the initial parent mode .
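A rough numerical illustration of this splitting, assuming a cosine phase imprint of the form exp(iη cos(k_g x)) applied to a single Gaussian orbital; the exact expression and amplitude used in the paper are not reproduced here, and all numbers below are illustrative:

    import numpy as np

    L, n = 200.0, 4096
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    sigma, k_g, eta = 5.0, 2.0, 0.4                 # packet width, grating wavevector, pulse area

    psi = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * eta * np.cos(k_g * x))   # delta-pulse phase imprint

    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    spec = np.abs(np.fft.fft(psi))**2
    for k0 in (0.0, k_g, -k_g):                     # parent at k = 0, daughters at +/- k_g
        print(k0, spec[np.argmin(np.abs(k - k0))])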
Figure 2: Scaled density profiles for =100 atoms, , grating wavevector shown at the time of recombination for temperatures (a) nK, (b) nK and (c) nK. The insets show the details of the fringes around the region of detection .
Interference on a ring: We wish to examine the presence or absence of interference at the opposite side of the ring at time and how this depends on the temperature and the grating vector . After the grating is applied the trapping potential is turned off, , so the subsequent quantum dynamics of the system is traced by freely propagating each orbital . Using the time evolved orbitals in Eq. (9) we obtain the atom density at time as a sum of three terms , with
The first term is due to the expanding mother packet overlapping with itself, is due to the overlap of the daughters, and arises from the overlap of the mother and the daughter packets. In general, for , which is desirable so that the mother packet does not wrap at the same time as the daughters recombine, is small compared to the other terms: This term is present in our simulation below but we have refrained from reproducing the expression for it above.
Figure 3: Estimate of the visibility in the region of detection, as a function of temperature for three values of the optical-dipole grating wavevector (a) , (b) and (c) .
As a concrete example we consider sodium atoms on a ring of circumference released from an initial harmonic trap of frequency Hz. This corresponds to an oscillator length of m and Fermi temperature nK. In general, we find that for temperatures and applied wavevectors such that minimal interference fringes result when the daughter packets recombine at time . Recalling that the wrap time for the mother packet is given by , then for the mother packet wraps around significantly by the time the daughters recombine, so provides a pedestal for interference fringes due to the overlap of the daughters , leading to reduced fringe visibility Das et al. (2002). In contrast for pronounced interference fringes can appear when the daughter packets are recombined. This is illustrated in Fig. 2 where we fix the wavevector of the optical grating and vary the temperature or alternatively . Unit visibility fringes are seen in Fig. 2(a) where nK, whereas reduced visibility fringes appear in Figs. 2(b,c) where nK.
By noting the maximum and the minimum of the fringe pattern around the detection region, we obtain a measure of the fringe visibility as the contrast between them.
This fringe visibility is plotted in Fig (3) as a function of temperature for three values of the grating vector , with the clear and physically obvious trend that higher temperatures require a higher grating wavevector to obtain high visibility interference fringes. Physically there are of course limitations: For sodium atoms, the condition for a Tonks-gas is nK, for a transverse trapping frequency Hz, and current experimental limits are about Hz Greiner et al. (2001). For the sodium yellow lines m, so for our parameters , in keeping with the parameters used above.
In conclusion, we have shown that even at nonzero temperature a strongly interacting 1D gas of impenetrable bosons can show high visibility interference fringes on a ring. This remarkable result is of importance for current schemes to realize integrated atom interferometers in that it shows that neither many-body interactions nor nonzero temperature are fundamental limitations, in 1D at least. Our results followed from a generalization of the zero temperature Fermi-Bose mapping which maps the strongly interacting 1D boson problem to a 1D gas of free fermions, and it is an open problem whether our conclusions can be extended to 2D and 3D interferometers.
This work was supported by Office of Naval Research Contract No. N00014-99-1-0806 and by the U.S. Army Research Office. |
eaee88fae23be48d | Physics Std. Model “hits the wall” at LHC
If you are uninterested in quantum physics, bail out now.
This is an interesting video about the current state of the art of the Physics Standard Model. It seems the Large Hadron Collider has been trying for 2 years to stir the pot, but nothing has happened after the Higgs boson.
Here’s the video:
So what’s next? Well, I’d be inclined toward something not involving the Standard Model, but only time will tell.
FWIW: I got to this video as I was watching Feynman videos about Quantum Mechanics. He didn’t have anything I’d not heard / learned before, but the way he presented it was very entertaining.
Yeah… what kind of crazy person watches Feynman on QM “for fun” 8-)
By the standards of today, his lecture is pretty simple. Basic intro to the 2 slit experiment problem. But it does cause you to think, just a little bit, about the difference between what we sense and what is real…
51 Responses to Physics Std. Model “hits the wall” at LHC
1. Larry Ledwick says:
Yes Feynman has a gift for lecture delivery of physics. I wish my instructors in college had been half as good as he was at explaining fundamental concepts of physics and engineering.
2. Sandy MCCLINTOCK says:
Thanks for the links :)
My best Physics lecturer at TCD was Ernest Walton who was amazing. (John Cockroft (1897-1967) and Ernest Walton (1903-1995) are credited with ‘splitting the atom’ for which they received a Nobel Prize)
3. jim2 says:
I’m about half-way through the first video. I didn’t know a field was associated with each fundamental particle and force. I wonder if there isn’t some way other than a field to describe what goes on with particles and forces.
At any rate, we poor bumpkin chemists have known about and understood Larmor precession for some time now. To a chemist it is the basis of NMR spectroscopy, to a doctor the basis of MRI, and to our friend the poor physicist the only fundamental process he can model with any skill whatsoever. :)
4. E.M.Smith says:
Isn’t QM fun? 8-} /sarc;
5. p.g.sharrow says:
yup! pretty much sums it up. K.I.S.S. ! 8-) …pg
6. Simon Derricutt says:
7. p.g.sharrow says:
8. p.g.sharrow says:
9. Pingback: Physics discussion on Aether Propulsion | pgtruspace's blog
10. E.M.Smith says:
I wonder… Is it really “when we look at it”, or is it “when a photon interacts with it” ?
The usual presentation is couched as though it is the act of an intelligence observing that causes the result; yet we know that the detectors have no intelligence and are just counting arrival of electrons (or photons or whatever). But, those detectors act differently when the photon has hit the electron… So isn’t it the interaction of photon and electron-wave-function that causes the “particle” aspect to become dominant?
When you have a wave in a pond and toss another rock in near it, you get an interference pattern with peaks. Could it not just be that “particles” are the peaks that show up when two wave fields interact?
I’m not seeing “observation” as a necessary part at all, only wave function collisions…
11. p.g.sharrow says:
@ All; I grabbed part of the above discussion and posted it along with Roger Shawyer’s EmDrive thruster first unveiled back around 2003, and tested by NASA in 2013. It really works! sort of?
@EMSmith; all test devices/ sensors require a unit of energy to be detected. To move an Electron of energy so we know an event happened. Even the “tracks” in a cloud chamber “see” the passage of an organized field ergo a “particle” through the chamber. Our particles may well be an artifact of our detectors….pg
12. jim2 says:
The Copenhagen interpretation of quantum mechanics was formulated when physicists were struggling to relate the quantum world to the classical one. I don’t believe they took literally the idea that a human observer had to “observe” in order for the wave function to collapse. Otherwise, how would the Universe have gotten along without us? They were trying to figure out how a physicist related to a quantum experiment – what exactly was measurable by a classical being?
There is a helpful book that is more recent than Copenhagen, and it offers a good bit more insight.
I thought it was a good book and intend to read it again.
13. jim2 says:
I hope I wasn’t led to this from the CIO blog as I would be greatly embarrassed, but …
Feynman never gave the calculations up. He knew that the physically correct result isn’t infinite. So if the infinity appears at some point of the calculation employing the formally correct path integrals, what is wrong is the way we calculate these expressions, not the theory! In particular, the intermediate “infinity” is just a sloppy excuse not to care about the detailed form of the infinity. Some terms or factors within the “infinite number” still matter – you simply can’t forget about them. Forgetting about some numbers that clearly do reflect the dependence on the question or initial or final conditions means killing the calculation. A set theorist may be happy with a final answer “aleph zero” to a complex question but a physicist must never do it. For a physicist, it’s clearly the finite parts that must matter – and the infinite parts or factors that are spurious and may be eliminated and/or ignored. The infinity is “numerically greater” than a finite number (also in the sense of ordinals and cardinals) but it is much less important in natural sciences because the infinity, like “E”, doesn’t carry any detailed verifiable information about the physical phenomena!
14. E.M.Smith says:
There is a branch of math, called “non standard mathematics” IIRC, where infinity is valid in calculations. So let INF be the infinity symbol:
(2 x INF) / (3 x INF) = 2/3
(2 x INF^2) / INF = 2 x INF
Essentially you treat it as a constant and proceed in the usual way. I wonder how much the standard physics calculations would have their results change if, instead of “renormalizing” to remove the infinities, they were simply calculated as above?…
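For what it’s worth, that style of bookkeeping can be made rigorous by treating each “INF” as the limit of a finite quantity; a small sympy sketch of the examples above:

    import sympy as sp

    x = sp.symbols('x', positive=True)

    print(sp.limit((2 * x) / (3 * x), x, sp.oo))        # 2/3
    print(sp.limit((2 * x**2) / x, x, sp.oo))           # oo, i.e. it still diverges
    print(sp.limit((2 * x**2) / x - 2 * x, x, sp.oo))   # 0: the divergent part really does behave like "2 x INF"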
15. jim2 says:
Re: infinity. There are a bunch of different kinds of infinity, the aforementioned Aleph Zero being the simplest.
16. E.M.Smith says:
Yeah, I vaguely remember L’Hospital’s Rule from Calculus classes… (Golly, about 45 years ago now…) It’s a useful work-around.
The Aleph-zero just looks like they are naming the different infinities, like INF vs INF^2 etc. etc.
17. p.g.sharrow says:
Just a bunch of techno-babble to dazzle with BS.
GOD is not a Mathematician. God is an Engineer that works in Applied Science.
Every thing in GOD’s Universe is inside of spheres. Spheres next to spheres. Spheres inside of spheres, spheres connected to spheres. Nothing is exact; it just works because it HAS to. Plenty of fudge to make up for a bit of misalignment. Energy fields moving in 3 dimensions do not need perfect fit or alignment to function…pg
18. Simon Derricutt says:
When I brought up the problem of “what happened before there were people to measure” with my tutor around 45 years ago, the answer was that all the wavefunctions going back to the Big Bang suddenly collapsed and thus had a single result rather than an indeterminate result. Unsatisfactory, I know, but logically it would work. Bohm’s version of quantum mechanics doesn’t have that indeterminacy, however, and things happen whether they are observed or not, so is philosophically somewhat nicer.
If a man does something wrong in the woods, and there’s no woman there to see it, is he still wrong?
On the infinities and removing them when they can’t be calculated, re-factoring the equation (or using derivatives) so it doesn’t produce infinities in the first place is a good idea and gets around some problems. The infinities we’re talking about can’t however be removed. A particle is considered as a geometrical point, so gravitational attraction can reach infinity and so can the electrostatic forces. Zero-point energy has the same problem, in that how it is defined produces an infinite amount – incidentally I suspect ZPE doesn’t actually exist anyway, so the mathematical point here may be moot.
If you regard particles as a higher concentration of waves, then there is no longer a point where you can say the particle is – it’s something more like a Gaussian bell-curve in 3 dimensions and the centre is more particle-like and the outskirts more wave-like, and there is no hard border between the two but a gradation. No ZPE, and no inconvenient infinities either apart from the theoretical edge of the wavefunction at infinity as is standard in quantum mechanics. If the limit to the wavefunction is also not infinite, but stops (has a zero node) at the Hubble radius, then not only is another inconvenient infinity removed but that also implies that the momentum (inertia) of that particle will be changed from a continuous range to one with levels – it becomes quantised. If you look at Mike’s latest article at you’ll maybe get a better idea of why the Dark Matter hypothesis can’t be true, based on observations of binary stars.
This is why I’m somewhat enthusiastic about Alzofon’s UFT, in that it fits with Mike’s theory nicely and there is experimental evidence that Mike is more right than the other theories. It thus seems worth doing the experiments to find out if the other predictions pan out – maybe we really can control inertia and gravity if this description is close-enough to the truth. Add in Ron Hatch’s evidence from the equations for time that GPS actually uses (that goes against what GTR says) which shows that there are “preferred” frames of reference that will give the right answers for time-delays in relativistic time calculations, and that this also fits in with Alzofon and McCulloch, and we might have a useful description that gives the right answer in most, if not all, situations. From my point of view, not being a mathematician, it also gives an engineering solution for how to actually make things happen that seem like magic at the moment. If you know why inertia and gravity are there in the first place, then it’s a bit easier to figure ways to control them.
19. jim2 says:
Simon. It is interesting you say a particle in QM is modeled as a point. In the Schrodinger equation, there are position variables x and y, but no description I see of the structure of the particle proper. What am I missing?
20. Simon Derricutt says:
Jim2 – The Schrödinger equation says the probability of the point particle being at a particular position. A point of course has no structure. As such, it gives much the same results as a spread-out particle, except for being regarded as a point whose position is uncertain. If instead you regard it as actually being a spread-out particle, then it makes more sense (as I see it, anyway) but removes the comfort of having a point to work with.
I’m approaching an understanding of this, but not yet clear enough that I can explain it to someone else (preferably a kid) in a way that can be grokked. May take a while to reach that state.
21. jim2 says:
The SE uses x and y for the position of the particle, but does not specify the extent of the particle. So, the center of an extended particle could still be at x and y, and the uncertainty in position dictated by the wave equation would still apply.
22. Simon Derricutt says:
Jim2 – if you use the idea of a spread-out particle, then where ‘the particle” is becomes necessarily fuzzy. If you then use the Schrödinger equation to provide a fuzzy description of where “the particle” is, then we have two layers of fuzziness and the uncertainty would be around twice as large. That’s why the SE is based on a point particle. That’s at least my understanding.
As far as I can tell, there aren’t any real infinities, and there aren’t any geometrical point particles either. There are also no step-functions where a force suddenly appears (such as hitting the surface of a billiard-ball type particle), but instead forces have ranges over which they change. The pictorial representations we grew up with of little balls hitting each other or atoms similar to the Solar system are really only first-level approximations, and the reality is somewhat different and much more fuzzy.
23. p.g.sharrow says:
@Simon; In electronics/electrics we are taught that as we push against the Universe it pushes back. A seeming perfect elastic, the stronger the electrical effort, the more powerful the rebound. Mass/Inertia is a storage of energy that must be dealt with. We use this as a tool to create and manage electrical/electronic effects. Time to use these effects to manage Mass/Inertia.
Think of a wave form as bubble of density, most dense at the center and least dense at the surface of that bubble.
The higher the frequency the smaller the bubble.
The greater the voltage the more dense the bubble.
The more dense the bubble the more Mass/Inertia.
From what I can see you are doing better at explanation to others than I.
24. E.M.Smith says:
We model gravity as originating from a point in the center of the Earth, just to make the math easy, but that doesn’t make it right…
25. p.g.sharrow says:
IIRC, while I was in high school an experiment was conducted on the face of Half Dome. A line was dropped from top to bottom, and it was bowed 1/2 inch from straight toward the mass of the mountain, or at least that was the claim. :-) …pg
26. Larry Ledwick says:
The behavior of a force or other field that obeys the inverse square law also depends on the relative scale of the source and the particle (or observer) being acted on.
For a practical example, if a light source is small compared to your distance from it, the light fall-off (or the field of some force) would vary as the inverse square law, but as noted by EM, if you are very close to a very large source the inverse square law breaks down. If you are close to an infinite surface that radiates light, there is no drop-off in light intensity as you move away, as the farther back you go, the more surface area is illuminating your point of observation. The same sort of behavior would apply to forces.
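A quick numerical check of the no-drop-off claim, integrating the perpendicular component of an inverse-square field over a large uniform disc; the constants and the disc size are arbitrary illustrative choices:

    import numpy as np

    def g_axis(z, R, n=200_000):
        """On-axis field of a uniform disc of radius R at height z (only the perpendicular part survives)."""
        r = np.linspace(0.0, R, n)
        integrand = 2 * np.pi * r * z / (r**2 + z**2) ** 1.5
        return np.sum(integrand) * (r[1] - r[0])

    R = 1000.0
    for z in (1.0, 2.0, 4.0, 8.0):
        print(z, g_axis(z, R))     # stays close to 2*pi while z << R: no inverse-square fall-off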
27. jim2 says:
The force of gravity at the center of the Earth would be zero, or pretty darn close. Not because it is the center per se, but that gravity from surrounding mass cancels out.
28. Simon Derricutt says:
Jeremy – thanks for the reference to the standard derivation of the shell theorem. That’s something I absorbed a very long time ago, but I think there has to be an error in it somewhere. It seems true inside the shell, but maybe not outside it. The reason for that is that just outside the shell a lot of the mass is pulling off-axis, so it seems that the result of the whole mass acting as if it were at the centre should be an approximation that is valid only for a separation quite large relative to the radius of the shell. The derivation implies that it is exact for all separations. May be a mismatch between the angle subtended by the rings at the external point and the area/mass of the ring subtended from the centre of the shell, but I’ll need to spend some time on this to be certain I haven’t made a mistake.
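That worry is straightforward to put to a numerical test: scatter points uniformly over a spherical shell and sum the inverse-square pulls on an external point (G and the shell mass set to 1; the radius and distance are arbitrary example values):

    import numpy as np

    rng = np.random.default_rng(3)
    n, a, d = 2_000_000, 1.0, 1.5             # samples, shell radius, distance of external point from centre

    v = rng.standard_normal((n, 3))
    pts = a * v / np.linalg.norm(v, axis=1, keepdims=True)   # uniform points on the shell

    p = np.array([0.0, 0.0, d])
    diff = pts - p
    r = np.linalg.norm(diff, axis=1, keepdims=True)
    accel = (diff / r**3).mean(axis=0)         # net pull on the external point, total shell mass = 1

    print(accel)                               # ~ (0, 0, -1/d**2): same as a point mass at the centre
    print(-1.0 / d**2)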
pg – I wonder how they knew it was 1/2″ out? It’s not as if you can tell using a spirit-level or any normal way of checking vertical. For inertia, the explanation of why it happens may give us a way to change the effects of it. If my logic is correct, then momentum is not a conserved quantity but in most situations is actually conserved because the fields that transfer it are not changing quickly. That has some pretty far-reaching consequences, and also implies that energy is not absolutely conserved either. For something like this we need solid experimental evidence, so I’m working on that.
Larry – also works for the gravity produced from an infinite plane, or at least a good approximation on a very large almost-flat surface. Also fun looking at what happens with a ring-shaped planet where a moon could have an “orbit” passing through the middle of the ring in a straight line, or two rings like a Helmholtz coil.
Jim2 – though the gravitational attraction at the centre would be zero, it seems to me that the gravitational field would be high and thus time would run slower the nearer to the centre you went. Much the same at the Lagrange points where gravitational force is zero (so according to GTR space isn’t curved and thus time should run at full rate), but I think that time should in fact run slower. This can be experimentally verified, and the data might be around already, but I haven’t found it yet. Maybe when one of the GPS satellites passes near to the Moon on its orbit there is an extra correction needed. If such correction is for the clock running faster, then GTR is correct, but if it runs slower then GTR is wrong.
29. p.g.sharrow says:
@Simon; Actual data indicates “time” slows as energy density increases.
This concept seems to me to be an artifact of our definition of time rather than actuality.
Atomic process speed changes as energy density changes, rather than actual time as an absolute. I would prefer to think of time as absolute and look to the causes of the process speed changes as the thing of interest. The results in our actual world would be the same, but a change in POV gives different results in logic.
This artifact in atomic process speed that has been observed, as well as the apparent artifact that galaxies rotate as a solid, which implies that a great deal of the Mass of the Universe is unaccounted for, indicates to me that Mass/Inertia must be an external effect that is based on the stuff of space “Aether” under stress, a thing that is there even if we can’t touch it. Local density of matter increases, local density of gravity increases, and the speed of atomic processes slows. Absolute time becomes a measuring tape rather than an elastic line as we examine cause and effect…pg.
30. Simon Derricutt says:
pg – yep, I agree that there’s a problem with our definition of time, and that what we really mean when we say time slows is that all the processes (including swinging pendula or any other method of measuring time) run slower. I also see time as an absolute, and that it’s just things taking longer to happen, but that’s not the way it’s normally described. It seems that the more mass/energy there is in a location, the slower those processes will run, and since that seems to happen for all processes we know of then it’s simply described as time running slower.
The apparent problem with galaxy rotation can either be explained by saying there’s a lot of gravitational matter we can’t see (the Dark Matter theory) or instead we might notice that there’s a minimum acceleration visible (around 10^-10 m/s²) that implies that inertia isn’t what we thought it was. If you check Mike McCulloch’s latest blog about wide-spaced binary stars, there’s the same problem there and it seems having some Dark Matter lump carefully positioned between those stars might be somewhat difficult to arrange. I thus think inertia isn’t what we thought it was, and that it seems most likely that it is quantised. There’s thus a minimum acceleration possible, and this is related to the current size of the universe. This is a bit like the Copernican revolution – the new idea is a lot simpler and doesn’t need a lot of fudging to make it work (such as smaller and smaller epicycles on the earlier epicycles as measurements get better and we need to correct things, but there’s no reason for the epicycles and you find what they are by measurement). Mike’s reason for inertia is a lot simpler to calculate, and doesn’t need any fudging. As such, and given the fact that so far no sign of Dark Matter has turned up despite a lot of expensive searching, I’d relegate Dark Matter to the same status as fairies at the bottom of the garden. In any case, at the moment it cannot be predicted how much Dark Matter a particular galaxy must contain – it’s found by measuring the galaxy and seeing how much Dark Matter is needed to make it work the way it’s seen to work. Since Mike’s idea also gives us the possibility of being able to manipulate inertia (and Alzofon gives us a method to do so), then it seems worth exploring the ideas somewhat further and seeing how far they hold up. I’ve little doubt that Mike’s calculations will work for all galaxies including those way back in time when the universe was a lot smaller – it’s been checked for a few already.
Relating what happens here and now to the size of the current universe seems a little far-fetched. How can some boundary around 13.8 billion light-years distant affect us? However, that seems to fit what we see, and we’ve known of (and struggled with the idea of) “spooky action at a distance” for a long time. That in itself implies that there’s a universal clock, and that the relativistic changes to clock rate can be compared to this clock if we want to. It also implies instantaneous exchange of some (quantum) data, and that in turn may help explain the Pauli Exclusion Principle, which effectively states that the wave-functions of all the electrons (and other Fermions) in the universe must be different. If the wave-function of each electron does in fact spread to the Hubble radius, then in order to see an electron it must be different from all other electrons. Gets a bit mind-bending, really. This apparent instant transmission goes against our normal experience of everything having a definite velocity, which is maybe why a lot of this stuff is non-intuitive.
31. H.R. says:
You can’t have time without motion.
Time is the reflection of motion.
32. cdquarles says:
And the before and the after was the first epoch. ;p. Time must exist where there are mutable things. A before and an after. For immutable things, time does not exist, functionally. Change is the key, I say, not so much motion. Motion is a subset of change. Hmm, that raises a question. Is time self similar at all scales?
33. p.g.sharrow says:
@Simon I have deliberately ignored examining the work of others, to independently arrive at a model from facts apparent to me. From your descriptions it appears there are a number of people that have arrived at a similar conclusion from their examinations. I will leave it to you to look over everyone’s shoulder for your own insight. I will continue my own independent path and hopefully help you in yours. Someone will get lucky and get real dependable results and this thing will take off. Once demonstrated, anyone can do it! Of that I am sure. 8-) The key is in RF, a field that you are much more familiar with than I …pg
34. Simon Derricutt says:
pg – I’ve looked at a lot of models. If you look at what they predict that is different to standard models, and look to see what experiments have been done and whether the standard models or the alternatives got closer, in general the alternatives don’t do that well (Mike McCulloch’s ideas being a notable exception, and maybe Alzofon’s being a major step forward). It remains though that in order to get a different result, you need to do something that hasn’t been tried before. As such, your disc may produce some interesting results – a lot of stuff isn’t known or is imperfectly known, but the current theories generally work well-enough to the limits they have been tested. You find out new stuff by going beyond the previously-tested limits, and maybe the theories are still good, maybe they aren’t. The disc definitely goes beyond what’s been done previously.
Yep, the key for violating CoM is in RF, and I’m lacking the necessary decades of experience to get the designs right first time. Still, once it’s shown to work, there are people with the experience that can be brought in. Gravity and inertia control also need to be shown to work (if they do) after which it can be developed by people who are more competent in RF design.
There may be something odd with scalar (longitudinal) waves. So far the stuff I’ve looked at isn’t conclusive, and the experimental evidence conflicts with my experience of never seeing doubled signals (one at light-speed, and an earlier FTL one) in any situation. It would be surprising if such a thing would be missed by all the communications engineers that have worked over the years. Achieving the right antennae for launching such a wave and receiving it is a little difficult, but not majorly so – took me a few days of thinking of the needs of such an antenna to get a good design.
It’s interesting that FTL data transmission is built-in to quantum mechanics (entanglement), so really it’s mainstream thought even though it’s regarded as fringe when taken on its own. As such, I expect that mainstream physics will at some point find a way to use it. In some ways, that’s built in to the structure of a quantum computer anyway, though at the moment that’s only a small distance of operation. It’s a strange universe when you start digging in the foundations, and it seems likely that some of the things we used to think were impossible will turn out to be possible after all. I think the future will likely have FTL transmissions of data, teleportation, energy produced from nothing, interstellar travel, and quite a lot of the stuff that is currently a sci-fi dream. Maybe we’ll even see some of that come true.
35. E.M.Smith says:
So if a rock sits still in space it stops time?… (rhetorical – I know the atoms still move…)
Fractal time? How would one test it…
IIRC, one of the things about Tesla’s work was the extremely high frequency you can get with a spark gap. Now we regularly make GHz gear without such a gap (and it is showing a tendency to screw with things like our genes… so be careful…) There are also THz amplifiers and such on the horizon. I wonder if we’re near the point where a non-spark model of his gear could be built…
If “whatever” he ran into is frequency dependent, increasing at higher speeds, I’d think there would be hints in the way extremely HF gear must be designed (as in perhaps they must design it to avoid some of the odd bits he saw…)
Saw a demo of QM entanglement and some other exotic stuff. Spookiest one was a superconducting disk, positioned over a magnetic track. Does Not Fall. Nudge it, it proceeds to skim along the track maintaining height above it. Then the wacky bit, turn the track upside down, it “hangs” in mid air below the track. Neither rising nor falling. But will still race around the track if nudged. So it isn’t being just stuck there but being unable to move in the mag field, nor is it being attracted. It’s just “wrong” given all the expectations of gravity and magnetism. Yet it moves…
This looks like the video.. Quantum Locking:
36. cdquarles says:
About testing time for fractal properties, I don’t know; given that I am a finite being that is mutable and I exist within the system. There was a Scientific American article many years ago where a 3-D being moved through a 2-D space, with the question being: “What would a 3-D being moving through 2-D space look like to an intelligent being that was limited to 2-D experience?”, which I found fascinating, at the time.
How does that saying go? Yes, “Think outside of the box”, so that’s where I was going. Recall, when you change the axioms (which we know are true by faith, by the way, via induction), and/or change the conditions, you change the results.
37. H.R. says:
@cdquarles – I’m having a difficult time thinking of something that changes without motion. Yes, because there are quarks, muons, gluons, and poo-ons (OK, I made that one up.) something moved or there wouldn’t be a change.
Now there’s a $64,000 question: “Is time self similar at all scales?”
As you pointed out, it’s the interval between before and after. People a lot smarter than me have pondered time in the absence of an observer of an interval. They have also come up with ways to describe how a given interval looks different to observers watching the interval from different positions. But in the end, the observers are just reporting on an interval, and an interval is an interval is an interval.
Where ‘time’ got interesting to me was in that quantum levitation video that E.M. posted. Does time exist in the field between the locked objects? Time exists for the object, but what about in the field? I dunno… but if you were in the field between the objects, could you be just anywhere you wanted to be? If the locked objects were on either side of the universe, would instantaneous travel to any point be possible?
38. p.g.sharrow says:
@cdquarles; I remember reading that article. Interesting effort of representing a 2dimension visualization of a 3dimension world on a 2dimension medium. 8-)
As to thinking outside the box. I find most people have no problem visualizing the box. But, what is inside the box is the thing that escapes them.. ;-) …pg
39. H.R. says:
@cdquarles and p.g.: I saw that 3-D object through 2-D space demonstrated in a yoo-tube video just last week. At the end of one of the videos posted here, the pop-up follow-on videos had that demonstration. I didn’t bookmark the video and cannot recall the title or presenter. 😞
40. jim2 says:
SD and PGS! Darpa has funded your guys!!! Let’s hear some whoopin’ and hollerin’!!!
The Defense Advanced Research Projects Agency (DARPA) recently awarded a $1.3 million contract to an international team of researchers to study quantized inertia, a controversial theory that some physicists dismiss as pseudoscience.
41. cdquarles says:
@ H. R., about something that changes without motion, well, define motion. Consider a state change that does not change the total kinetic energy of a sample of matter.
42. p.g.sharrow says:
jim2; I would not consider these as our guys. Looks more like a deliberate waste of Grant money. They make some of the right buzz words but are wandering off in the wrong direction. Their proposed experiment will fail because of their fundamental error…pg
43. E.M.Smith says:
Whoop Whoop Holler holler…
And hoping they can do it right…
44. jim2 says:
Sorry, I thought Mike McCulloch was one of “your”guys??
Quantized inertia was proposed by Mike McCulloch in 2007, but it is still considered a fringe theory by many, if not most, physicists today. McCulloch has used the theory to explain galactic rotation speeds without the need for dark matter, but he believes it may one day provide the foundation for launching space vehicles without fuel.
The DARPA grant will allow McCulloch and a team of collaborators from Germany and Spain to undertake a series of experiments that will apply QI in a laboratory setting for the first time.
45. Simon Derricutt says:
Jim2 – “SD and PGS! Darpa has funded your guys!!! Let’s hear some whoopin’ and hollerin’!!!”
I thought we’d discussed that a bit earlier. Mike says he’ll use the money initially to employ a maths grad student to help him sort out the theory side, and is planning for a year on that task. Once they’ve sorted that task, they’ll be going on to experimental tests. I’ve made sure Mike is aware of Alzofon’s UFT, and understanding that and using it properly will need a pretty good mathematician, so if Mike is lucky and manages to find someone with a sufficiently-open mind and also good at the maths, we might see some amazing results. If I get interesting results on my experiments, he’s said he’d like to know about them, so there’s also a chance he’ll replicate them with somewhat more credibility than I’ll have.
Mike’s theory is fringe because it gives a totally different reason for things happening the way they do. The reason also isn’t intuitive. Let’s face it, Dark Matter is a simple solution that’s easily understandable – there’s some extra stuff we can’t see, and we know that there’s likely a lot we can’t see anyway. Still, if you look at the wide binary stars, they have the same minimum measured acceleration, and the Dark Matter hypothesis requires therefore that they also have a lump of Dark Matter between them without spilling outside the mutual orbit. That gets difficult to explain why it’s only there and not elsewhere. MoND gives a good fit to the experimental data too, but has adjustable parameters to make it fit and no basic reason why it works. It’s a bit like using epicycles to describe the movements of the planets – it gives the right answers but no explanation for those epicycles.
Mike’s theory makes testable predictions. It’s thus falsifiable if the predictions don’t match the experimental results. That also applies to Alzofon. Dark Matter is by its nature not falsifiable – if the experimental evidence doesn’t match, they have parameters they can fudge to make it match. That’s why it’s taken so long to find the stuff – it’s somewhat hard to show the existence of something that isn’t there. Lots of grants available to try to find Dark Matter, though, so it will take a while to kill off the idea even if Mike starts showing good results.
46. jim2 says:
Dark matter does seem a kludge. But it could be those stars were trapped by a glob of dark matter and same explanation for galaxies. Dark matter certainly doesn’t spring from fundamentals.
47. Simon Derricutt says:
Jim2 – agreed that Dark Matter is a possible explanation, but though you can get it to work for galaxies I think it’s a lot more difficult for the binary stars. I think there are a few ternary stars around, too. You really need to say that the Dark Matter captured those stars and thus that the orbit is around the Dark Matter and not each other, but then you should have the two orbits decoupled from each other too and other phase-locking than 180°.
QI is very much cleaner, and predicts far more anomalies, but requires us to consider inertia as quantised whereas before we’d always considered it continuous. The relative simplicity of the equations involved makes QI more likely as the explanation IMHO, and it also fits nicely into the rest of QM. You no longer have one explanation for small distances and a different and incompatible one for very large distances.
48. cdquarles says:
Hmm, a question. Is velocity quantized? That’s possible, under certain conditions. We need to remember that just like a map of a territory isn’t the territory, a mathematical relation about real entities (which don’t always have to be physical/material) isn’t the same thing as real entities.
49. Simon Derricutt says:
CDQ – it’s actually the acceleration that is quantised as such, which implies that velocity can’t be quantised too. The map/territory problem is always there, though, and it’s possible that we’re using a different logic than nature does. About the only guide that is reliable is that we can’t have a paradox, and that different observers will see the same actual things happen (though not necessarily in the same time-order). Since I’ve recently had to dump a couple of axioms that I’d thought were always true (momentum and energy both being conserved) and made them conditional on circumstances, the map at the moment seems somewhat unreliable. With enough people working on it, I think we’ll arrive at a better map though. As regards CoM, I hope to have some news fairly soon. For the next bits, testing out whether we can control inertia and gravity, maybe early next year. They both throw a fairly large spanner into our understanding of the fundamentals, so it may take a while after that before it gets accepted. If it works, of course….
Comments are closed. |
a3b097bca0722a98 |
Controlling Quantum Wave Packet of Electronic Motion on Field-Dressed Coulomb Potential of H2+ by Carrier-Envelope Phase-Dependent Strong Field Laser Pulses
Mohammad Noh Daud, University of Malaya
Solving numerically a non-Born-Oppenheimer time-dependent Schrödinger equation to study the dissociative ionization of H2 subjected to strong-field six-cycle laser pulses (I = 4 × 10^14 W/cm2, λ = 800 nm) leads to new ultrafast images of electron dynamics in H2+. The electron distribution in H2+ oscillates symmetrically with the laser cycle, with θ + π periodicity, and gets trapped between the two protons for about 8 fs by a Coulomb potential well. Nonetheless, this symmetrical electron distribution breaks up for H2+ internuclear separations larger than 9 a.u. in the field-free region, at a time of 24 fs, as a result of the distortion of the Coulomb potential, where the ejected electron preferentially localizes in one of the two wells of the double-well potential separated by the inner Coulomb barrier. Moreover, controlling the laser carrier-envelope phase θ enables one to generate the highest total asymmetry Aetot of 0.75 and -0.75 at θ = 10° and 190°, respectively, associated with the electron preferentially being ionized along the left or the right path along the H2+ molecular axis. Thus the laser-controlled electron slightly reorganizes its position to track the shift in the position of the protons, despite the proton’s much heavier mass.
Peer review status: UNDER REVIEW
04 Mar 2021: Submitted to International Journal of Quantum Chemistry
04 Mar 2021: Submission Checks Completed
04 Mar 2021: Assigned to Editor
09 Mar 2021: Reviewer(s) Assigned
09 Mar 2021: Review(s) Completed, Editorial Evaluation Pending
09 Mar 2021: Editorial Decision: Revise Minor
16 Mar 2021: 1st Revision Received
15 Apr 2021: Submission Checks Completed
15 Apr 2021: Assigned to Editor
15 Apr 2021: Reviewer(s) Assigned |
7ce24278aa65512b |
Macroscopic dynamics of a trapped Bose-Einstein condensate in the presence of 1D and 2D optical lattices
M. Krämer, L. Pitaevskii and S. Stringari
Dipartimento di Fisica, Università di Trento, and Istituto Nazionale per la Fisica della Materia, I-38050 Povo, Italy
Kapitza Institute for Physical Problems, ul. Kosygina 2, 117334 Moscow, Russia
August 18, 2020
The hydrodynamic equations of superfluids for a weakly interacting Bose gas are generalized to include the effects of periodic optical potentials produced by stationary laser beams. The new equations are characterized by a renormalized interaction coupling constant and by an effective mass accounting for the inertia of the system along the laser direction. For large laser intensities the effective mass is directly related to the tunneling rate between two consecutive wells. The predictions for the frequencies of the collective modes of a condensate confined by a magnetic harmonic trap are discussed for both 1D and 2D optical lattices and compared with recent experimental data.
The experimental realization of optical lattices [1, 2, 3, 4, 5, 6] is stimulating new perspectives in the study of coherence phenomena in trapped Bose-Einstein condensates. A first direct measurement of the critical Josephson current has been recently obtained in [3] by studying the center of mass motion of a magnetically trapped gas in the presence of a 1D periodic optical potential. Under these conditions the propagation of collective modes is a genuine quantum effect produced by the tunneling through the barriers and by the superfluid behaviour associated with the coherence of the order parameter between different wells. The effect of the optical potential is to increase the inertia of the gas along the direction of the laser giving rise to a reduction of the frequency of the oscillation.
The purpose of the present work is to investigate the collective oscillations of a magnetically trapped gas in the presence of 1D and 2D optical lattices taking into account the effect of tunneling, the role of the mean field interaction and the 3D nature of the sample. Under suitable conditions these effects can be described by properly generalizing the hydrodynamic equations of superfluids [7].
Let us assume that the gas, at , be trapped by an external potential given by the sum of a harmonic trap of magnetic origin and of a stationary optical potential modulated along the -axis. The resulting potential is given by
where , , are the frequencies of the harmonic trap, is fixed by the wavelength of the laser light creating the stationary 1D lattice wave, is the so called recoil energy and is a dimensionless parameter providing the intensity of the laser beam. The optical potential has periodicity along the -axis. The case of a 2D lattice will be discussed later. In the following we will assume that the laser intensity be large enough to create many separated wells giving rise to an array of several condensates. Still, due to quantum tunneling, the overlap between the wave functions of two consecutive wells can be sufficient to ensure full coherence. In this case one is allowed to use the Gross-Pitaevskii (GP) theory for the order parameter to study both the equilibrium and the dynamic behaviour of the system at zero temperature [8]. Eventually, if the tunnelling becomes too small, the fluctuations of the relative phase between the condensates will destroy the coherence of the sample giving rise to new quantum configurations associated with the transition to a Mott insulator phase [2, 6].
In the presence of coherence it is natural to make the ansatz
for the order parameter in terms of a sum of many condensate wave-functions relative to each well. Here is the phase of the -component of the order parameter, while and are real functions. We will make the further periodicity assumption where is localized at the origin. The above assumptions for and are justified for relatively large values of where the interwell barriers are significantly higher than the chemical potential. In this case the condensate wave functions of different sites are well separated (tight binding approximation).
Using the ansatz (2) for the order parameter one finds the following result for the mean field expectation value of the effective Hamiltonian :
where in the two-body and in the magnetic interaction terms as well as in the radial kinetic energy we have ignored the overlap contributions arising from different wells. In the evaluation of the axial kinetic energy and of the optical potential term we have instead kept also the overlap terms originating from consecutive wells. These are proportional to the quantity
related to the tunneling rate and responsible for the occurrence of Josephson effects.
By setting (groundstate configuration), the variation of with respect to yields the differential equation
where is introduced to ensure the normalization condition which implies that the functions are normalized to the number of atoms occupying each site: . In eq. (5) we have ignored the contribution arising from the two-body interaction. Estimates of [9] show that this is a good approximation already at moderately large . We have also neglected the external magnetic potential which is justified if . Since in the following we are interested in the low energy excitations of the system we will always keep the function equal to the groundstate solution of (5).
In order to discuss the macroscopic properties of the system, including its low energy dynamics, it is convenient to transform the discretized formalism described above into the one of continuum variables. This is obtained through the replacement in the various terms of the energy. Through such a procedure one naturally introduces a smoothed or "macroscopic" density defined by
with , and a smoothed phase ().
By applying the smoothing procedure to eq.(3) we obtain the following macroscopic expression for the energy functional
where we have introduced the renormalized coupling constant we have neglected quantum pressure terms originating from the radial term in the kinetic energy and we have set . We have also omitted some constant terms (first two terms in eq. (3)) which do not depend on or on .
With respect to the functional characterizing a trapped Bose gas in the absence of optical confinement, one notices two important differences. First, the interaction coupling constant is renormalized due to the presence of the optical lattice. This is the result of the local compression of the gas produced by the tight optical confinement, which increases the repulsive effect of the interactions. Second, the kinetic energy term along the lattice direction no longer has the classical quadratic form as in the radial direction, but exhibits a periodic dependence on the gradient of the phase. By expanding this term for small gradients, which is the case in the study of small amplitude oscillations, one derives a quadratic term characterized by the effective mass
where is defined by eq. (4). Notice that within the employed approximation the value of , and hence of , does not depend on the number of atoms, nor on the mean field interaction.
The equilibrium density profile, obtained by minimizing eq.(7) with has the typical form of an inverted parabola [10]
which conserves the aspect ratio of the original magnetic trapping. The size of the condensate has instead increased since . For large the increase of the coupling constant can be large ( [9]). However, since the radius of the sample scales like the 1/5-th power of the resulting increase in the size of the system is not very spectacular (for we find an increase of the size by for the experimental setting of [3]).
The functional (7) can be used to carry out dynamic calculations. In this case one needs the action with the second term given by . The resulting equations of motion are obtained by imposing the stationarity condition on the action with respect to arbitrary variations of the density and of the phase . The equations take the form
In particular, at equilibrium these equations reproduce result (9) for the equilibrium density. Furthermore, Josephson-type oscillations are among those captured by eqs. (10) and (11). To see this consider the case of a uniform gradient of the phase along , , where is a time-dependent parameter. From eqs. (10) and (11) one can then derive equations of motion for the center of mass and for the conjugate momentum variable [3, 11]
which have the typical Josephson form.
In the limit of small oscillations the solutions of eqs. (10) and (11) have the form with obeying the hydrodynamic equations:
where is the chemical potential of the sample and is the equilibrium density (9) evaluated at the center. The solutions of (14) provide the low energy excitations of the system. In the absence of magnetic trapping one finds phonons propagating at the velocity , in agreement with the result obtained in [12] for a 1D array of Josephson junctions. In the presence of harmonic trapping the discretized frequencies of the time-dependent solutions of (14) do not depend on the value of the coupling constant. By applying the transformation , one actually finds that the new frequencies are simply obtained from the results of [7] by replacing
For an elongated trap () the lowest solutions are given by the center-of-mass motion and by the quadrupole mode . The center-of-mass frequency coincides with the value obtained from eqs. (12) and (13) in the limit of small oscillations. Concerning the quadrupole frequency we note that the occurrence of the factor is a non-trivial consequence of the mean field interaction predicted by the hydrodynamic theory of superfluids in the presence of harmonic trapping [7]. In addition to the low-lying axial motion the system exhibits radial oscillations at high frequency, of the order of . The most important ones are the transverse breathing and quadrupole oscillations occurring at and respectively. For elongated traps the frequencies of these modes should not be affected by the presence of the optical potential. Different scenarios are obtained for disc-shaped traps (). The above results apply to the linear regime of small oscillations. Eqs. (12) and (13) show that in the case of center-of-mass oscillations, the linearity condition is achieved for initial displacements of the trap satisfying , a condition that becomes more and more severe as the laser intensity increases. For larger initial displacements the oscillation is described by the pendulum equations. For very large amplitudes the motion is however dynamically unstable [11, 13].
From the previous discussion it emerges that the effective mass is the crucial parameter needed to predict the value of the small amplitude collective frequencies. An estimate of can be made by neglecting the magnetic trapping as well as the role of the mean field interaction. Within this approximation the effective mass is easily obtained from the excitation spectrum of the Schrödinger equation for the 1D Hamiltonian , avoiding the explicit determination of the tunneling parameter (4). One looks for solutions of the form where is the quasi-momentum of the atom and is a periodic function of period . The resulting dispersion law provides, for small , the effective mass according to the identification . The value of , which turns out to be a universal function of the intensity parameter , has been evaluated for a wide range of values of (see fig.1). These results for can be used to estimate the actual value of the collective frequencies. The method described here to calculate is expected to be reliable not only for very large laser intensities when the tight binding approximation applies and the effective mass can be expressed in terms of the tunneling rate (see eqs. (8),(4)), but also for smaller values of . Of course for very small laser intensities, as in the experiment [14], the determination of requires the inclusion of the mean field interaction and of the magnetic trapping through the explicit solution of the GP-equation.
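As a concrete illustration of the band-structure estimate just described, the following Python sketch (not part of the original paper; function and variable names are our own) diagonalises the 1D Hamiltonian with potential s·E_R·sin²(qz) in a plane-wave basis and extracts the lowest-band effective mass from the curvature of the dispersion at small quasi-momentum. The printed factor sqrt(m/m*) is, under the replacement discussed above, the rescaling of the axial trapping frequency.

```python
import numpy as np

def effective_mass_ratio(s, n_pw=15):
    """m*/m of the lowest band for V(z) = s * E_R * sin^2(q z),
    by plane-wave diagonalisation (energies in units of E_R,
    quasi-momentum kappa = k/q); a sketch, not the authors' code."""
    def lowest_band(kappa):
        n = np.arange(-n_pw, n_pw + 1)
        # kinetic term plus the constant s/2 from sin^2 = 1/2 - (e^{2iqz} + e^{-2iqz})/4
        H = np.diag((kappa + 2.0 * n) ** 2 + 0.5 * s)
        off = -0.25 * s * np.ones(2 * n_pw)   # coupling between neighbouring plane waves
        H += np.diag(off, 1) + np.diag(off, -1)
        return np.linalg.eigvalsh(H)[0]

    dk = 1e-3
    curvature = (lowest_band(dk) - 2.0 * lowest_band(0.0) + lowest_band(-dk)) / dk ** 2
    return 2.0 / curvature   # E(kappa) ~ (m/m*) * kappa^2  =>  m*/m = 2 / E''(0)

for s in (1.0, 5.0, 10.0):
    m_star = effective_mass_ratio(s)
    print(f"s = {s:4.1f}   m*/m = {m_star:6.3f}   frequency scaling sqrt(m/m*) = {1.0/np.sqrt(m_star):.3f}")
```

For s = 0 the routine returns m*/m = 1 as it must, and the ratio grows rapidly with the laser intensity, which is the behaviour plotted in Fig. 1.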
In fig. 2 we compare our predictions for the frequencies of the center-of-mass motion with the recent experimental data obtained in [3]. The comparison reveals good agreement with the experiments. Our results also agree well with those obtained from the numerical solution of the time-dependent GP-equation [11, 13].
The above formalism is naturally generalized to include a 2D optical lattice where the optical potential is . The actual potential now generates an array of 1D condensates which has already been the object of experimental studies [4]. For a 2D-lattice the ansatz for the order parameter is [15]
In the TF-limit the groundstate smoothed density still has the familiar form with the redefined coupling constant , where is still given by the solution of eq. (5) and we have used the same approximations as in the 1D case.
Also with regard to dynamics, one can proceed as for the 1D lattice. One finds that the equations of motions, after linearization, take the form
The frequencies of the low energy collective modes are then obtained from those in the absence of the lattice [7] by simply replacing and . For large laser intensities the value of coincides with the one calculated for the 1D array. If , the lowest energy solutions involve the motion in the plane. The oscillations in the -direction are instead fixed by the value of . These include the center-of-mass motion () and the lowest compression mode () [7, 8]. The frequency coincides with the value obtained by directly applying the hydrodynamic theory to 1D systems [16, 17] and reveals the 1D nature of the tubes generated by the 2D lattice. If the radial trapping generated by the lattice becomes too strong the motion along the tubes can no longer be described by the mean field equations and one jumps into more correlated 1D regimes [18].
Stimulating discussions with F. Cataliotti, C. Fort, M. Inguscio, A. Smerzi and A. Trombettoni are acknowledged. This research is supported by the Ministero della Ricerca Scientifica e Tecnologica (MURST).
• [1] B.P. Anderson and M.A. Kasevich, Science 282, 1686 (1998).
• [2] C. Orzel, A.K. Tuchman, M.L. Fensclau, M. Yasuda, and M.A. Kasevich, Science 291, 2386 (2001).
• [3] F.S. Cataliotti, S. Burger, C. Fort, P. Maddaloni, F. Minardi, A. Trombettoni, A. Smerzi, M. Inguscio, Science 293, 843 (2001).
• [4] M. Greiner, I. Bloch, O. Mandel, T.W. Hänsch, T. Esslinger, Phys. Rev. Lett. 87, 160405 (2001).
• [5] O. Morsch, J.H. Müller, M. Cristiani, D. Ciampini, E. Arimondo, Phys. Rev. Lett. 87, 140402 (2001).
• [6] M. Greiner, O. Mandel, T. Esslinger, T.W. Hänsch, I. Bloch, Nature 415, 39 (2002).
• [7] S. Stringari, Phys. Rev. Lett. 77, 2360 (1996).
• [8] F. Dalfovo, S. Giorgini, L.P. Pitaevskii, and S. Stringari, Rev. Mod. Phys. 71, 463 (1999).
• [9] P. Pedri, L. Pitaevskii, S. Stringari, C. Fort, S. Burger, F.S. Cataliotti, P. Maddaloni, F. Minardi, M. Inguscio, Phys. Rev. Lett. 87, 220401 (2001).
• [10] The profile (9) can also be obtained by applying the smoothing procedure (6) to the equilibrium solution for given by eq.(8) in [9].
• [11] A. Trombettoni, PhD-thesis, SISSA, Trieste (2001).
• [12] J. Javanainen, Phys. Rev. A 60, 4902 (1999).
• [13] A. Trombettoni, A. Smerzi, unpublished.
• [14] S. Burger, F.S. Cataliotti, C. Fort, F. Minardi, M. Inguscio, M.L. Chiofalo and M.P. Tosi, Phys. Rev. Lett. 86, 4447 (2001).
• [15] In analogy with the results of [9] we find that, at equilibrium, the quantity is given by an inverted parabola as a function of . The number of particles occupying the corresponding site is given by with and .
• [16] S. Stringari, Phys. Rev. A 58, 2385 (1998).
• [17] T.-L. Ho and M. Ma, J. Low. Temp. Phys. 115, 61 (1999).
• [18] C. Menotti and S. Stringari, cond-mat/0201158.
Figure 1: Effective mass as a function of the laser intensity (see eq.(1)) calculated neglecting the effects of interaction and harmonic trapping.
Figure 2: Frequency of the center-of-mass motion for a condensate trapped by the combined magnetic and optical potential (1) as a function of the laser intensity. The circles and triangles are, respectively, the experimental and theoretical data of [3]. The triangles have been obtained by evaluating the tunneling rate within a Gaussian approximation for the order parameter in each well [3]. The solid line refers to our theoretical prediction.
|
d2ab1bab3ce3f9a6 | May 15, 2019
How to describe nuclear properties ab initio at a low computational cost?
Figure 1: Reach of ab initio methods: quasi-exact methods (orange), valence-space methods (green) and wave-function expansion methods (red).
The prediction of nuclear properties based on a realistic description of the strong interaction is at the heart of the ab initio endeavour in low-energy nuclear theory. Ab initio calculations have long been limited to light nuclei or to nuclei with specific proton and neutron numbers. Theoreticians from Irfu/DPhN have developed novel ab initio methods that have significantly increased the number of nuclei that can be accessed. The most recent one, called Bogoliubov many-body perturbation theory (BMBPT), provides a lightweight alternative capable of reaching the same accuracy as competing methods at a computational cost that is lower by two orders of magnitude. This has been achieved by allowing symmetries of the nuclear Hamiltonian to spontaneously break in the calculation. This exciting new development, paving the way for precise computations of heavier nuclei using reasonable computing resources, has recently been published in Physics Letters B [1].
Atomic nuclei are systems composed of nucleons, i.e. protons and neutrons, interacting via inter-nucleon forces. These forces emerge from strong interactions between constituent quarks and gluons, whose dynamics is described by the quantum field theory of Quantum Chromo Dynamics (QCD). Unfortunately, QCD displays a non-perturbative character at low energies characterising the realm of nuclear structure. In this context, the systematic and controlled description of the atomic nucleus poses a formidable task. This longstanding (and still unanswered) problem is at the heart of the so-called ab initio approach to the nuclear quantum many-body problem. It requires:
i) modelling the inter-nucleon interactions entering the A-body Schrödinger equation (an eigenvalue equation for the Hamiltonian where A is the number of nucleons composing the nucleus) with a sound connection to QCD
ii) developing mathematical methods allowing for accurate and controlled approximations of the exact solutions of the A-body Schrödinger equation.
Three decades ago, the seminal work of S. Weinberg paved the way for a systematic theory of inter-nucleon interactions anchored into QCD. He created a mathematical framework, called chiral effective field theory (EFT), which allows for constructing systematically improvable nuclear Hamiltonians1. Such Hamiltonians have nowadays replaced previous phenomenological models and have become the standard input to the A-body Schrödinger equation. Nevertheless, finding the solution of the Schrödinger equation for a large range of nuclei remains a highly non-trivial problem, both from a formal and a computational perspective. Therefore, such calculations have long been limited to light systems with mass number A ≲ 12.
Figure 2: Neutron states of 16O (doubly closed shell) and 18O (singly open shell) in the standard non-interacting shell model. Filled (open) circles correspond to occupied (unoccupied) single-particle states in the ground-state reference state.
Expanding the exact solution
Over the past 15 years, mathematical methods expanding the exact solution with respect to a simple mean-field reference state have been designed and, thus, enabled the description of heavier nuclei up to tin isotopes (Z = 50). However, these methods have remained limited until recently to nuclei with specific numbers of protons and neutrons: the so-called doubly closed-shell nuclei. Indeed, to first approximation, a nucleus can be described by a state obtained by filling up protons and neutrons on two sets of shells that can only accept a specific number of them each. When the proton and neutron numbers are such that the upper neutron and proton shells are entirely filled, the corresponding nucleus is coined as ‘doubly closed-shell’ (see Fig. 2). These nuclei are relatively more stable and simpler to describe than their neighbours as they allow the use of standard Slater determinants as reference states. In the past years, theoreticians from Irfu/DPhN have developed several different expansion methods allowing one to perform ab initio calculations of singly open-shell nuclei, i.e. nuclei whose upper proton or neutron shell is not fully occupied and that are relatively more challenging to solve for. This has extended the reach of ab initio calculations from a few tens to several hundreds of nuclei.
Breaking the symmetry of the Hamiltonian
The key idea behind these approaches is to allow the reference state to break a symmetry of the underlying Hamiltonian. For semi-magic nuclei, the relevant symmetry to be broken is the so-called U(1) global gauge symmetry, an abstract symmetry associated with the simple fact that nuclei are made of specific numbers of protons and neutrons. In these approaches, the system is first allowed to not have exactly Z protons or N neutrons in order to handle the complexity associated with the partially filled character of the upper shell. This idea leads to employing a so-called Bogoliubov reference state (solving the Hartree-Fock-Bogoliubov mean-field equations) that generalises the use of a simpler Slater determinant (solution of the celebrated Hartree-Fock mean-field theory). This allows one to capture from the outset the superfluid character of singly open-shell nuclei. While the breaking of U(1) symmetry is a standard tool in simple mean-field descriptions, it had never been applied in beyond-mean-field methods aiming at an accurate solution of the A-body Schrödinger equation and, thus, allowing for ab initio calculations. The most recent formalism developed by theoreticians from Irfu/DPhN consists of a perturbative expansion around the particle-number-breaking Bogoliubov reference state and is, thus, coined as Bogoliubov many-body perturbation theory (BMBPT). In Fig. 3, a systematic comparison of BMBPT results with other state-of-the-art methods, among which one has also been developed by the same group (ADC(2)), is shown for three different isotopic chains. While it is obvious that BMBPT performs extremely well against existing methods for both binding energies and two-neutron separation energies, it does so for a computational price that is two orders of magnitude lower. This makes BMBPT an extremely useful candidate for performing large survey calculations across the nuclear chart, which enables in-depth testing of next-generation nuclear Hamiltonians. At the same time, future extensions to even more challenging doubly open-shell nuclei are much simpler than in other frameworks.
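To give a flavour of what a low-order perturbative expansion looks like in practice, here is a minimal Python sketch of standard second-order many-body perturbation theory on a toy problem with random antisymmetrised two-body matrix elements. It is emphatically not the BMBPT formalism of the paper, which expands around a particle-number-breaking Bogoliubov vacuum rather than a Slater determinant, and all names and numbers are invented for illustration; it only shows why such expansions scale so mildly, since the cost reduces to a sum over two-particle/two-hole excitations.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_holes = 10, 4                           # toy single-particle basis
eps = np.sort(rng.uniform(-8.0, 8.0, n_states))     # toy single-particle energies
holes = range(n_holes)
parts = range(n_holes, n_states)

# Random antisymmetrised two-body matrix elements <pq|V|rs> (illustrative only)
V = rng.normal(scale=0.5, size=(n_states,) * 4)
V = V - V.transpose(0, 1, 3, 2)                     # antisymmetry in the ket indices
V = V - V.transpose(1, 0, 2, 3)                     # antisymmetry in the bra indices

# Standard MBPT(2) correlation energy:
# E2 = 1/4 * sum_{ij holes, ab particles} |<ij|V|ab>|^2 / (e_i + e_j - e_a - e_b)
e2 = 0.0
for i in holes:
    for j in holes:
        for a in parts:
            for b in parts:
                denom = eps[i] + eps[j] - eps[a] - eps[b]
                e2 += 0.25 * V[i, j, a, b] ** 2 / denom
print("toy MBPT(2) correlation energy:", round(e2, 4))
```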
Figure 3: Ground-state binding energies (top) and two-neutron separation energies (bottom) computed within second-order BMBPT along O, Ca and Ni isotopic chains. Results using other many-body methods are shown for comparison. Experimental values are shown as black bars. Larger deviations from experiment in mid-mass systems are due to approximations made in the construction of the input Hamiltonian that do not affect the conclusions of the benchmark.
In summary, the theoreticians from Irfu/DPhN have added a new quantum many-body method dedicated to the ab initio description of mid-mass open-shell nuclei that can compete with all previously available methods at a much lower computational cost. Being almost entirely developed at Irfu/DPhN, this newly designed method marks the increasing significance of the CEA theory group in the sector of ab initio nuclear structure theory.
[1] A. Tichai, P. Arthuis, T. Duguet, H. Hergert, V. Somà, R. Roth, Phys. Lett. B786 (2018) 195
Contact: Alexander TICHAI CEA-Saclay/Irfu/DPhN/LENA
1. Hamiltonian: mathematical operator describing the dynamics of interacting particles. In the case of the nuclear Hamiltonian, it is written as the sum of the kinetic energies of the A nucleons and the sum of the two-body, three-body, … interactions between the nucleons. Contrary to simpler cases, like Coulomb repulsion in electromagnetism, the strong interaction does not allow for writing down a closed analytical form of the corresponding potential in terms of spatial, spin and isospin degrees of freedom
|
45a847abcd1d4c8c | Mathematical Problems in Engineering
Research Article | Open Access
Volume 2019 | Article ID 2781437 | 21 pages |
Prediction of Ship Cabin Noise Based on RBF Neural Network
Academic Editor: Roberto G. Citarella
Received: 18 Nov 2018
Revised: 13 Mar 2019
Accepted: 25 Mar 2019
Published: 14 Apr 2019
Prediction of cabin noise for new types of ships and offshore platforms, based on measurement or simulation databases, is a common problem that needs a solution at the beginning of the design process. In this paper, we explore the use of a radial basis function (RBF) neural network to study this problem. Within the framework of the RBF network, we implement and compare several algorithms to devise a fast and precise cabin noise prediction model. We select a combination of algorithms after training the RBF with noise measurement samples. The results show that the RBF neural network trained using the DE algorithm has better prediction accuracy, generalization, and robustness than the others. Our work provides a new method for preliminary noise assessment during the schematic design phase and enables rapid analysis of vibration and noise control schemes for ships and offshore platforms.
1. Introduction
Increasing interest in environmental protection means that people pay more attention to the effects of vibration and noise from offshore platforms and ships on crew happiness, working environment, and physical health. Delaying the implementation of noise reduction measures needed to meet cabin noise requirements until after construction adds significant cost and affects both interior arrangements and weight [1]. Accurate prediction of cabin noise during the design or early construction phases is an urgent need.
For elaborate structures, such as ships and drilling platforms, the classical approach that requires defining and solving differential equations for each component is too complex. Therefore, most existing noise prediction is done using approximate methods to study sound radiating from dynamic systems. Multiple commercial acoustics software products, using different algorithms and frequency domains, are available, including VA One and LMS Virtual.Lab Acoustics. The main solution methods implemented in these products include the Finite Element Method (FEM), the Boundary Element Method (BEM), and the Statistical Energy Method (SEM). In shipbuilding and ocean engineering, the Statistical Energy Method can be used for compartment noise analysis of high-speed passenger ships, which helps identify the noise transmission routes and select noise reduction measures. The Boundary Element Method can be used for the analysis of ordinary shipboard ventilation systems, which helps characterize the noise frequency content of the pipework and how it changes with flow velocity. The Statistical Energy Method has also been used to predict the factors influencing noise on large semisubmersible ocean platforms and to propose noise control methods that meet the relevant specifications [2, 3]. In the automotive and aerospace industries, the commercial software and solution methods used are basically the same, although the frequency range and research objects may differ. Currently, different calculation methods are used for the low, intermediate, and high frequency ranges, respectively, which further improves the accuracy of the forecast [4]. However, at the early stage of ship design and ocean engineering, detailed design drawings are not yet available and the required differential equations cannot be formulated. The purpose of this paper is therefore to find a new method for the rapid and accurate prediction of ship compartment noise in the initial design stage, when detailed calculation parameters are absent; that is, to introduce artificial intelligence, or more specifically artificial neural network technology, into the field of compartment noise control.
2. The Working Mechanism of Radial Basis Function Neural Network and Algorithm
Artificial intelligence (AI) is a field devoted to the development of computer systems to simulate and extend human intelligence. Ideally, the goal is the creation of a machine that reacts in a way similar to human intelligence [5]. The study of neural networks is a branch within the field that models the neural network of the human brain using mathematical methods aimed at imitating the brain’s function and structure [6].
2.1. Radial Basis Function
The greatest advantage of artificial neural networks is their ability to approximate the behaviour of a system without specific knowledge of its internal operation. As long as sufficient training samples are provided, the neural network can approximate the desired mapping with arbitrary precision [7, 8]. Due to the complex structure and large number of vibrating machines on a ship, vibration and noise have a nonlinear relationship [9, 10]. The self-learning and self-adapting characteristics of a neural network can model this nonlinear system and predict the overall vibration and noise throughout the system. There is no need for geometric modeling or complex boundary and parameter settings. Once the neural network modeling the system is trained, it has all it needs to forecast the noise, which greatly shortens the time needed to make a prediction.
In 1988, Moody and Darken proposed the radial basis function (RBF) neural network for the precise interpolation of functions. The RBF network can approximate any nonlinear function, capture regularities in the system that are difficult to model explicitly, and converge quickly to a solution. It has been successfully applied in many fields, including pattern recognition, nonlinear function approximation, time series analysis, data classification, image processing, and control and fault diagnosis [11–13]. We use the RBF network in our approach to ship noise prediction.
The RBF network is known as a feedforward neural network with three layers of neurons, as shown in Figure 1. The first layer is the input layer, with the number of nodes equal to the dimension of the input vector. The second layer is the hidden layer, with the number of nodes depending on the complexity of the problem and being larger than the number of input layer nodes. The third layer is the output layer, with the number of nodes equal to the dimension of the output vector. The weight parameter represents the link between nodes. The weight vector exists only between the hidden and output layers. The arrow in the figure indicates the direction in which information propagates through the network. The activation function of the RBF network can be selected in various forms as shown in Table 1, and its curve shape is shown in Figure 2.
Activation function | Expression (r is the distance to the node centre, σ the width)
Gauss function | exp(−r²/(2σ²))
Reflected Sigmoid function | 1/(1 + exp(r²/σ²))
Inverse Multiquadric function | 1/√(r² + σ²)
The three functions in Table 1 are all radially symmetric. The function value decreases as the input moves away from the center, with the spread (or width) σ controlling how fast: the smaller σ is, the narrower the curve, the faster the function falls off, and the more selective the node. Because the Gaussian has the narrowest effective width, we generally choose the Gaussian function as the activation function of the RBF network. As can be seen from Figure 2, the output of a single hidden node can then be written as φ_j(x_i) = exp(−‖x_i − c_j‖²/(2σ_j²)), where x_i is the ith sample, N is the number of samples, c_j and σ_j are the center and width of the jth hidden node, and the dimension of x_i is D. Expanding the Gaussian into a power series shows that it corresponds to a feature mapping with infinitely many terms, so the feature space associated with the radial basis function is infinite-dimensional. The kth output of the network can then be expressed as the weighted sum y_k = Σ_j w_jk φ_j(x). As can be seen from (2)-(4), each hidden layer neuron responds to the input, and the output of the RBF network is the weighted sum of these responses. If the input is close to the jth hidden node center c_j, the response is large, which is equivalent to the hidden node being activated. If the distance is large, the response is almost zero, which is equivalent to the hidden node being suppressed. This feature is known as the "local mapping" feature, which makes the RBF network converge faster than "globally mapped" networks.
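As a compact illustration of this "local mapping" structure, the following Python sketch (variable names are ours, not from the paper) evaluates a Gaussian RBF network: each hidden node responds to the distance between the input and its centre, and the outputs are weighted sums of these responses.

```python
import numpy as np

def rbf_forward(X, centers, widths, W):
    """Forward pass of a Gaussian RBF network.
    X: (N, D) inputs; centers: (M, D) hidden-node centres;
    widths: (M,) expansion constants; W: (M, K) output weights."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # squared distances, (N, M)
    Phi = np.exp(-d2 / (2.0 * widths[None, :] ** 2))               # Gaussian activations
    return Phi @ W                                                 # (N, K) network outputs
```

For an input far from every centre the activations are essentially zero, which is exactly the local-response property described above.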
In order to design an RBF network for noise prediction, we must determine the number and coordinates of the hidden nodes, the expansion constants of the activation functions of the hidden nodes, and the weights of the output nodes. We will combine a variety of algorithms to train the network to achieve the quality required.
2.2. Gradient Descent Algorithm
The gradient descent method minimizes the error function by moving the parameters along the direction of steepest descent, and a numerical iterative scheme is generally adopted. In the update formulas, the three learning-rate parameters correspond to the variables being updated: the central coordinates of the hidden nodes, the expansion constants, and the network weights.
The learning rate should be set smaller (increasing accuracy) when the gradient falls faster and larger when the gradient is flat (increasing the convergence speed). Weights, center coordinates, and width are typically initialized to random numbers.
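A minimal full-batch version of these updates might look as follows. This is a sketch under the assumptions of a Gaussian activation and a single shared learning rate (as used later in Section 4.2); the analytic gradients are the standard ones for the squared error, not code from the paper.

```python
import numpy as np

def gd_step(X, T, centers, widths, W, lr=1e-5):
    """One full-batch gradient-descent update of centres, widths and weights
    of a Gaussian RBF network trained on targets T; returns the RMS error."""
    diff = X[:, None, :] - centers[None, :, :]            # (N, M, D)
    d2 = (diff ** 2).sum(axis=2)                          # (N, M)
    Phi = np.exp(-d2 / (2.0 * widths[None, :] ** 2))      # hidden activations
    err = Phi @ W - T                                     # (N, K) prediction error

    back = (err @ W.T) * Phi                              # (N, M) back-propagated signal
    grad_W = Phi.T @ err
    grad_c = (back[:, :, None] * diff).sum(axis=0) / widths[:, None] ** 2
    grad_s = (back * d2).sum(axis=0) / widths ** 3

    W -= lr * grad_W
    centers -= lr * grad_c
    widths -= lr * grad_s
    return float(np.sqrt(np.mean(err ** 2)))
```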
2.3. Particle Swarm Optimization Algorithm
The particle swarm algorithm treats each potential solution of the optimization problem as a particle in the search space and searches for the optimal solution by moving the swarm of particles through that space.
In a D-dimensional search space, m particles form a swarm; the position of the ith particle is represented by one vector and its velocity by another. The best position found so far by the ith particle and the best position found by the entire swarm are also stored. The particles are then updated according to the velocity and position update formulas, and any velocity component exceeding the maximum speed limit is clipped to that limit.
The nonnegative acceleration constants weight the cognitive and social terms, and the two random coefficients are uniformly distributed random numbers drawn anew at each update. The remaining quantities are the current position of the ith particle, the best position found so far by that particle, the best position found by the entire swarm, the current velocity of the ith particle, and the maximum speed limit.
The particle velocity update is divided into three parts. The first part is the influence of the current velocity (inertia), which connects the particle to its current state and balances global and local search. The second part is the influence of the particle's own cognition, that is, of its own memory; it gives the particles global search ability and helps them avoid falling into local minima. The third part is the influence of group information, which realizes information sharing and cooperation between particles.
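A bare-bones version of this three-part update could be written as follows (a sketch with illustrative parameter values; the random-weight variant discussed later simply redraws the inertia weight w at each iteration).

```python
import numpy as np

def pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0, w=0.7, v_max=0.5):
    """One PSO update: inertia + cognitive + social terms, with velocity clamping.
    x, v: (m, D) positions and velocities; pbest: (m, D); gbest: (D,)."""
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -v_max, v_max)      # enforce the maximum speed limit
    return x + v, v
```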
2.4. Differential Evolution Algorithm
The differential evolution algorithm generates a difference vector by randomly selecting two individuals from the parent population, weights it, and adds it to a third, randomly chosen parent individual. The resulting mutant vector is then crossed with the corresponding parent individual, and whichever of the two has the better fitness becomes the offspring.
The initial population is generated first: each component of the ith individual is drawn as a uniformly distributed random number within the allowed range, where NP is the population size and D is the dimension of the solution to the problem.
Next comes differential mutation, in which the weighted difference of two randomly chosen individuals is added to a third; here G denotes the generation index and F is the scaling factor.
Then comes the crossover operation: each component of the trial vector is taken from the mutant vector with probability CR, the crossover rate, except for one randomly selected component index, which is always taken from the mutant so that the trial vector differs from its parent.
Finally, the selection operation compares the trial vector with its parent and keeps whichever has the better fitness as the individual of the next generation.
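The mutation/crossover/selection cycle described above can be sketched in a few lines of Python (illustrative only; bounds handling and the adaptive F and CR strategies of Section 6 are omitted here).

```python
import numpy as np

def de_generation(pop, fitness, objective, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation: mutation, binomial crossover, greedy selection.
    pop: (NP, D) population; fitness: (NP,) objective values to be minimised."""
    NP, D = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(NP):
        r1, r2, r3 = np.random.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])       # weighted difference vector
        cross = np.random.rand(D) < CR
        cross[np.random.randint(D)] = True               # at least one mutant component
        trial = np.where(cross, mutant, pop[i])
        f_trial = objective(trial)
        if f_trial < fitness[i]:                         # keep the fitter individual
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit
```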
3. Establishment of Ship Cabin Noise Database
Although the intelligent forecasting method does not need to provide a precise formula for calculating vibration noise, it is necessary to identify the independent variable that influences the dependent variable. This process is called feature extraction. Cabin noise samples are used for intelligent forecasting, and the quality of feature set extraction results is a key to accurate noise prediction. In this section, we explore the extraction of the main features affecting the ship cabin noise in detail and express them mathematically to establish a corresponding ship cabin noise database.
3.1. Input Parameter
(1) Noise Source. A ship is a large offshore structure whose interior contains a large number of vibration and noise sources, including the main engines, motors, propellers, gearboxes, pumps, and fans. For the various excitation sources, the acceleration level can be determined by empirical formulas [14].
The vibration excitation generated by the propeller can be loaded onto the ship's bottom structure directly above the propeller. In the corresponding formula, M is the number of propellers, N is the number of propeller blades, D is the propeller diameter, and the remaining parameter is the rated speed of the propeller.
The empirical estimate of the acoustic radiation power level of the diesel engines and motors (relative to the reference sound power level) involves the rated power of the diesel engines and motors and an octave-band correction for their airborne noise.
The engine-foot acceleration level (relative to a reference acceleration) is determined by a formula involving m, the mass of the diesel engine, together with its rated power, rated speed, working speed, and an octave-band correction value for diesel-engine vibration.
The motor-foot acceleration level (relative to a reference acceleration) is determined analogously from the rated power of the motor, its rated speed, and the octave-band correction value for motor vibration.
Given the sound radiation power level and foot acceleration level of the existing equipment at rated power, obtained from the formulas above, the excitation produced by the diesel engines and motors at other power settings can be roughly estimated by adjusting the rated power value.
(2) Noise Propagation Path. On-board noise is transmitted primarily through the air and through the hull structure. Because it is typically unnecessary to predict cabin noise close to a source (i.e., through the air), we ignore airborne noise and consider only the hull structure as the transmission path. Structural noise is attenuated during propagation by the cabin structure, the decks, the transverse bulkheads, and so on. We quantify these influencing factors as the average plate thickness and surface area of the cabin, the number of decks between the cabin and the noise source, and the number of transverse bulkheads between the cabin and the noise source. We store these factors for multiple points in the ship and use them as input parameters, as shown in Table 2.
Serial number | Parameter name | Serial number | Parameter name
1 | Number of decks separated from the front host | 9 | Number of decks separated from the propeller
2 | Number of transverse bulkheads separated from the front host | 10 | Number of transverse bulkheads separated from the propeller
3 | Number of decks separated from the front motor | 11 | Target cabin surface area
4 | Number of transverse bulkheads separated from the front motor | 12 | Target compartment average plate thickness
5 | Number of decks separated from the rear host | 13 | Front host power
6 | Number of transverse bulkheads separated from the rear host | 14 | Rear host power
7 | Number of decks separated from the rear motor | 15 | Front generator set power
8 | Number of transverse bulkheads separated from the rear motor | 16 | Rear generator set power
3.2. Output Measurements
For the output, we adopt the A-weighted sound pressure level (SPL) in dB(A). This measurement is commonly used as the noise evaluation index in current engineering. The A-weighted SPL is close to the human ear’s hearing characteristics: sensitive to high frequency noise and insensitive to low frequency noise. Its logarithmic representation is directly measurable with a sound level meter, which simplifies the work and reduces the measurement range.
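For reference, the A-weighting correction applied to a linear sound pressure level can be computed from the standard IEC 61672 response curve; the short sketch below (our own, not from the paper) normalises the curve to 0 dB at 1 kHz.

```python
import numpy as np

def a_weighting_db(f):
    """Standard A-weighting in dB at frequency f (Hz), equal to 0 dB at 1 kHz."""
    def r_a(freq):
        f2 = np.asarray(freq, dtype=float) ** 2
        return (12194.0 ** 2 * f2 ** 2) / (
            (f2 + 20.6 ** 2)
            * np.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
            * (f2 + 12194.0 ** 2)
        )
    return 20.0 * np.log10(r_a(f) / r_a(1000.0))

# Octave-band centre frequencies: low frequencies are strongly attenuated
print({fc: round(float(a_weighting_db(fc)), 1) for fc in (63, 125, 250, 500, 1000, 4000)})
```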
3.3. Acquisition of Ship Cabin Noise Database Data
All data in our database is based on a ship with characteristics as given in Table 3.
Length overall: 55.00 m
Molded depth: 4.20 m
Molded breadth: 9.50 m
Draft: 2.20 m
Displacement: 381.00 t
Main engine power: 3 × 2528 kW
Cruising speed: 18.00 kn
We selected 10 cabins within the ship as training and test samples, as shown in Figure 4. These cabins had no noise reduction measures installed. We selected the front and rear host groups, the front and rear motor groups, and the propellers as noise sources, also called excitation sources, positioned as shown in Figure 5. We treat the two rear main engines as a unit, operating synchronously and driving the propeller consistently with the rear host's output. We selected 45 sets of combined working conditions at different power levels as excitation conditions and expressed the power parameters of the excitation sources as percentages between 0 and 100. Based on the different working conditions of the ship, we used the software product VA One to simulate noise levels in the 10 cabins; the SEA subsystem connection diagram of the model is shown in Figure 3. We selected subsystems at a height of 1.5 m to 2 m above the deck as research objects and obtained the A-weighted noise level for each cabin under the 45 working conditions.
The statistical energy analysis method gives a more accurate solution of the medium- to high-frequency dynamic response of complex structural systems, and a ship is a relatively complicated and large structural system. Frequencies below 50 Hz are generally regarded as the low-frequency region, 50 to 200 Hz as the intermediate-frequency region, and above 200 Hz as the high-frequency region, while the normal human hearing range spans 20 Hz to 20 kHz. The low-frequency region therefore covers only a small part of the audible range, so ship cabin acoustic prediction is essentially a mid- to high-frequency structural dynamics problem. In addition, since the human ear is less sensitive to low-frequency than to high-frequency noise, the contribution of low-frequency noise is further de-emphasized in the A-weighted sound level. Statistical energy analysis is therefore well suited to ship cabin noise prediction.
In summary, the 10 cabins and 45 working conditions produced 450 sample pairs. The input vector has 16 dimensions: 4 for the excitation sources and 12 for the cabin structure parameters.
4. RBF Network Design and Noise Prediction Based on Gradient Descent Algorithm
4.1. Network Process Design
The training process for the gradient descent algorithm in the RBF network is relatively simple, but there are many controlling parameters. The network training process is shown in Figure 6. Because the root mean square (RMS) error describes well the accuracy of the cabin noise prediction at each training iteration, we use the RMS error as the stopping test. The RMS error is RMSE = sqrt((1/n) Σ (ŷ_i − y_i)²), where ŷ_i is the network forecast, y_i is the simulated value, and n is the number of training samples.
In order to facilitate the initialization of the hidden node center coordinates and to avoid computational saturation, we normalize the input data so that the parameters of all dimensions lie in the interval [0, 1].
4.2. Selection of Parameters and Their Influence on the Algorithm
Several parameters influence the output quality and learning rate but there are diminishing returns beyond a near-optimal setting. We now present our empirical investigation of these parameters.
(1) The Influence of the Number of Hidden Nodes. We start the process with 17 hidden nodes, increasing the number of nodes by 1 each time the prediction test for the four cabins is completed, up to a maximum of 62. We calculate the average relative error of noise prediction for all working conditions in each of the four compartments for each test and obtain the correlation between the number of hidden nodes and the forecast result of the algorithm, as shown in Figure 7. During the experiment, other parameters remain unchanged. The maximum number of iterations is set to 25000, and the target error is set to 5. The hidden node center coordinates, the expansion constants of each hidden node, and the output weights are initialized to random numbers in the interval [0, 1]. The learning rate λ is set to 10^-6 in all cases.
The figure shows that there is no correlation between the number of hidden nodes and the algorithm prediction results and that the average relative error of the four target cabin predictions fluctuates around a value. As the iterations continue, the extra hidden nodes appear to offset, attenuate, shrink, and merge. The curve with relatively few hidden nodes is smoother overall. Therefore, from the perspective of network generalization, we use 40 hidden nodes.
(2) The Effect of Learning Rate. We set the learning rates of the center coordinates of the hidden nodes, the expansion constants of the hidden nodes, and the output weights to the same value, λ, with values of 10^-3, 10^-4, 10^-5, and 10^-6 in turn. After performing 20 training passes on the cabin data from the upper two decks in the database, we calculate the root mean square error after each iteration and average them. The results are shown in Figure 8. We left other parameters unchanged, with 40 hidden nodes, 25000 maximum iterations, the target error of 5, and the hidden node center coordinates, expansion constants of each hidden node, and output weights all initialized to random numbers in the interval [15].
The figure shows that the larger the learning rate λ is, the faster the root mean square error decreases. However, when , the root mean square error fluctuates around the minimum value after iterating a certain number of steps and does not continue to fall. This is because a larger learning rate results in a greater parameter update at each iteration, which benefits the global optimization at the beginning, and quickly approaches the global minimum. However, near the global minimum, each movement is too large, crossing the minimum value each time and leading to the oscillation around it.
However, a smaller learning rate does not always mean that it is easier to approximate the error to the global minimum. As shown in the figure, with 25000 iterations, the RMS error for is still greater than the RMS error for . This situation occurs for two reasons. First, the smaller is, the slower the error moves, and more iterations are needed to fall to the minimum value. Second, the smaller is, the easier it is to fall into the local minimum and stay there, which is an obvious disadvantage of the gradient descent algorithm.
In summary, we set the learning rate at around 10^-5.
(3) Effect of Target Error . Although the purpose of the iteration is to reduce the RMS error, the inevitable noise in the training samples coupled with a too-small target error causes the noise data to be accurately fitted, resulting in overfitting [16, 17]. Therefore, we set to an integer between 6 and 10 and leave the other parameters of the network unchanged. Carrying out the prediction method with cross-validation 20 times yields the results shown in Figure 9.
The figure shows that when the target error is 6 or 7, the average relative error of the actual forecast is larger, evidently because the noisy data is also accurately interpolated. When the target error is 8 or 9, the actual average relative error of the forecast is the smallest. After comparing the data, we find that the relative error of each working condition is more stable when the target error is taken as 8, and the maximum relative error is smaller than when it is 9.
4.3. Forecast Error and Evaluation
We evaluate the forecast error by setting the number of hidden nodes to 40, the maximum number of iterations to 25,000, the target error to 8, and the learning rate λ to 10^-5, and by initializing the hidden node center coordinates, the expansion constants of the hidden nodes, and the output weights to random numbers in [0, 1]. We make 10 predictions using a cross-validation method and show the average of the forecast results in Figures 10 and 11 and Table 4.
Forecast cabin | Right chord cabin (2 people) | Right chord cabin (4 people) | Suite | Kitchen
Average relative error | 0.80% | 0.82% | 1.2% | 1.4%
Maximum relative error | 2.1% | 2.1% | 2.4% | 2.8%
The results show that the RBF network trained by the gradient descent method has a high prediction accuracy, a small prediction error for cabin noise on different decks, with different surface areas and with different numbers of transverse bulkheads from the excitation source, and stable error characteristics.
Although the RBF network trained by the gradient descent algorithm has a good performance on cabin noise prediction, it has equally notable shortcomings: more controlling parameters and sensitivity to those parameters. The target error , the learning rate , the initialization of hidden node center coordinates, individual hidden node expansion constants, and output weights all exert great influence on the performance of the algorithm. More relevant experience is needed to ensure selection of appropriate parameters in the actual application of our method to a project in order to avoid falling into the local minimum.
5. RBF Network Design and Noise Prediction Using Particle Swarm Optimization
5.1. Network Process Design
In this section, we present our use of the particle swarm optimization algorithm to train the hidden node center coordinates, expansion constants, and output weights of the RBF network. With M hidden nodes, the dimension of each particle is 16M+M+M=18M. The first 16M dimensions represent the center coordinates of the hidden nodes, the next M dimensions represent the expansion constants of the hidden nodes, and the last M dimensions represent the output weights. The design flow of the network is shown in Figure 12. We normalize the data so that the parameters of each dimension of the input variable lie in the interval [0, 1]. In order to avoid the particles leaving the search space during training, we also set boundaries on the particle positions: the first 16M dimensions and the next M dimensions are bounded, while the output weights have no boundary. In MATLAB, we initialize the particle positions to uniformly distributed random numbers in the interval and the velocities to normally distributed random numbers. We calculate the particle fitness value from the RMS error of the RBF network [18].
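The encoding of one particle into the three RBF parameter sets can be made explicit with a small helper (a sketch: the 16-dimensional input and a single noise output are assumed as in this paper, but the function name is ours).

```python
import numpy as np

def decode_particle(p, M, D=16, K=1):
    """Unpack an individual of length D*M + M + M*K into RBF parameters:
    the first D*M entries are the hidden-node centres, the next M the
    expansion constants, and the last M*K the output weights."""
    p = np.asarray(p)
    centers = p[:D * M].reshape(M, D)
    widths = p[D * M:(D + 1) * M]
    W = p[(D + 1) * M:].reshape(M, K)
    return centers, widths, W
```

The fitness of a particle is then simply the RMS error of the decoded network on the training set.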
5.2. Parameter Selection and Effects on the PSO Algorithm
Leaving other parameters unchanged [19, 20], we set the number of hidden nodes to an integer between 30 and 50 and used random weight optimization in the particle swarm optimization algorithm for the prediction experiment. The results are shown in Figure 13.
The figure shows that when the number of hidden nodes is 40, 45, or 46, the average relative error for the prediction of the four cabins is large, while the other cases fluctuate within a certain range. Since the particle swarm optimization algorithm is unstable, the algorithm converges to different training precisions even when the number of hidden nodes in the network is the same. Thus, the fluctuation range in the graph is not large. Therefore, we conclude that the number of hidden nodes has little effect on the forecast results. In our test, setting the number of hidden nodes to 41 gives the best prediction results, so we use 41 hidden nodes throughout the process described in this section.
Because the particle swarm optimization algorithm easily falls into the local minimum value, we now address enhancements to the algorithm’s convergence behavior. We begin by leaving other parameters unchanged and evaluating three alternatives—shrinkage factor, linear weighting, and random weighting—to alter the convergence behavior. We train the PSO algorithm 10 times with 1000 iterations using each method. Figure 14 plots the RMS error of the convergence results according to the method. The figure shows that the shrinkage factor method has the worst convergence behavior, converging only to a relatively large local minimum value each time. The linear weight and random weight methods are better, with the random weight method being more stable than the linear weight method and converging to a larger local minimum with fewer iterations [21]. Modifying PSO to use random weights leads to far faster convergence speed than the gradient descent algorithm. The RMS error achieved by training 1000 times with random weights is smaller than that achieved by training 25000 times with the gradient descent algorithm [22].
We have determined experimentally that forecast results differ even when the training precision of the PSO algorithm is constant. Using a training accuracy between 2 and 4 increases the probability of producing good results. Using a value less than 2 leads to overfitting due to noise in the data. Therefore, we set the training accuracy to 2, and if the actual training error of the algorithm is greater than 4, we retrain and repeat the analysis.
5.3. Prediction Error and Evaluation
We also consider the quality of the predictions produced by our approach. We begin by setting our core parameters: the number of particles and the number of hidden nodes (= 41). After cross-validating the forecasts, we choose the best results from our RBF network and plot them alongside the VA One software simulation in Figures 15 and 16 and Table 5.
Average relative error | 1.4% | 1.1% | 0.90% | 1.2%
Maximum relative error | 2.8% | 2.9% | 1.9% | 3.1%
The results show that the RBF network trained by our PSO algorithm offers higher accuracy with faster convergence and fewer training iterations compared to the gradient descent algorithm [23]. However, the PSO algorithm is also unstable, easily falls into the local optimal solution, and has many controlling parameters. Evaluating the above advantages and disadvantages, we combine the PSO algorithm with the gradient descent algorithm to train the RBF network for noise prediction to reduce the risk caused by algorithm instability and to improve the reliability of the forecast.
6. ADE-MRBF Network Design and Noise Prediction
The Differential Evolution (DE) algorithm is an optimization method proposed by R. Storn and K. Price in 1997 to solve the Chebyshev polynomial fitting problem [24]. The algorithm uses floating point calculations and offers few control parameters, easy implementation, good robustness, and high reliability.
Our noise problem is complex. Even after simplification, 16 components are still extracted as input vector dimensions, and these components are correlated with each other. Normalizing the input vector during preprocessing helps, but this treatment is still coarse. Adding loss factors or other complex considerations reduces the performance of the algorithm. Therefore, we use the Mahalanobis distance instead of a simple normalization operation to decorrelate and nondimensionalize the samples, in order to optimize the RBF network and predict the cabin noise [25]. We refer to our RBF network trained by the DE algorithm with adaptive parameter adjustment and based on Mahalanobis distance optimization as the ADE-MRBF network.
6.1. Network Process Design
The network design process is shown in Figure 17. Since the Mahalanobis distance is used instead of the Euclidean distance to measure the similarity between input variables and the hidden-node centers, there is no need to normalize the input data. The initialization process is the same as that of the PSO algorithm, with all components initialized to a random number between 0 and 1 [26]. In addition, a boundary is imposed on the hidden-node center coordinates in the crossover operation, the expansion constant is restricted to positive values, and no boundary is set on the output weights.
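The differential evolution core that this initialization feeds into can be sketched as follows (a minimal DE/rand/1/bin generation in the textbook form of Storn and Price; the function name, fixed F and CR, and the rms_error callable are our illustrative assumptions, not the paper's code):

```python
import numpy as np

def de_step(pop, errors, rms_error, F=0.5, CR=0.9, rng=None):
    """One generation of classic DE/rand/1/bin over a population in [0, 1]."""
    rng = rng or np.random.default_rng()
    n, dim = pop.shape
    new_pop, new_err = pop.copy(), errors.copy()
    for i in range(n):
        a, b, c = rng.choice([k for k in range(n) if k != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])          # mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                  # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])          # binomial crossover
        err = rms_error(trial)
        if err < errors[i]:                              # greedy selection
            new_pop[i], new_err[i] = trial, err
    return new_pop, new_err

# Initialization, as in the text: every component drawn uniformly from [0, 1],
# e.g. pop = np.random.default_rng().random((NP, dim)).
```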
6.2. Parameter Selection and Effects
We note that although the number of hidden nodes has little effect on the performance of the algorithm, we fix it at 40 to eliminate confounding factors and to facilitate the performance comparison between the DE algorithm and the other algorithms [27].
(1) Effect of Population Size on Convergence Performance. Changing only the population size, we train the ADE-MRBF network with the data from the eight cabins on the upper two decks in the database. The training results are shown in Figure 18.
The figure shows that convergence behavior is worst when the population size (NP) is 100: the population lacks diversity in the later iterations, which leads to search stagnation. Increasing NP delays the onset of search stagnation and strengthens the algorithm's global optimization ability. When NP is greater than 200, the convergence curves are very close, which indicates that further increases in NP have little effect on the convergence behavior [28-31]. We use a population size of 300.
(2) The Effect of Different Scaling Factor Strategies on Convergence Ability. We also evaluate the choice of scaling factor in the reduction strategy. In this test, we change only the scaling factor, using one of four adaptive reduction strategies (exponential, linear, parabolic, and sigmoid), as given in (21), (22), (23), and (24) [32]. In each strategy the scaling factor decreases from its set maximum to its set minimum as the generation number approaches the set maximum number of generations.
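Since Eqs. (21)-(24) did not survive extraction, the following sketch shows only generic forms of the four schedules for illustration; the constants and exact functional forms are our assumptions, not the paper's:

```python
import numpy as np

def scale_factor(g, g_max, f_min=0.2, f_max=0.9, strategy="sigmoid"):
    """Illustrative adaptive scaling-factor schedules, decreasing from f_max to f_min."""
    t = g / g_max                                   # normalized generation
    if strategy == "exponential":
        return f_min + (f_max - f_min) * np.exp(-5.0 * t)
    if strategy == "linear":
        return f_max - (f_max - f_min) * t
    if strategy == "parabolic":
        return f_max - (f_max - f_min) * t**2
    if strategy == "sigmoid":
        return f_min + (f_max - f_min) / (1.0 + np.exp(10.0 * (t - 0.5)))
    raise ValueError(strategy)
```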
Training 20 times using data for the eight cabins on the upper two decks in the database gives the results shown in Figure 19. The algorithm has the highest training precision, along with good global convergence and local optimization ability, when using the sigmoid strategy. The exponential strategy devotes too little time to global optimization and easily falls into a local minimum in the later stages. The parabolic strategy, despite spending many generations on global optimization, fails to perform local optimization near the global optimum and offers insufficient training accuracy.
(3) Influence of Different Cross-Rate Strategies on Convergence Ability. As the iteration progresses, population diversity declines. Increasing the size of the population can mitigate this to a certain extent, but it greatly increases the amount of computation required. We instead try to improve the performance of the algorithm through the value of the crossover rate. Maintaining a low crossover rate in the early stage of the search benefits the global search [33], while increasing the crossover rate in the later stage improves the accuracy of the algorithm. We therefore adopt an exponential increment strategy for the crossover rate, in which the crossover rate rises from its set minimum to its set maximum as the generation number approaches the set maximum number of generations.
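The exponential increment can be illustrated as follows (again a generic form, since the paper's equation is not reproduced here; the steepness constant is our choice):

```python
import numpy as np

def crossover_rate(g, g_max, cr_min=0.1, cr_max=0.9):
    """Illustrative exponential increment of the crossover rate: CR stays near
    cr_min for most of the run and rises quickly toward cr_max late on."""
    return cr_min + (cr_max - cr_min) * np.exp(8.0 * (g / g_max - 1.0))
```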
The curves of the crossover rate CR under the exponential and linear increment strategies are shown in Figure 20. The figure shows that, under the exponential increment strategy, CR is almost unchanged at the beginning of the iteration and stays near its minimum, so the algorithm spends more time on the global search. The rapid increase of CR late in the run keeps it high, which helps maintain the diversity of the population [34]. The specific parameter values were selected after experimentation.
We also compare the two cross-rate strategies directly. Leaving the other parameters unchanged, we trained the algorithm 20 times each with the linear and exponential cross-rate increment strategies using the eight-cabin data from the upper two decks in the database. Figure 21 shows the average convergence results.
The results indicate better convergence ability with the exponential strategy. However, the algorithm needs more time for global optimization in the early stage [35, 36].
(4) Influence of Mahalanobis Distance on Network Performance. Keeping the other parameters unchanged, we trained the differential evolution algorithm 20 times each using the Mahalanobis distance and the Euclidean distance on the eight-cabin data from the upper two decks in the database. Figure 22 plots the minimum RMS error, which decreases as the number of iterations increases.
The RBF network performs better using the Mahalanobis distance. This indicates that the Mahalanobis distance better measures the similarity between multidimensional variables, which compensates for the deficiencies in the data mining process to a certain extent and has specific strengths in complex multidimensional engineering problems [37].
(5) Effect of Training Accuracy on Forecast Results. Because noise is present in our data, we must take steps to avoid overfitting. We use the ADE-MRBF network to perform five predictions using cross-validation prediction methods under five different training precisions. The relative errors are averaged and plotted as Figure 23.
The figure shows that the prediction accuracy is highest when the training precision is set to 4. When the training precision is below 3, overfitting to noise in the data increases the forecast error. Therefore, the training accuracy should be set to 4.
6.3. Prediction Error and Evaluation
Based on the above test analysis, we combine the best performing parameters and improvement strategies and use the cross-validation forecasting method to carry out 20 predictions. The average forecast results under different working conditions are shown in Figures 24 and 25 and Table 6.
Table 6. Relative error of the predictions across the four reported working conditions.
Average relative error: 1.2%, 0.71%, 0.49%, 0.75%
Maximum relative error: 2.5%, 2.1%, 1.4%, 2.6%
The results show that the RBF network trained by the DE algorithm has high accuracy for cabin noise prediction, low forecast error, fast convergence speed, and few control parameters, and is relatively insensitive to the settings of the scaling factor and crossover rate. The optimized RBF network has strong applicability in the field of ship cabin noise prediction.
7. Conclusion
Using an artificial intelligence method, we have been able to predict the noise in an unknown cabin of a ship. We reach the following conclusions.
The large number of vibration sources, coupled with an irregular hull shape and complicated structure, produces a high-dimensional description of the sound sources and makes the forecast model complex and difficult to train. Using the Mahalanobis distance in the RBF network effectively decouples and nondimensionalizes the inputs compared with the Euclidean distance and gives stronger classification ability when measuring the similarity between high-dimensional vectors.
After evaluating the RBF network for predicting noise, we determined that the differential evolution algorithm obtains the best prediction results with fewer control parameters and greater flexibility, and is less sensitive to changes in those parameters. The combination of particle swarm optimization and gradient descent algorithms also yields good performance, stability, and robustness for training. Our work provides a useful reference for applying an RBF neural network to noise prediction.
Our preliminary exploration of noise prediction for an unknown cabin of a ship under various working conditions shows that the forecast results from our RBF approach offer good accuracy overall. The average relative error is within 1%, and the maximum relative error is within 3%. We expect good noise prediction for other unknown cabins with similar structures.
Data Availability
Conflicts of Interest
Acknowledgments
This study was funded by the high performance numerical wind tunnel algorithm and software research project (20160131), the High Technology Ship Funds of the Ministry of Industry and Information Technology of P.R. China, and major innovation projects of the High Technology Ship Funds of the Ministry of Industry and Information Technology of P.R. China.
1. Y. Yang, C.-D. Che, and W.-Y. Tang, “Applications of reducing vibration and noises in a polar scientific icebreaker based on green shipbuilding technologies,” Journal of Ship Mechanics, vol. 18, no. 6, pp. 724–737, 2014.
2. B.-N. Liang, H.-L. Yu, and Y.-N. Cai, “Research on noise prediction and acoustics design of shipboard cabin,” Journal of Vibroengineering, vol. 18, no. 3, pp. 1991–2003, 2016.
3. W.-H. Joo, S.-H. Kim, J.-G. Bae, and S.-Y. Hong, “Control of radiated noise from a ship's cabin floor using a floating floor,” Noise Control Engineering Journal, vol. 57, no. 5, pp. 507–514, 2009.
4. R. Citarella and L. Federico, “Advances in vibroacoustics and aeroacoustics of aerospace and automotive systems,” Applied Sciences, vol. 8, no. 3, article no. 366, 2018.
5. A. Sabharwal and B. Selman, “Artificial intelligence: a modern approach,” Artificial Intelligence, vol. 175, no. 5-6, pp. 935–937, 2011.
6. D. Zhu, “The research progress and prospects of artificial neural networks,” Journal of Southern Yangtze University, vol. 2004, no. 01, pp. 103–110, 2004.
7. E. J. Hartman, J. D. Keeler, and J. M. Kowalski, “Layered neural networks with Gaussian hidden units as universal approximations,” Neural Computation, vol. 2, no. 2, pp. 210–215, 1990.
8. F. Girosi and T. Poggio, “Networks and the best approximation property,” Biological Cybernetics, vol. 63, no. 3, pp. 169–176, 1990.
9. H. Li, “Application of first-order shear deformation theory for the vibration analysis of functionally graded doubly-curved shells of revolution,” Composite Structures, vol. 212, pp. 22–42, 2019.
10. F. Pang, H. Li, H. Chen et al., “Free vibration analysis of combined composite laminated cylindrical and spherical shells with arbitrary boundary conditions,” Mechanics of Advanced Materials and Structures, pp. 1–18, 2019.
11. M. Dehghan and V. Mohammadi, “A numerical scheme based on radial basis function finite difference (RBF-FD) technique for solving the high-dimensional nonlinear Schrödinger equations using an explicit time discretization: Runge–Kutta method,” Computer Physics Communications, vol. 217, pp. 23–34, 2017.
12. W. Zhu and D. Fu, “PMSM control system based on RBF neural network,” Electronic Science and Technology, vol. 29, no. 1, pp. 161–164, 2016.
13. Z. Huang and H. Yuan, “Ionospheric single-station TEC short-term forecast using RBF neural network,” Radio Science, vol. 49, no. 4, pp. 283–292, 2014.
14. J. Fu, Y.-S. Wang, K. Ding, and Y.-S. Wei, “Research on vibration and underwater radiated noise of ship by propeller excitations,” Journal of Ship Mechanics, vol. 19, no. 4, pp. 470–476, 2015.
15. H. Yang, X. Li, and W. Jiang, “Simulation and analysis of stochastic parallel gradient descent control algorithm for adaptive optics system,” Acta Optica Sinica, vol. 27, no. 8, pp. 1355–1360, 2007.
16. P. Zhou, Z. Liu, X. Wang, Y. Ma, and X. Xu, “Theoretical and experimental investigation on coherent beam combining of fiber lasers using SPGD algorithm,” Acta Optica Sinica, vol. 29, no. 8, pp. 2232–2237, 2009.
17. R. K. Tyson, Principles of Adaptive Optics, Academic Press, San Diego, 1991.
18. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks (ICNN ’95), vol. 4, pp. 1942–1948, Perth, Western Australia, November-December 1995.
19. L. Zhang, The Theory and Practice of the Particle Swarm Optimization Algorithm, Zhejiang University, 2005.
20. P. N. Suganthan, “Particle swarm optimiser with neighbourhood operator,” in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), pp. 1958–1962, USA, July 1999.
21. C. Guang, Temperature Prediction Model Based on Improved PSO-RBF Neural Network, Lanzhou University, 2015.
22. G. P. Liu and Q. Zeng, “PSO based on multiple target optimization,” Journal of Hangzhou Teachers College: Natural Science Edition, vol. 4, no. 1, pp. 30–33, 2005.
23. G. Z. Chen, H. M. Xie, and X. Y. Lu, “Nonlinear optimization based on the genetic algorithm toolbox of MATLAB,” Computer Technology and Development, vol. 3, pp. 246–248, 2008.
24. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
25. X. H. Wu, S. J. Niu, and C. O. Wu, “An improvement on estimating the covariance matrix during cluster analysis using Mahalanobis distance,” Journal of Applied Statistics and Management, vol. 30, no. 2, pp. 240–245, 2011.
26. B. Liu, L. Wang, and Y. H. Jin, “Advances in differential evolution,” Control and Decision, vol. 22, no. 7, pp. 721–729, 2007.
27. K. V. Price, R. M. Storn, and J. A. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Springer, Berlin, Germany, 2005.
28. R. Mendes and A. S. Mohais, “DynDE: a differential evolution for dynamic optimization problems,” Evolutionary Computation, vol. 3, pp. 2808–2815, 2005.
29. T. Blackwell and J. Branke, “Multi-swarm optimization in dynamic environments,” Workshops on Applications of Evolutionary Computation, vol. 3005, pp. 489–500, 2004.
30. Y.-C. Lin, F.-S. Wang, and K.-S. Hwang, “A hybrid method of evolutionary algorithms for mixed-integer nonlinear optimization problems,” in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), pp. 2159–2166, USA, July 1999.
31. L. Wu, Y. Wang, and X. Yuan, “Differential evolution algorithm with adaptive second mutation,” Control and Decision, vol. 21, no. 8, p. 898, 2006.
32. M. Lin, F. Luo, and Y. Xu, “Optimization control of wastewater treatment process based on improved differential evolution algorithm,” Information and Control, vol. 44, no. 3, pp. 339–345, 2015.
33. Y. Tan, G. Z. Tan, and L. Tu, “Differential evolution algorithm with local search strategy,” Computer Engineering and Applications, vol. 45, no. 7, pp. 56–58, 2009.
34. K. Zielinski, P. Weitkemper, R. Laur, and K.-D. Kammeyer, “Parameter study for differential evolution using a power allocation problem including interference cancellation,” in Proceedings of the 2006 IEEE Congress on Evolutionary Computation (CEC '06), pp. 1857–1864, Canada, July 2006.
35. W. Yang, F. Yao, and M. Zhang, “Differential evolution algorithm based on adaptive crossover probability factor and its application,” Information and Control, vol. 39, no. 2, pp. 187–193, 2010.
36. Z. X. Deng and X. J. Liu, “Study on strategy of increasing cross rate in differential evolution algorithm,” Computer Engineering and Applications, vol. 44, no. 27, pp. 33–36, 2008.
37. R. De Maesschalck, D. Jouan-Rimbaud, and D. L. Massart, “The Mahalanobis distance,” Chemometrics and Intelligent Laboratory Systems, vol. 50, no. 1, pp. 1–18, 2000.
Composite Vector Particles in External Electromagnetic Fields
Zohreh Davoudi, Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; William Detmold, Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Lattice quantum chromodynamics (QCD) studies of electromagnetic properties of hadrons and light nuclei, such as magnetic moments and polarizabilities, have proven successful with the use of background field methods. With an implementation of nonuniform background electromagnetic fields, properties such as charge radii and higher electromagnetic multipole moments (for states of higher spin) can be additionally obtained. This can be achieved by matching lattice QCD calculations to a corresponding low-energy effective theory that describes the static and quasi-static response of hadrons and nuclei to weak external fields. With particular interest in the case of vector mesons and spin-1 nuclei such as the deuteron, we present an effective field theory of spin-1 particles coupled to external electromagnetic fields. To constrain the charge radius and the electric quadrupole moment of the composite spin-1 field, the single-particle Green’s functions in a linearly varying electric field in space are obtained within the effective theory, providing explicit expressions that can be used to match directly onto lattice QCD correlation functions. The viability of an extraction of the charge radius and the electric quadrupole moment of the deuteron from the upcoming lattice QCD calculations of this nucleus is discussed.
preprint: MIT/CTP-4723
I. Introduction
Electromagnetic (EM) interactions serve as valuable probes by which to shed light on the internal structure of strongly interacting single and multi-hadron systems. They provide insight into the charge and current distributions inside the hadrons. These are conventionally characterized by EM form factors, and are accessible through experimental measurements of electron-hadron scattering as well as EM transitions. The static and quasi-static limits of form factors, known as EM moments and charge radii, are independently accessible through high-precision low-energy experiments, such as in the spectroscopy of electronic and muonic atoms. These two different experimental approaches can serve to test the accuracy of the obtained quantities, and an apparent discrepancy, such as the one reported on the charge radius of the proton Pohl et al. (2013); Carlson (2015), promotes investigations that can deepen our understanding of the underlying dynamics. In bound systems of nucleons, EM probes further serve as a tool to constrain the form of hadronic forces. As a primary example, the measurement of a nonvanishing electric quadrupole moment for the lightest nucleus, the deuteron, led to the establishment of the existence of tensor components in the nuclear forces Kellogg et al. (1940).
Since quantum chromodynamics (QCD) governs the interactions of quark and gluon constituents of hadrons, any theoretical determination of the EM properties of hadronic systems must tie to a QCD description. The spread of theoretical predictions based on QCD-inspired models, such as those reported on the EM moments of vector mesons Aliev and Savci (2004); Braguta and Onishchenko (2004); Choi and Ji (2004); Bhagwat and Maris (2008), highlights the importance of performing first-principles calculations that only incorporate the parameters of quantum electrodynamics (QED) and QCD as input. The only such calculations are those based on the method of lattice QCD (LQCD), and involve a numerical evaluation of the QCD path integral on a finite, discrete spacetime. By controlling/quantifying the associated systematics of these calculations, the QCD values of hadronic quantities can be obtained with systematically improvable uncertainties.
QED can be introduced in LQCD calculations, alongside with QCD, in the generation of gauge-field configurations. This, however, leads to large finite-volume (FV) effects arising from the long range of QED interactions Hayakawa and Uno (2008); Davoudi and Savage (2014); Borsanyi et al. (2014); Endres et al. (2015); Lucini et al. (2015). The numerical cost of a lattice calculation which treats photons as dynamical degrees of freedom has forbidden comprehensive first-principles studies of EM properties of hadrons and nuclei through this avenue.333Significant progress has been made in recent years on this front, resulting in increasingly more precise determinations of QED corrections to mass splittings among hadronic multiplets Blum et al. (2007); Basak et al. (2008); Blum et al. (2010); Portelli et al. (2010, 2011); Aoki et al. (2012); de Divitiis et al. (2013); Borsanyi et al. (2013); Drury et al. (2013); Borsanyi et al. (2014); Endres et al. (2015); Horsley et al. (2015), and recently more refined calculations of the hadronic light-by-light contribution to the muon anomalous magnetic moment, albeit at unphysical kinematics Blum et al. (2015). Alternatively, as is done in most studies of hadron structure, the matrix elements of the EM currents can be accessed through the evaluation of three-point correlation functions in a background of pure QCD gauge fields, with insertions of quark-level current operators between hadronic states.
An alternative method, that has advantages over the aforementioned methods with regard to its simplicity, and potentially its computational costs, is the background field method. In this approach, a background EM field can be introduced in a LQCD calculation by imposing the gauge links onto the gauge links.444In order to reduce the computational cost, one may introduce the gauge links solely in the valence-quark sector of QCD. With this approximation, one can only reliably study those EM properties of the state that do not receive contributions from the sea quarks (receive no disconnected contributions). This is motivated by original experimental determinations of the static EM properties of hadrons and nuclei in external EM fields. By measuring the difference in the energy of the system with and without the background fields, and by matching to the knowledge of the Hamiltonian of the system deduced from the appropriate effective hadronic theory Caswell and Lepage (1986); Labelle (1992); Labelle et al. (1997); Kaplan et al. (1998a, b); Chen et al. (1999); Detmold and Savage (2004); Detmold et al. (2006, 2009); Hill et al. (2013); Lee and Tiburzi (2014a, b), the parameters of the low-energy Hamiltonian, i.e., those characterizing the coupling of the composite hadron to external fields, can be systematically constrained. This procedure has been successfully implemented to determine the magnetic moments of single hadrons and their electric and magnetic polarizabilities Bernard et al. (1982); Martinelli et al. (1982); Fiebig et al. (1989); Christensen et al. (2005); Lee et al. (2005, 2006); Detmold et al. (2006); Aubin et al. (2009); Detmold et al. (2009, 2010); Primer et al. (2014); Lujan et al. (2014). The utility of this method in accessing information about the structure of nuclei has been demonstrated recently through a determination of the magnetic moments and polarzabilities of nuclei with atomic number (at an unphysically heavy light-quark mass) Beane et al. (2014); Chang et al. (2015). It is desirable to gain further insight into the structure of these nuclei by studying their charge radii and quadrupole moments (for nuclei with spin ). These quantities require new developments that extend the implementation of uniform background fields to the case of nonuniform fields. We have recently presented such developments in Ref. Davoudi and Detmold (2015), providing the recipe for implementing general nonuniform background fields that satisfy the periodicity of a FV calculation.555See Refs. Lee and Alexandru (2011); Engelhardt (2011) for previous implementations of selected nonuniform, but nonperiodic, background EM fields in LQCD calculations of spin polarizabilities of the nucleon, and Ref. Bali and Endrodi (2015) for a periodic implementation of a plane-wave EM field in a LQCD calculation of the hadronic vacuum polarization function. In the present paper, motivated by interest in extracting the quadrupole moment of the deuteron, we provide the theoretical framework for performing a systematic matching between a suitable hadronic theory for spin-1 fields and the corresponding LQCD calculations in nonuniform background fields. Although LQCD studies of partial-wave mixing in the two-nucleon coupled channel can also reveal the noncentral feature of nuclear forces as demonstrated in Refs. Briceno et al. (2013); Briceño et al. (2013); Orginos et al. (2015), only a direct evaluation can incorporate the short-distance contributions to the quadrupole moment Kaplan et al. (1999); Chen et al. 
(1999).666Here we must distinguish the mass quadrupole moment of the deuteron from its electric quadrupole moment. It is the former that may be related to the S/D mixing in the deuteron channel. Although these two moments are comparable in the physical world, this might not be the case necessarily at unphysical values of quark masses. Other phenomenologically interesting quantities such as the (electric and magnetic) charge radii, which have been calculated so far through studies of the momentum dependence of the form factors,777See Ref. de Divitiis et al. (2012) for an alternative method to extract the form factors at zero momentum transfer by evaluating the derivatives of the correlation functions with respect to external momenta. This method circumvents the need for an extrapolation to zero momentum transfer, and has been extended in Ref. Tiburzi (2014a) to access the charge radii. can be also accessed via the nonuniform field technique. This formalism is equally applicable to the case of scalar and vector mesons so as long as they are nearly stable with regard to strong and EM interactions.888This assumption remains justified for several vector resonances such as the meson at heavy quark masses.
In Sec. II, we present a general effective field theory (EFT) of composite vector particles coupled to perturbatively weak EM fields. Such effective theories have been worked out extensively in both classic and modern literature, with features and results that sometimes differ one another. Here we follow the most natural path, building up the Lagrangian of the theory from the most general set of nonminimal interactions (those arising from the composite nature of the fields) consistent with symmetries of the relativistically covariant theory, in an expansion in . denotes a typical scale of the hadronic theory which we take to be the physical mass of the composite particle. Since the organization of nonminimal couplings is only possible in the low-energy limit, this approach, despite its relativistically covariant formulation, can only be considered to be semi-relativistic. This means that the spin-1 field satisfies a relativistic dispersion relation in the absence of EM fields. However, once these external fields are introduced, one only accounts for those nonminimal interactions that will be relevant in the nonrelativistic (NR) Hamiltonian of the system at a given order in expansion (see Refs. Lee and Tiburzi (2014a, b) for a similar strategy in the case of spin-0 and spin- fields). We next match the low-energy parameters of the semi-relativistic Lagrangian to on-shell processes at low momentum transfers, and discuss subtleties when electromagnetism is only introduced through classical fields. The effective theory developed here relies on a -component representation of the vector fields which reveals a first order (with respect to time derivative) set of equations of motion (EOM). It resembles largely that presented in earlier literature by Sakata and Taketani Sakata and Taketani (1940), Young and Bludman Young and Bludman (1963), and Case Case (1954), but has also new features. In particular, it incorporates the most general nonminimal couplings at and therefore systematically includes operators that probe the electric and magnetic charge radii of the composite particle. The semi-relativistic Green’s functions are then constructed in Sec. IV for the case an electric field varying linearly in a spatial direction. These Green’s functions are related to the quantum-mechanical propagator of anharmonic oscillator and have no closed analytics forms, making it complicated to match them to LQCD calculations.
To match to lattice correlation functions, it is of practical convenience to first deduce an effective NR Hamiltonian via the standard procedure of Foldy, Wouthuysen and Case Foldy and Wouthuysen (1950); Case (1954), as presented in Sec. III. We derive the quantum-mechanical wavefunctions of spin-1 particles in a linearly varying electric field (in space) and their corresponding Green’s functions in Sec. V, and show that, for a particular choice of the field, they are the Landau-level wavefunctions of a particle trapped in a harmonic potential. Despite their simple form, these NR Green’s functions can not be directly matched to LQCD correlation functions, unless a NR transformation is performed on the correlation functions, or alternatively, an inverse transformation is applied to NR Green’s functions, as discussed in Sec. V.1. This leads to at least two practical strategies to constrain the EM couplings of the low-energy theory, namely the quadrupole moment and the electric charge radius, as are presented in Secs. V.1 and V.2: one may try to match the transformed correlation function to the NR Green’s function directly, or alternatively, by projecting the NR Green’s functions onto given Landau eigenstates to identify the NR energy eigenvalues, and match them to the NR limit of energies extracted from the long-(Euclidean) time behavior of (spatially projected) LQCD correlation functions. Finally, the extracted quadrupole moment and charge radius must be extrapolated to their infinite-volume values by performing calculations in multiple volumes, or by determining their volume dependencies through an effective theory that is sensitive to the substructure of the hadron or nuclei, (e.g., chiral perturbation theory in the former case and pionless EFT in the latter). By inputting the knowledge of the charge radius and the quadrupole moment of the deuteron, we have investigated the range of validity of the results obtained in this paper under a single-particle effective theory of the deuteron given various electric field choices. The viability of an extraction of the deuteron’s quadruple moment and the charge radius within the framework of this paper from future LQCD calculations is then discussed, as presented in Sec. V.3. We conclude in Sec. VI by summarizing the results and commenting on future extensions. Additionally, the paper includes two appendices: appendix A is devoted to clarify the gauge dependency of the relativistic Green’s functions of Sec. IV, and Appendix B discusses the relation between the relativistic and NR Green’s functions through an example.
II. Composite Spin-1 Particles Coupled to External Electromagnetic Fields
Any relativistic description of massive vector particles, due to the requirement of Lorentz invariance, must introduce fields that have redundant degrees of freedom. The most obvious choice is to represent the spin-1 field by a Lorentz four-vector, , the so-called Proca field Proca (1936). The redundant degree of freedom of the Proca field, , can be eliminated using the EOM. These EOM are second order differential equations, and their reduced form, i.e., after the elimination of the redundant component, turns out to be non-Hermitian. Consequently, the solutions are in general nonorthogonal and difficult to construct in external EM fields Silenko (2004). To avoid these difficulties, an equivalent formalism can be adopted by casting the Proca equation into coupled first-order differential equations, known as the Duffin-Kemmer equations Duffin (1938); Kemmer (1939). This requires raising the number of degrees of freedom of the field and consequently introducing more redundancies. However, these redundant components can be eliminated in a straightforward manner, leading to EOM that can be readily solved (see the next section). There is a rich literature on relativistic spin-1 fields and their couplings to external EM fields via different first- and second-order formalisms, see for example Refs. Corben and Schwinger (1940); Vijayalakshmi et al. (1979); Santos and Van Dam (1986); Daicic and Frankel (1993); Khriplovich and Pomeransky (1998); Pomeransky and Sen’kov (1999); Silenko (2004, 2005, 2013). Here we follow closely the work of Young and Bludman Young and Bludman (1963) which is a generalization of first-order Sakata-Taketani equations for spin-1 fields Sakata and Taketani (1940). However, due to the spread of existing results, and occasionally inconsistencies among them, we independently work out the construction of an EFT for massive spin- fields towards our goal of deducing Green’s functions of spin-1 fields in a selected external field. In particular, the nonminimal couplings in our Lagrangian, as will be discussed shortly, are more general than those presented in all previous studies, and include all the possible terms needed to consistently match to not only the particle’s electric quadrupole moment but also its electric and magnetic charge radii at (we neglect terms that are proportional to the field-strength squared with coefficients that are matched to polarizabilities). Although fields and interactions have been described in a Lorentz-covariant relativistic framework, the nonminimal couplings to external fields can only be organized in an expansion in the mass of the particle, or in turn a generic hadronic scale above which the single-particle description breaks down.999Although the expansion parameter is taken to be the mass, the size of nonminimal interactions is indeed governed by the compositeness scale of the particle. In fact, as we will see shortly, when these compositeness scales, such as radii and moments, arise in matching the coefficients to on-shell processes, the factors of mass cancel. At low energies, one can truncate these nonminimal interactions at an order such that, after a full NR reduction, the effective theory incorporates information about as many low-energy parameters as one is interested in.
II.1 A semi-relativistic effective field theory
We start by writing down the most general Lorentz-invariant Lagrangian for a single massive spin-1 field, coupled to electromagnetism, that is invariant under charge conjugation, time reversal and parity. We choose to construct the Lagrangian out of a four-component field and a rank-two tensor (). However, as we shall see below, the EOM of the resulting theory constrain the number of independent degrees of freedom to those needed to describe the physical modes of a spin-1 field. The Lagrangian, in terms of and degrees of freedom, can be written as
where denotes the covariant derivate, is the EM field strength tensor, denotes the photon gauge field, and refers to the electric charge of the particle. The superscripts on the coefficients denote the order of the corresponding terms in an expansion in . By we indicate any Lorentz-invariant term bilinear in and with appropriate numbers of covariant derivates and s such that the overall mass dimension is four when accompanied by . Similarly, corresponds to any Lorentz-invariant term with mass dimension four that contains two s. In particular, this latter include -type interactions that are of the same order in the inverse mass expansion as are the nonminimal terms we have considered, and whose coefficients are matched to electric and magnetic polarizabilities of the particle. By assuming a small external field strength, we can neglect these contributions. In order to access polarizabilities, Eq. (1) must be revisited to include such terms.
The coefficients of the leading contributions are fixed to reproduce the canonical normalization of the resulting kinetic term for massive spin-1 particles Proca (1936). We have taken advantage of the following property of the EM field strength tensor to eliminate redundant terms at . Additionally, the number of terms with a given Lorentz structure at each order can be considerably reduced by using the constraint of vanishing surface terms in the action. This constraint is not trivial in the presence of EM background fields which extend to infinite boundaries of spacetime (which is an unphysical but technically convenient situation). To rigorously define a field theory in the background of classical fields, one shall assume background fields are finite range, are adiabatically turned on in distant past and will be adiabatically turned off in far future. Mathematically, this means that one must accompany external fields by a factor of , where is positive and . This ensures that for any finite value of , the background field is independent and nonzero, while as , the field gradually vanishes. This procedure is particularly important when space-time dependent background fields are considered. This is because the sensibility of the expansion of nonminimal couplings in Eq. (1) when is guaranteed only if a mechanism similar to what described above is in place. In a calculation performed in a finite volume, such a procedure does not eliminate the contributions at the boundary. However, in this case one is free to choose the boundary conditions. For example, if periodic boundary conditions (PBCs) are imposed on the fields, the contributions of the surface terms to the action will in fact vanish just as in the infinite volume. As a result, the only relevant interactions in both scenarios have been already included in the Lagrangian in Eq. (1), with coefficients that could be meaningfully constrained by matching to on-shell processes in the infinite spacetime volume. To satisfy PBCs in a finite volume, certain quantization conditions must be imposed on the parameters of the background fields, which can be seen to also prevent potential large background field strengths at the boundaries of the volume, see Ref. Davoudi and Detmold (2015).
The Euler-Lagrange EOM arising from the Lagrangian in Eq. (1) are
where in Eq. (2) (Eq. (3)) denotes any Lorentz-invariant term with mass dimension two (three) with at most one or field. Similarly, in Eq. (2) (Eq. (3)) denotes any Lorentz-invariant terms with mass dimension two (three) with at least two powers of the field strength tensor and at most one or field. Note that from the first equation, it is established that is an antisymmetric tensor up to corrections. We have anticipated this feature in writing down all possible terms at in the Lagrangian Eq. (1), as the nonantisymmetric piece of gives rise to contributions that are of higher orders. This also makes any term containing one and one field at redundant.
In writing the Lagrangian in Eq. (1), we have neglected terms of the type . These can be reduced to terms that have been already included in the Lagrangian at this order using the EOM. A number of inconsistencies might occur when the EOM operators are naively discarded in the presence of background fields. However, as is discussed in Refs. Lee and Tiburzi (2014a, b), the neglected terms in the Lagrangian only modify Green’s functions by overall spacetime-independent factors that can be safely neglected. The other sets of operators at that we have taken the liberty to exclude due to the constraint from the EOM are those containing at least one . These vanish up to corrections that scale as (see Eqs. (2) and (3) above), and therefore give rise to higher order terms, i.e., , in the Lagrangian.101010According to Refs. Lee and Tiburzi (2014a, b), the EOM operators in fact must be given special care only in the NR theory. The contribution from these operators to on-shell processes could be nontrivial in situations where QED is introduced through a background EM field. Given that we follow a direct NR reduction of the relativistic theory, all such subtleties will be automatically taken care of. In particular, it is notable that the semi-relativistic Lagrangian with a background electric field up to generates terms of the type in the NR Hamiltonian, see Sec. III. This is despite the fact that we have already neglected terms of in the semi-relativistic Lagrangian. These are the type of contributions that are shown to correspond to an EOM operator in the scalar NR Lagrangian, and will add to contributions that correspond to a polarizability shift in the energy of the NR particle. It is shown in Refs. Lee and Tiburzi (2014a, b) that by keeping track of these terms, inconsistencies that are observed in the second-order energy shifts of spin-0 and spin- particles in uniform external electric fields can be resolved. Although we do not explicitly work out the polarizability contributions in this paper, we expect the same mechanism to be in place with our framework for the case of spin-1 fields.
Before concluding the discussion of the semi-relativistic Lagrangian, it is worth pointing out that a number of pathologies have been noted in literature for relativistic theories of massive spin-1 (and higher) particles in background (EM or gravitational) fields. One issue that is most relevant to our discussion here is the emergence of superluminal modes from nonminimal couplings (such as quadrupole coupling) to EM fields, as noted by Velo and Zwanziger Velo and Zwanziger (1969). However, as is discussed in Ref. Porrati and Rahman (2008), the acasuality arising from nonminimal interactions are manifest as singularities (that can not be removed by any field redefinition) when one takes the limit. Therefore, the pathologies associated with these modes arise at a scale which is comparable or higher than the mass of the vector particle. Since the effective theory for nonminimal couplings already assumes a cutoff scale of , these pathologies are not relevant in our discussions. Thus, there in no contradiction to the existence of a well-defined low-energy effective theory that describes interactions of particles with any spin in external fields, as characterized by their EM moments, polarizabilities, and their higher static and quasi-static properties. With the assumption of weak external EM fields, other possibilities discussed in literature, such as the spontaneous EM superconductivity of vacuum due to the charged vector-particle condensation Ambjorn and Olesen (1989a, b); Chernodub (2011), will not be relevant in the framework of this paper.
In what follows, we carry out the matching to on-shell amplitudes at low-momentum transfer to constrain the values of the coefficients in the effective Lagrangian.
II.2 Matching the effective theory to on-shell amplitudes
Electromagnetic current and form-factor decomposition: The form-factor decomposition of the matrix elements of the EM current for spin-1 particles is well known, as is its connection to the EM multipole decomposition of NR charge and current densities, see for example Refs. Arnold et al. (1980); Lorce (2009). We briefly review the relevant discussions; this also serves as an introduction to our conventions.
Considering Lorentz invariance, vector-current conservation and charge-conjugation invariance, the most general form of the matrix element of an EM current, , between on-shell vector particles can be written as
where denotes the initial state of a vector particle with momentum and polarization , and denotes its final state with momentum and polarization , and where the momentum transferred to the final state due to interaction with the EM current is . denotes the polarization vector of the particle with momentum . For massive on-shell particles runs from to . Additionally, and we have defined . Lorentz structures proportional to , and have been discarded by utilizing the following conditions on the polarization vectors: and . Although the right-hand sides of these conditions are modified in external electric, , and magnetic, , fields by terms of , this will not matter for calculating on-shell matrix element as long as the adiabatic procedure described above Eq. (2) is in place to eliminate surface terms in the Lagrangian. By introducing the external fields adiabatically, the asymptotic “in” and “out” states of the theory are free and the corresponding polarization vectors satisfy the noninteracting relations.
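For reference, a commonly used form of this decomposition (conventions assumed by us; signs and normalizations may differ from the omitted equation) is
\[
\langle p',\lambda'|J^\mu|p,\lambda\rangle
= -\,\epsilon'^{*}_{\alpha}\,\epsilon_{\beta}\Big[F_1(Q^2)\,g^{\alpha\beta}(p+p')^{\mu}
+ F_2(Q^2)\big(g^{\mu\beta}q^{\alpha}-g^{\mu\alpha}q^{\beta}\big)
+ F_3(Q^2)\,\frac{q^{\alpha}q^{\beta}}{2M^{2}}(p+p')^{\mu}\Big],
\]
with \(q = p'-p\), \(Q^2 = -q^2\), and \(M\) the mass of the particle.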
To relate the form factors in the decomposition above at low momentum transfer to the low-energy EM properties of the spin-1 particle, one may interpret this current matrix element, when expressed in the Breit frame, as a multipole decomposition of the classical electric and magnetic charge densities. These decompositions are defined through the Sachs form factors,
where and are the Sachs electric and magnetic form factors, respectively, and denotes the value of spin. If the particle was infinitely massive, such interpretation of the relativistic relation (LABEL:eq:current-decomp) would have been exact, and the current matrix element would be precisely the Fourier transform of some classical charge or current density distributed inside the hadron. However, away from this limit, there are small recoil effects at low energies that are hard to characterize in the hadronic theory. In the Breit frame, in which the energy of the transferred photon, , is zero, such effects are minimal as the initial and final states have the same energy. In fact, as is well known, by expressing Eq. (LABEL:eq:current-decomp) in this frame, and by taking the moving-frame polarizations vectors satisfying and , this matrix element resembles the classical forms in Eqs. (5) and (6). This enables one to directly relate the form factors and , to Sachs form factors and . For spin- particles this results in the relations
The electric charge, electric quadrupole moment, and magnetic dipole moment are defined as the zero-momentum-transfer limits of the Coulomb, quadrupole, and magnetic Sachs form factors, respectively,
where the quadrupole moment and the magnetic moment are expressed in the natural units set by the particle's charge and mass. Additionally, the mean-squared electric and magnetic charge radii can be expressed, respectively, as derivatives of the Coulomb and magnetic form factors with respect to the squared momentum transfer, evaluated at zero momentum transfer,
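For concreteness, in one common normalization (an assumption on our part, since the corresponding equations did not survive extraction) the static quantities and radii read
\[
Q = G_Q(0)\;\Big[\tfrac{e}{M^2}\Big],\qquad
\mu = G_M(0)\;\Big[\tfrac{e}{2M}\Big],\qquad
\langle r_E^2\rangle = -6\,\frac{dG_C(Q^2)}{dQ^2}\bigg|_{Q^2=0},\qquad
\langle r_M^2\rangle = -\frac{6}{G_M(0)}\,\frac{dG_M(Q^2)}{dQ^2}\bigg|_{Q^2=0},
\]
with \(Q^2\) the spacelike momentum transfer squared.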
The quadrupole charge radius can be defined similarly from the derivative of the quadrupole Sachs form factor; however, dependence on this radius arises only at higher orders in the expansion than are considered below.
One-photon amplitude from the effective theory: The next step is to evaluate the one-photon amplitude from the effective Lagrangian in Eq. (1). Explicitly, the following quantity
must be evaluated from the Lagrangian in Eq. (1) to match to the form-factor decomposition above. In obtaining this on-shell amplitude, the orthogonality of the momentum vectors to their corresponding polarization vectors can be used once again. Moreover, we use the EOM (see Eq. (2)) to trade one set of fields for the other. A straightforward but slightly lengthy calculation gives
By comparing Eq. (16) with the form-factor decomposition above, and with the aid of Eqs. (10)-(14), the following relations can be deduced,
These fully constrain the values of the four coefficients in the effective Lagrangian as follows:
With the nonminimal interactions constrained by on-shell amplitudes, Eq. (1) can now be used to study properties of spin-1 particles in external fields. This is pursued in the next section through an analysis of the EOM of the vector particle in time-independent but otherwise general electric and magnetic fields, and of their reduced forms in the NR limit.
III. Equations of Motion in External Fields and their Nonrelativistic Reductions
To find the physical solutions of the EOM, one must first eliminate the redundant degrees of freedom of the spin-1 field in Eqs. (2) and (3). This is achieved by eliminating the nondynamical components in favor of the remaining six components of the fields.
Our choice here is justified by noting that these latter are the only dynamical components of the fields (according to Eqs. (2) and (3), the time derivatives of and are absent from the EOM). From Eq. (2) it is manifest that the fields are related to the derivative of the fields
It is also deduced from Eq. (3) that the field can be written in terms of the and fields,
where and . and refer to the scalar and vector EM potentials, respectively. The bold-faced quantities now represent ordinary three-vectors; as a result from here on we do not distinguish the upper and lower indices and let them all represent cartesian spatial indices. The terms that originate from the LHS of Eq. (3) contribute to at or higher. As can be seen from the EOM for the dynamical fields (see below), such terms give rise to contributions that are of or higher and will be neglected in our analysis. By taking into account these relations, and further by assuming time-independent external fields, the coupled EOM for the and fields can be written as
where we have transformed the field to . The line over the derivatives indicates that the operator acts solely on the electric or magnetic field and not on the spin-1 fields following them.
These equations can be cast into an elegant matrix form. This can be achieved by introducing the following matrices
satisfying the usual spin-1 algebra, where \(\epsilon_{ijk}\) is the three-dimensional Levi-Civita tensor. These matrices are closely related to the notion of spin in a NR theory, as will become clear shortly. (Footnote 11: These are the analogues, for spin-1 particles, of the Pauli matrices.) In the following, the EOM are analyzed separately for electric and magnetic fields. This is solely to keep the presentation tractable; the results for simultaneously nonvanishing electric and magnetic fields can be obtained straightforwardly by the same procedure.
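An explicit representation consistent with these statements (our choice of convention) is the adjoint one,
\[
(S_i)_{jk} = -\,i\,\epsilon_{ijk},\qquad
[S_i,S_j] = i\,\epsilon_{ijk}\,S_k,\qquad
\mathbf{S}^2 = s(s+1)\,\mathbb{1}_{3\times 3} = 2\,\mathbb{1}_{3\times 3},
\]
so that, for example, \(S_3\) has \(-i\) in the (1,2) entry, \(+i\) in the (2,1) entry, and zeros elsewhere.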
III.1 An external electric field
For the case of an electric field with no time variation, the EOM for the and fields can be rewritten as
with the aid of spin matrices in Eq. (29). These two equations can be represented by a single EOM for a 6-component vector, conveniently defined as
This equation resembles a Schrödinger equation for the six-component field. (Footnote 12: For this wavefunction, the expectation values of operators are defined by Eq. (33). This imposes the condition of pseudo-Hermiticity on the Hamiltonian, which is clearly satisfied by the Hamiltonians in Eqs. (35) and (48). See Ref. Case (1954) for more details.)
where the semi-relativistic Hamiltonian is
Here the conjugate momentum operator corresponds to the spatial covariant derivative, and the coordinate is consequently promoted to a quantum-mechanical operator (as is any space-dependent function such as the electric field). The Pauli matrices act either on an implicit unit matrix or on the spin-1 matrices through direct multiplication.
The Hamiltonian in Eq. (35) can be separated into even and odd operators: even operators do not mix the upper and lower components of the wavefunction, while odd operators do. The superscript on these operators denotes the order at which they contribute in the expansion in the inverse mass. The odd operators couple the upper and lower components of the wavefunction in the EOM. These equations can be decoupled order by order in this expansion using the familiar Foldy-Wouthuysen-Case (FWC) transformation Foldy and Wouthuysen (1950); Case (1954). Explicitly, one has
where the first unitary transformation removes the odd terms at leading order in the transformed Hamiltonian, leaving only odd terms that are suppressed by additional powers of the inverse mass. The next transformation takes the remaining odd operators and builds a new Hamiltonian that is free of odd terms at the next order as well,
By iteratively performing this transformation, all the odd operators can be eliminated up to the order one desires. Through this procedure, the NR reduction of the semi-relativistic theory can be systematically obtained.
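Schematically (a generic single Foldy-Wouthuysen step in our own notation, not the paper's exact operators), for \(H=\beta M+\mathcal{E}+\mathcal{O}\) with \(\beta\mathcal{E}=\mathcal{E}\beta\) and \(\beta\mathcal{O}=-\mathcal{O}\beta\), the choice \(S=-i\beta\mathcal{O}/2M\) gives
\[
H' = e^{iS} H e^{-iS}
= \beta M + \mathcal{E} + \frac{\beta\,\mathcal{O}^2}{2M}
- \frac{1}{8M^2}\big[\mathcal{O},[\mathcal{O},\mathcal{E}]\big]
+ \mathcal{O}',\qquad
\mathcal{O}' = \frac{\beta}{2M}[\mathcal{O},\mathcal{E}] + \ldots,
\]
where the residual odd piece \(\mathcal{O}'\) is suppressed by \(1/M\) and is removed by the subsequent transformation.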
Following the above procedure, we find the NR Hamiltonian for the case of a nonzero electric field, up to the order considered here. (Footnote 13: A useful formula is the Baker-Campbell-Hausdorff relation, \(e^{A} B e^{-A} = B + [A,B] + \frac{1}{2!}[A,[A,B]] + \cdots\).)
Note that, as expected, this Hamiltonian is invariant under parity and time-reversal, and is no longer proportional to and . Additionally, by utilizing the matching conditions in Eq. (20), (21) and (23), one finds
Since the most general effective Lagrangian was used, with low-energy coefficients that are directly matched to the low-energy EM properties of the spin-1 particle, the expected NR interactions are automatically produced with the desired coefficients: the value of gives the correct coefficient of the spin-orbit interaction in Eq. (41). Moreover, the coefficients of the Darwin term, , and the quadrupole interaction, , are correctly produced to be proportional to the particle’s mean-squared electric charge radius and the quadrupole moment, respectively.
The coefficient of the Darwin (contact) term obtained here differs from that obtained by Young and Bludman Young and Bludman (1963). This is only a definitional issue: if one defines the electric charge radius in Eq. (13) as the derivative, at zero momentum transfer, of the covariant form factor rather than of the Sachs form factor adopted here, both results agree. (Footnote 14: We note, however, that from a physical point of view it is the Sachs form factors that are directly related to the NR charge and current distributions inside hadrons, see Eqs. (5) and (6), and so the current definitions appear more natural; for a discussion of different definitions and the associated confusion, see Ref. Friar et al. (1997).) With our definition of the charge radius, the coefficient of the Darwin term for spin-0 and spin-1 particles Lee and Tiburzi (2014a) turns out to be the same, which is a convenient feature. After accounting for this difference, the Hamiltonian in Eq. (41) is in complete agreement with those presented in Refs. Sakata and Taketani (1940); Case (1954); Young and Bludman (1963), and extends the results in the literature by including all the operators at the order considered. The NR Hamiltonian in Eq. (41) applies straightforwardly to scalar particles in an external electric field by setting the spin matrices to zero.
III.2 An external magnetic field
Eqs. (27) and (28) for the case of an external magnetic field that is constant in time can be rewritten as
with the help of spin-1 matrices in Eq. (29). In terms of the 6-component field introduced in Eq. (32), the EOM reads
with the semi-relativistic Hamiltonian
The decoupling of the EOM for the upper and lower three components of the six-component field can be performed via the FWC procedure detailed above. The result is
b2f1723d2a89cb10 | TY - JOUR AB - Quantum illumination uses entangled signal-idler photon pairs to boost the detection efficiency of low-reflectivity objects in environments with bright thermal noise. Its advantage is particularly evident at low signal powers, a promising feature for applications such as noninvasive biomedical scanning or low-power short-range radar. Here, we experimentally investigate the concept of quantum illumination at microwave frequencies. We generate entangled fields to illuminate a room-temperature object at a distance of 1 m in a free-space detection setup. We implement a digital phase-conjugate receiver based on linear quadrature measurements that outperforms a symmetric classical noise radar in the same conditions, despite the entanglement-breaking signal path. Starting from experimental data, we also simulate the case of perfect idler photon number detection, which results in a quantum advantage compared with the relative classical benchmark. Our results highlight the opportunities and challenges in the way toward a first room-temperature application of microwave quantum circuits. AU - Barzanjeh, Shabir AU - Pirandola, S. AU - Vitali, D AU - Fink, Johannes M ID - 7910 IS - 19 JF - Science Advances TI - Microwave quantum illumination using a digital receiver VL - 6 ER - TY - JOUR AB - Microelectromechanical systems and integrated photonics provide the basis for many reliable and compact circuit elements in modern communication systems. Electro-opto-mechanical devices are currently one of the leading approaches to realize ultra-sensitive, low-loss transducers for an emerging quantum information technology. Here we present an on-chip microwave frequency converter based on a planar aluminum on silicon nitride platform that is compatible with slot-mode coupled photonic crystal cavities. We show efficient frequency conversion between two propagating microwave modes mediated by the radiation pressure interaction with a metalized dielectric nanobeam oscillator. We achieve bidirectional coherent conversion with a total device efficiency of up to ~60%, a dynamic range of 2 × 10^9 photons/s and an instantaneous bandwidth of up to 1.7 kHz. A high fidelity quantum state transfer would be possible if the drive dependent output noise of currently ~14 photons s^−1 Hz^−1 is further reduced. Such a silicon nitride based transducer is in situ reconfigurable and could be used for on-chip classical and quantum signal routing and filtering, both for microwave and hybrid microwave-optical applications. AU - Fink, Johannes M AU - Kalaee, M. AU - Norte, R. AU - Pitanti, A. AU - Painter, O. ID - 8038 IS - 3 JF - Quantum Science and Technology TI - Efficient microwave frequency conversion mediated by a photonics compatible silicon nitride nanobeam oscillator VL - 5 ER - TY - JOUR AB - Practical quantum networks require low-loss and noise-resilient optical interconnects as well as non-Gaussian resources for entanglement distillation and distributed quantum computation. The latter could be provided by superconducting circuits but existing solutions to interface the microwave and optical domains lack either scalability or efficiency, and in most cases the conversion noise is not known. 
In this work we utilize the unique opportunities of silicon photonics, cavity optomechanics and superconducting circuits to demonstrate a fully integrated, coherent transducer interfacing the microwave X and the telecom S bands with a total (internal) bidirectional transduction efficiency of 1.2% (135%) at millikelvin temperatures. The coupling relies solely on the radiation pressure interaction mediated by the femtometer-scale motion of two silicon nanobeams reaching a Vπ as low as 16 μV for sub-nanowatt pump powers. Without the associated optomechanical gain, we achieve a total (internal) pure conversion efficiency of up to 0.019% (1.6%), relevant for future noise-free operation on this qubit-compatible platform. AU - Arnold, Georg M AU - Wulf, Matthias AU - Barzanjeh, Shabir AU - Redchenko, Elena AU - Rueda Sanchez, Alfredo R AU - Hease, William J AU - Hassani, Farid AU - Fink, Johannes M ID - 8529 JF - Nature Communications KW - General Biochemistry KW - Genetics and Molecular Biology KW - General Physics and Astronomy KW - General Chemistry SN - 2041-1723 TI - Converting microwave and telecom photons with a silicon photonic nanomechanical interface VL - 11 ER - TY - JOUR AB - The superconducting circuit community has recently discovered the promising potential of superinductors. These circuit elements have a characteristic impedance exceeding the resistance quantum RQ ≈ 6.45 kΩ which leads to a suppression of ground state charge fluctuations. Applications include the realization of hardware protected qubits for fault tolerant quantum computing, improved coupling to small dipole moment objects and defining a new quantum metrology standard for the ampere. In this work we refute the widespread notion that superinductors can only be implemented based on kinetic inductance, i.e. using disordered superconductors or Josephson junction arrays. We present modeling, fabrication and characterization of 104 planar aluminum coil resonators with a characteristic impedance up to 30.9 kΩ at 5.6 GHz and a capacitance down to ≤ 1 fF, with lowloss and a power handling reaching 108 intra-cavity photons. Geometric superinductors are free of uncontrolled tunneling events and offer high reproducibility, linearity and the ability to couple magnetically - properties that significantly broaden the scope of future quantum circuits. AU - Peruzzo, Matilda AU - Trioni, Andrea AU - Hassani, Farid AU - Zemlicka, Martin AU - Fink, Johannes M ID - 8755 IS - 4 JF - Physical Review Applied TI - Surpassing the resistance quantum with a geometric superinductor VL - 14 ER - TY - JOUR AB - We propose an efficient microwave-photonic modulator as a resource for stationary entangled microwave-optical fields and develop the theory for deterministic entanglement generation and quantum state transfer in multi-resonant electro-optic systems. The device is based on a single crystal whispering gallery mode resonator integrated into a 3D-microwave cavity. The specific design relies on a new combination of thin-film technology and conventional machining that is optimized for the lowest dissipation rates in the microwave, optical, and mechanical domains. We extract important device properties from finite-element simulations and predict continuous variable entanglement generation rates on the order of a Mebit/s for optical pump powers of only a few tens of microwatts. 
We compare the quantum state transfer fidelities of coherent, squeezed, and non-Gaussian cat states for both teleportation and direct conversion protocols under realistic conditions. Combining the unique capabilities of circuit quantum electrodynamics with the resilience of fiber optic communication could facilitate long-distance solid-state qubit networks, new methods for quantum signal synthesis, quantum key distribution, and quantum enhanced detection, as well as more power-efficient classical sensing and modulation. AU - Rueda Sanchez, Alfredo R AU - Hease, William J AU - Barzanjeh, Shabir AU - Fink, Johannes M ID - 7156 JF - npj Quantum Information SN - 2056-6387 TI - Electro-optic entanglement source for microwave to telecom quantum state transfer VL - 5 ER - TY - CONF AB - We demonstrate electro-optic frequency comb generation using a doubly resonant system comprising a whispering gallery mode disk resonator made of lithium niobate mounted inside a three dimensional copper cavity. We observe 180 sidebands centred at 1550 nm. AU - Rueda Sanchez, Alfredo R AU - Sedlmeir, Florian AU - Leuchs, Gerd AU - Kumari, Madhuri AU - Schwefel, Harald G.L. ID - 7233 SN - 9781557528209 T2 - Nonlinear Optics, OSA Technical Digest TI - Resonant electro-optic frequency comb generation in lithium niobate disk resonator inside a microwave cavity ER - TY - JOUR AB - We prove that the observable telegraph signal accompanying the bistability in the photon-blockade-breakdown regime of the driven and lossy Jaynes–Cummings model is the finite-size precursor of what in the thermodynamic limit is a genuine first-order phase transition. We construct a finite-size scaling of the system parameters to a well-defined thermodynamic limit, in which the system remains the same microscopic system, but the telegraph signal becomes macroscopic both in its timescale and intensity. The existence of such a finite-size scaling completes and justifies the classification of the photon-blockade-breakdown effect as a first-order dissipative quantum phase transition. AU - Vukics, A. AU - Dombi, A. AU - Fink, Johannes M AU - Domokos, P. ID - 7451 JF - Quantum SN - 2521-327X TI - Finite-size scaling of the photon-blockade breakdown dissipative quantum phase transition VL - 3 ER - TY - JOUR AB - Recent technical developments in the fields of quantum electromechanics and optomechanics have spawned nanoscale mechanical transducers with the sensitivity to measure mechanical displacements at the femtometre scale and the ability to convert electromagnetic signals at the single photon level. A key challenge in this field is obtaining strong coupling between motion and electromagnetic fields without adding additional decoherence. Here we present an electromechanical transducer that integrates a high-frequency (0.42 GHz) hypersonic phononic crystal with a superconducting microwave circuit. The use of a phononic bandgap crystal enables quantum-level transduction of hypersonic mechanical motion and concurrently eliminates decoherence caused by acoustic radiation. Devices with hypersonic mechanical frequencies provide a natural pathway for integration with Josephson junction quantum circuits, a leading quantum computing technology, and nanophotonic systems capable of optical networking and distributing quantum information. AU - Kalaee, Mahmoud AU - Mirhosseini, Mohammad AU - Dieterle, Paul B. 
AU - Peruzzo, Matilda AU - Fink, Johannes M AU - Painter, Oskar ID - 6053 IS - 4 JF - Nature Nanotechnology SN - 1748-3387 TI - Quantum electromechanics of a hypersonic crystal VL - 14 ER - TY - JOUR AB - Light is a union of electric and magnetic fields, and nowhere is the complex relationship between these fields more evident than in the near fields of nanophotonic structures. There, complicated electric and magnetic fields varying over subwavelength scales are generally present, which results in photonic phenomena such as extraordinary optical momentum, superchiral fields, and a complex spatial evolution of optical singularities. An understanding of such phenomena requires nanoscale measurements of the complete optical field vector. Although the sensitivity of near- field scanning optical microscopy to the complete electromagnetic field was recently demonstrated, a separation of different components required a priori knowledge of the sample. Here, we introduce a robust algorithm that can disentangle all six electric and magnetic field components from a single near-field measurement without any numerical modeling of the structure. As examples, we unravel the fields of two prototypical nanophotonic structures: a photonic crystal waveguide and a plasmonic nanowire. These results pave the way for new studies of complex photonic phenomena at the nanoscale and for the design of structures that optimize their optical behavior. AU - Le Feber, B. AU - Sipe, J. E. AU - Wulf, Matthias AU - Kuipers, L. AU - Rotenberg, N. ID - 6102 IS - 1 JF - Light: Science and Applications SN - 20955545 TI - A full vectorial mapping of nanophotonic light fields VL - 8 ER - TY - JOUR AB - Mechanical systems facilitate the development of a hybrid quantum technology comprising electrical, optical, atomic and acoustic degrees of freedom1, and entanglement is essential to realize quantum-enabled devices. Continuous-variable entangled fields—known as Einstein–Podolsky–Rosen (EPR) states—are spatially separated two-mode squeezed states that can be used for quantum teleportation and quantum communication2. In the optical domain, EPR states are typically generated using nondegenerate optical amplifiers3, and at microwave frequencies Josephson circuits can serve as a nonlinear medium4,5,6. An outstanding goal is to deterministically generate and distribute entangled states with a mechanical oscillator, which requires a carefully arranged balance between excitation, cooling and dissipation in an ultralow noise environment. Here we observe stationary emission of path-entangled microwave radiation from a parametrically driven 30-micrometre-long silicon nanostring oscillator, squeezing the joint field operators of two thermal modes by 3.40 decibels below the vacuum level. The motion of this micromechanical system correlates up to 50 photons per second per hertz, giving rise to a quantum discord that is robust with respect to microwave noise7. Such generalized quantum correlations of separable states are important for quantum-enhanced detection8 and provide direct evidence of the non-classical nature of the mechanical oscillator without directly measuring its state9. This noninvasive measurement scheme allows to infer information about otherwise inaccessible objects, with potential implications for sensing, open-system dynamics and fundamental tests of quantum gravity. In the future, similar on-chip devices could be used to entangle subsystems on very different energy scales, such as microwave and optical photons. 
AU - Barzanjeh, Shabir AU - Redchenko, Elena AU - Peruzzo, Matilda AU - Wulf, Matthias AU - Lewis, Dylan AU - Arnold, Georg M AU - Fink, Johannes M ID - 6609 JF - Nature TI - Stationary entangled radiation from micromechanical motion VL - 570 ER - TY - CONF AB - Optical frequency combs (OFCs) are light sources whose spectra consists of equally spaced frequency lines in the optical domain [1]. They have great potential for improving high-capacity data transfer, all-optical atomic clocks, spectroscopy, and high-precision measurements [2]. AU - Rueda Sanchez, Alfredo R AU - Sedlmeir, Florian AU - Leuchs, Gerd AU - Kuamri, Madhuri AU - Schwefel, Harald G. L. ID - 7032 SN - 9781728104690 T2 - 2019 Conference on Lasers and Electro-Optics Europe & European Quantum Electronics Conference TI - Electro-optic frequency comb generation in lithium niobate whispering gallery mode resonators ER - TY - JOUR AB - High-speed optical telecommunication is enabled by wavelength-division multiplexing, whereby hundreds of individually stabilized lasers encode information within a single-mode optical fibre. Higher bandwidths require higher total optical power, but the power sent into the fibre is limited by optical nonlinearities within the fibre, and energy consumption by the light sources starts to become a substantial cost factor1. Optical frequency combs have been suggested to remedy this problem by generating numerous discrete, equidistant laser lines within a monolithic device; however, at present their stability and coherence allow them to operate only within small parameter ranges2,3,4. Here we show that a broadband frequency comb realized through the electro-optic effect within a high-quality whispering-gallery-mode resonator can operate at low microwave and optical powers. Unlike the usual third-order Kerr nonlinear optical frequency combs, our combs rely on the second-order nonlinear effect, which is much more efficient. Our result uses a fixed microwave signal that is mixed with an optical-pump signal to generate a coherent frequency comb with a precisely determined carrier separation. The resonant enhancement enables us to work with microwave powers that are three orders of magnitude lower than those in commercially available devices. We emphasize the practical relevance of our results to high rates of data communication. To circumvent the limitations imposed by nonlinear effects in optical communication fibres, one has to solve two problems: to provide a compact and fully integrated, yet high-quality and coherent, frequency comb generator; and to calculate nonlinear signal propagation in real time5. We report a solution to the first problem. AU - Rueda Sanchez, Alfredo R AU - Sedlmeir, Florian AU - Kumari, Madhuri AU - Leuchs, Gerd AU - Schwefel, Harald G.L. ID - 6348 IS - 7752 JF - Nature SN - 00280836 TI - Resonant electro-optic frequency comb VL - 568 ER - TY - JOUR AB - Conventional ultra-high sensitivity detectors in the millimeter-wave range are usually cooled as their own thermal noise at room temperature would mask the weak received radiation. The need for cryogenic systems increases the cost and complexity of the instruments, hindering the development of, among others, airborne and space applications. In this work, the nonlinear parametric upconversion of millimeter-wave radiation to the optical domain inside high-quality (Q) lithium niobate whispering-gallery mode (WGM) resonators is proposed for ultra-low noise detection. 
We experimentally demonstrate coherent upconversion of millimeter-wave signals to a 1550 nm telecom carrier, with a photon conversion efficiency surpassing the state-of-the-art by 2 orders of magnitude. Moreover, a theoretical model shows that the thermal equilibrium of counterpropagating WGMs is broken by overcoupling the millimeter-wave WGM, effectively cooling the upconverted mode and allowing ultra-low noise detection. By theoretically estimating the sensitivity of a correlation radiometer based on the presented scheme, it is found that room-temperature radiometers with better sensitivity than state-of-the-art high-electron-mobility transistor (HEMT)-based radiometers can be designed. This detection paradigm can be used to develop room-temperature instrumentation for radio astronomy, earth observation, planetary missions, and imaging systems. AU - Botello, Gabriel AU - Sedlmeir, Florian AU - Rueda Sanchez, Alfredo R AU - Abdalmalak, Kerlos AU - Brown, Elliott AU - Leuchs, Gerd AU - Preu, Sascha AU - Segovia Vargas, Daniel AU - Strekalov, Dmitry AU - Munoz, Luis AU - Schwefel, Harald ID - 22 IS - 10 JF - Optica SN - 23342536 TI - Sensitivity limits of millimeter-wave photonic radiometers based on efficient electro-optic upconverters VL - 5 ER - TY - JOUR AB - There has been significant interest recently in using complex quantum systems to create effective nonreciprocal dynamics. Proposals have been put forward for the realization of artificial magnetic fields for photons and phonons; experimental progress is fast making these proposals a reality. Much work has concentrated on the use of such systems for controlling the flow of signals, e.g., to create isolators or directional amplifiers for optical signals. In this Letter, we build on this work but move in a different direction. We develop the theory of and discuss a potential realization for the controllable flow of thermal noise in quantum systems. We demonstrate theoretically that the unidirectional flow of thermal noise is possible within quantum cascaded systems. Viewing an optomechanical platform as a cascaded system we show here that one can ultimately control the direction of the flow of thermal noise. By appropriately engineering the mechanical resonator, which acts as an artificial reservoir, the flow of thermal noise can be constrained to a desired direction, yielding a thermal rectifier. The proposed quantum thermal noise rectifier could potentially be used to develop devices such as a thermal modulator, a thermal router, and a thermal amplifier for nanoelectronic devices and superconducting circuits. AU - Barzanjeh, Shabir AU - Aquilina, Matteo AU - Xuereb, André ID - 436 IS - 6 JF - Physical Review Letters TI - Manipulating the flow of thermal noise in quantum devices VL - 120 ER - TY - JOUR AB - In this paper, we discuss biological effects of electromagnetic (EM) fields in the context of cancer biology. In particular, we review the nanomechanical properties of microtubules (MTs), the latter being one of the most successful targets for cancer therapy. We propose an investigation on the coupling of electromagnetic radiation to mechanical vibrations of MTs as an important basis for biological and medical applications. In our opinion, optomechanical methods can accurately monitor and control the mechanical properties of isolated MTs in a liquid environment. 
Consequently, studying nanomechanical properties of MTs may give useful information for future applications to diagnostic and therapeutic technologies involving non-invasive externally applied physical fields. For example, electromagnetic fields or high intensity ultrasound can be used therapeutically avoiding harmful side effects of chemotherapeutic agents or classical radiation therapy. AU - Salari, Vahid AU - Barzanjeh, Shabir AU - Cifra, Michal AU - Simon, Christoph AU - Scholkmann, Felix AU - Alirezaei, Zahra AU - Tuszynski, Jack ID - 287 IS - 8 JF - Frontiers in Bioscience - Landmark TI - Electromagnetic fields and optomechanics In cancer diagnostics and treatment VL - 23 ER - TY - JOUR AB - Spontaneous emission spectra of two initially excited closely spaced identical atoms are very sensitive to the strength and the direction of the applied magnetic field. We consider the relevant schemes that ensure the determination of the mutual spatial orientation of the atoms and the distance between them by entirely optical means. A corresponding theoretical description is given accounting for the dipole-dipole interaction between the two atoms in the presence of a magnetic field and for polarizations of the quantum field interacting with magnetic sublevels of the two-atom system. AU - Redchenko, Elena AU - Makarov, Alexander AU - Yudson, Vladimir ID - 307 IS - 4 JF - Physical Review A - Atomic, Molecular, and Optical Physics TI - Nanoscopy of pairs of atoms by fluorescence in a magnetic field VL - 97 ER - TY - CONF AB - There is currently significant interest in operating devices in the quantum regime, where their behaviour cannot be explained through classical mechanics. Quantum states, including entangled states, are fragile and easily disturbed by excessive thermal noise. Here we address the question of whether it is possible to create non-reciprocal devices that encourage the flow of thermal noise towards or away from a particular quantum device in a network. Our work makes use of the cascaded systems formalism to answer this question in the affirmative, showing how a three-port device can be used as an effective thermal transistor, and illustrates how this formalism maps onto an experimentally-realisable optomechanical system. Our results pave the way to more resilient quantum devices and to the use of thermal noise as a resource. AU - Xuereb, André AU - Aquilina, Matteo AU - Barzanjeh, Shabir ED - Andrews, D L ED - Ostendorf, A ED - Bain, A J ED - Nunzi, J M ID - 155 TI - Routing thermal noise through quantum networks VL - 10672 ER - TY - CONF AB - We present results on nonlinear electro-optical conversion of microwave radiation into the optical telecommunication band with more than 0.1% photon number conversion efficiency with MHz bandwidth, in a crystalline whispering gallery mode resonator AU - Rueda Sanchez, Alfredo R AU - Sedlmeir, Florian AU - Collodo, Michele AU - Vogl, Ulrich AU - Stiller, Birgit AU - Schunk, Gerhard AU - Strekalov, Dmitry AU - Marquardt, Christoph AU - Fink, Johannes M AU - Painter, Oskar AU - Leuchs, Gerd AU - Schwefel, Harald ID - 485 SN - 978-155752820-9 TI - Single sideband microwave to optical photon conversion-an-electro-optic-realization VL - F54 ER - TY - JOUR AB - Microtubules provide the mechanical force required for chromosome separation during mitosis. However, little is known about the dynamic (high-frequency) mechanical properties of microtubules. 
Here, we theoretically propose to control the vibrations of a doubly clamped microtubule by tip electrodes and to detect its motion via the optomechanical coupling between the vibrational modes of the microtubule and an optical cavity. In the presence of a red-detuned strong pump laser, this coupling leads to optomechanical-induced transparency of an optical probe field, which can be detected with state-of-the art technology. The center frequency and line width of the transparency peak give the resonance frequency and damping rate of the microtubule, respectively, while the height of the peak reveals information about the microtubule-cavity field coupling. Our method opens the new possibilities to gain information about the physical properties of microtubules, which will enhance our capability to design physical cancer treatment protocols as alternatives to chemotherapeutic drugs. AU - Barzanjeh, Shabir AU - Salari, Vahid AU - Tuszynski, Jack AU - Cifra, Michal AU - Simon, Christoph ID - 700 IS - 1 JF - Physical Review E Statistical Nonlinear and Soft Matter Physics SN - 24700045 TI - Optomechanical proposal for monitoring microtubule mechanical vibrations VL - 96 ER - TY - JOUR AB - We present the fabrication and characterization of an aluminum transmon qubit on a silicon-on-insulator substrate. Key to the qubit fabrication is the use of an anhydrous hydrofluoric vapor process which selectively removes the lossy silicon oxide buried underneath the silicon device layer. For a 5.6 GHz qubit measured dispersively by a 7.1 GHz resonator, we find T1 = 3.5 μs and T∗2 = 2.2 μs. This process in principle permits the co-fabrication of silicon photonic and mechanical elements, providing a route towards chip-scale integration of electro-opto-mechanical transducers for quantum networking of superconducting microwave quantum circuits. The additional processing steps are compatible with established fabrication techniques for aluminum transmon qubits on silicon. AU - Keller, Andrew J AU - Dieterle, Paul AU - Fang, Michael AU - Berger, Brett AU - Fink, Johannes M AU - Painter, Oskar ID - 796 IS - 4 JF - Applied Physics Letters SN - 00036951 TI - Al transmon qubits on silicon on insulator for quantum device integration VL - 111 ER - TY - JOUR AB - Phasenübergänge helfen beim Verständnis von Vielteilchensystemen in der Festkörperphysik und Fluiddynamik bis hin zur Teilchenphysik. Unserer internationalen Kollaboration ist es gelungen, einen neuartigen Phasenübergang in einem Quantensystem zu beobachten [1]. In einem Mikrowellenresonator konnte erstmals die spontane Zustandsänderung von undurchsichtig zu transparent nachgewiesen werden. AU - Fink, Johannes M ID - 797 IS - 3 JF - Physik in unserer Zeit TI - Photonenblockade aufgelöst VL - 48 ER - TY - JOUR AB - Nonreciprocal circuit elements form an integral part of modern measurement and communication systems. Mathematically they require breaking of time-reversal symmetry, typically achieved using magnetic materials and more recently using the quantum Hall effect, parametric permittivity modulation or Josephson nonlinearities. Here we demonstrate an on-chip magnetic-free circulator based on reservoir-engineered electromechanic interactions. Directional circulation is achieved with controlled phase-sensitive interference of six distinct electro-mechanical signal conversion paths. 
The presented circulator is compact, its silicon-on-insulator platform is compatible with both superconducting qubits and silicon photonics, and its noise performance is close to the quantum limit. With a high dynamic range, a tunable bandwidth of up to 30 MHz and an in situ reconfigurability as beam splitter or wavelength converter, it could pave the way for superconducting qubit processors with multiplexed on-chip signal processing and readout. AU - Barzanjeh, Shabir AU - Wulf, Matthias AU - Peruzzo, Matilda AU - Kalaee, Mahmoud AU - Dieterle, Paul AU - Painter, Oskar AU - Fink, Johannes M ID - 798 IS - 1 JF - Nature Communications SN - 20411723 TI - Mechanical on chip microwave circulator VL - 8 ER - TY - JOUR AB - From microwave ovens to satellite television to the GPS and data services on our mobile phones, microwave technology is everywhere today. But one technology that has so far failed to prove its worth in this wavelength regime is quantum communication that uses the states of single photons as information carriers. This is because single microwave photons, as opposed to classical microwave signals, are extremely vulnerable to noise from thermal excitations in the channels through which they travel. Two new independent studies, one by Ze-Liang Xiang at Technische Universität Wien (Vienna), Austria, and colleagues [1] and another by Benoît Vermersch at the University of Innsbruck, also in Austria, and colleagues [2] now describe a theoretical protocol for microwave quantum communication that is resilient to thermal and other types of noise. Their approach could become a powerful technique to establish fast links between superconducting data processors in a future all-microwave quantum network. AU - Fink, Johannes M ID - 1013 IS - 32 JF - Physics TI - Viewpoint: Microwave quantum states beat the heat VL - 10 ER - TY - JOUR AB - Cellulose is the most abundant biopolymer on Earth. Cellulose fibers, such as the one extracted form cotton or woodpulp, have been used by humankind for hundreds of years to make textiles and paper. Here we show how, by engineering light-matter interaction, we can optimize light scattering using exclusively cellulose nanocrystals. The produced material is sustainable, biocompatible, and when compared to ordinary microfiber-based paper, it shows enhanced scattering strength (×4), yielding a transport mean free path as low as 3.5 μm in the visible light range. The experimental results are in a good agreement with the theoretical predictions obtained with a diffusive model for light propagation. AU - Caixeiro, Soraya AU - Peruzzo, Matilda AU - Onelli, Olimpia AU - Vignolini, Silvia AU - Sapienza, Riccardo ID - 1020 IS - 9 JF - ACS Applied Materials and Interfaces SN - 19448244 TI - Disordered cellulose based nanostructures for enhanced light scattering VL - 9 ER - TY - JOUR AB - Nonequilibrium phase transitions exist in damped-driven open quantum systems when the continuous tuning of an external parameter leads to a transition between two robust steady states. In second-order transitions this change is abrupt at a critical point, whereas in first-order transitions the two phases can coexist in a critical hysteresis domain. Here, we report the observation of a first-order dissipative quantum phase transition in a driven circuit quantum electrodynamics system. It takes place when the photon blockade of the driven cavity-atom system is broken by increasing the drive power. 
The observed experimental signature is a bimodal phase space distribution with varying weights controlled by the drive strength. Our measurements show an improved stabilization of the classical attractors up to the millisecond range when the size of the quantum system is increased from one to three artificial atoms. The formation of such robust pointer states could be used for new quantum measurement schemes or to investigate multiphoton phases of finite-size, nonlinear, open quantum systems. AU - Fink, Johannes M AU - Dombi, András AU - Vukics, András AU - Wallraff, Andreas AU - Domokos, Peter ID - 1114 IS - 1 JF - Physical Review X SN - 21603308 TI - Observation of the photon blockade breakdown phase transition VL - 7 ER - TY - CONF AB - Nonlinear electro-optical conversion of microwave radiation into the optical telecommunication band is achieved within a crystalline whispering gallery mode resonator, reaching 0.1% photon number conversion efficiency with MHz bandwidth. AU - Rueda, Alfredo AU - Sedlmeir, Florian AU - Collodo, Michele AU - Vogl, Ulrich AU - Stiller, Birgit AU - Schunk, Gerhard AU - Strekalov, Dmitry AU - Marquardt, Christoph AU - Fink, Johannes M AU - Painter, Oskar AU - Leuchs, Gerd AU - Schwefel, Harald ID - 482 TI - Nonlinear single sideband microwave to optical conversion using an electro-optic WGM-resonator ER - TY - JOUR AB - We study a polar molecule immersed in a superfluid environment, such as a helium nanodroplet or a Bose–Einstein condensate, in the presence of a strong electrostatic field. We show that coupling of the molecular pendular motion, induced by the field, to the fluctuating bath leads to formation of pendulons—spherical harmonic librators dressed by a field of many-particle excitations. We study the behavior of the pendulon in a broad range of molecule–bath and molecule–field interaction strengths, and reveal that its spectrum features a series of instabilities which are absent in the field-free case of the angulon quasiparticle. Furthermore, we show that an external field allows to fine-tune the positions of these instabilities in the molecular rotational spectrum. This opens the door to detailed experimental studies of redistribution of orbital angular momentum in many-particle systems. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim AU - Redchenko, Elena AU - Lemeshko, Mikhail ID - 1206 IS - 22 JF - ChemPhysChem TI - Libration of strongly oriented polar molecules inside a superfluid VL - 17 ER - TY - JOUR AB - Near-field imaging is a powerful tool to investigate the complex structure of light at the nanoscale. Recent advances in near-field imaging have indicated the possibility for the complete reconstruction of both electric and magnetic components of the evanescent field. Here we study the electro-magnetic field structure of surface plasmon polariton waves propagating along subwavelength gold nanowires by performing phase- and polarization-resolved near-field microscopy in collection mode. By applying the optical reciprocity theorem, we describe the signal collected by the probe as an overlap integral of the nanowire's evanescent field and the probe's response function. As a result, we find that the probe's sensitivity to the magnetic field is approximately equal to its sensitivity to the electric field. Through rigorous modeling of the nanowire mode as well as the aperture probe response function, we obtain a good agreement between experimentally measured signals and a numerical model. 
Our findings provide a better understanding of aperture-based near-field imaging of the nanoscopic plasmonic and photonic structures and are helpful for the interpretation of future near-field experiments. AU - Kabakova, Irina AU - De Hoogh, Anouk AU - Van Der Wel, Ruben AU - Wulf, Matthias AU - Le Feber, Boris AU - Kuipers, Laurens ID - 1246 JF - Scientific Reports TI - Imaging of electric and magnetic fields near plasmonic nanowires VL - 6 ER - TY - JOUR AB - Linking classical microwave electrical circuits to the optical telecommunication band is at the core of modern communication. Future quantum information networks will require coherent microwave-to-optical conversion to link electronic quantum processors and memories via low-loss optical telecommunication networks. Efficient conversion can be achieved with electro-optical modulators operating at the single microwave photon level. In the standard electro-optic modulation scheme, this is impossible because both up- and down-converted sidebands are necessarily present. Here, we demonstrate true single-sideband up- or down-conversion in a triply resonant whispering gallery mode resonator by explicitly addressing modes with asymmetric free spectral range. Compared to previous experiments, we show a 3 orders of magnitude improvement of the electro-optical conversion efficiency, reaching 0.1% photon number conversion for a 10 GHz microwave tone at 0.42 mW of optical pump power. The presented scheme is fully compatible with existing superconducting 3D circuit quantum electrodynamics technology and can be used for nonclassical state conversion and communication. Our conversion bandwidth is larger than 1 MHz and is not fundamentally limited. AU - Rueda, Alfredo AU - Sedlmeir, Florian AU - Collodo, Michele AU - Vogl, Ulrich AU - Stiller, Birgit AU - Schunk, Gerhard AU - Strekalov, Dmitry AU - Marquardt, Christoph AU - Fink, Johannes M AU - Painter, Oskar AU - Leuchs, Gerd AU - Schwefel, Harald ID - 1263 IS - 6 JF - Optica TI - Efficient microwave to optical photon conversion: An electro-optical realization VL - 3 ER - TY - JOUR AB - We present a microelectromechanical system, in which a silicon beam is attached to a comb-drive actuator, which is used to tune the tension in the silicon beam and thus its resonance frequency. By measuring the resonance frequencies of the system, we show that the comb-drive actuator and the silicon beam behave as two strongly coupled resonators. Interestingly, the effective coupling rate (1.5 MHz) is tunable with the comb-drive actuator (10%) as well as with a side-gate (10%) placed close to the silicon beam. In contrast, the effective spring constant of the system is insensitive to either of them and changes only by 60.5%. Finally, we show that the comb-drive actuator can be used to switch between different coupling rates with a frequency of at least 10 kHz. AU - Verbiest, Gerard AU - Xu, Duo AU - Goldsche, Matthias AU - Khodkov, Timofiy AU - Barzanjeh, Shabir AU - Von Den Driesch, Nils AU - Buca, Dan AU - Stampfer, Christoph ID - 1339 JF - Applied Physics Letter TI - Tunable mechanical coupling between driven microelectromechanical resonators VL - 109 ER - TY - JOUR AB - Fabrication processes involving anhydrous hydrofluoric vapor etching are developed to create high-Q aluminum superconducting microwave resonators on free-standing silicon membranes formed from a silicon-on-insulator wafer. 
Using this fabrication process, a high-impedance 8.9-GHz coil resonator is coupled capacitively with a large participation ratio to a 9.7-MHz micromechanical resonator. Two-tone microwave spectroscopy and radiation pressure backaction are used to characterize the coupled system in a dilution refrigerator down to temperatures of Tf=11 mK, yielding a measured electromechanical vacuum coupling rate of g0/2π=24.6 Hz and a mechanical resonator Q factor of Qm=1.7×107. Microwave backaction cooling of the mechanical resonator is also studied, with a minimum phonon occupancy of nm≈16 phonons being realized at an elevated fridge temperature of Tf=211 mK. AU - Dieterle, Paul AU - Kalaee, Mahmoud AU - Fink, Johannes M AU - Painter, Oskar ID - 1354 IS - 1 JF - Physical Review Applied TI - Superconducting cavity electromechanics on a silicon-on-insulator platform VL - 6 ER - TY - JOUR AB - Radiation pressure has recently been used to effectively couple the quantum motion of mechanical elements to the fields of optical or microwave light. Integration of all three degrees of freedom—mechanical, optical and microwave—would enable a quantum interconnect between microwave and optical quantum systems. We present a platform based on silicon nitride nanomembranes for integrating superconducting microwave circuits with planar acoustic and optical devices such as phononic and photonic crystals. Using planar capacitors with vacuum gaps of 60 nm and spiral inductor coils of micron pitch we realize microwave resonant circuits with large electromechanical coupling to planar acoustic structures of nanoscale dimensions and femtoFarad motional capacitance. Using this enhanced coupling, we demonstrate microwave backaction cooling of the 4.48 MHz mechanical resonance of a nanobeam to an occupancy as low as 0.32. These results indicate the viability of silicon nitride nanomembranes as an all-in-one substrate for quantum electro-opto-mechanical experiments. AU - Fink, Johannes M AU - Kalaee, Mahmoud AU - Pitanti, Alessandro AU - Norte, Richard AU - Heinzle, Lukas AU - Davanço, Marcelo AU - Srinivasan, Kartik AU - Painter, Oskar ID - 1355 JF - Nature Communications TI - Quantum electromechanics on silicon nitride nanomembranes VL - 7 ER - TY - JOUR AB - We study coherent phonon oscillations and tunneling between two coupled nonlinear nanomechanical resonators. We show that the coupling between two nanomechanical resonators creates an effective phonon Josephson junction, which exhibits two different dynamical behaviors: Josephson oscillation (phonon-Rabi oscillation) and macroscopic self-trapping (phonon blockade). Self-trapping originates from mechanical nonlinearities, meaning that when the nonlinearity exceeds its critical value, the energy exchange between the two resonators is suppressed, and phonon Josephson oscillations between them are completely blocked. An effective classical Hamiltonian for the phonon Josephson junction is derived and its mean-field dynamics is studied in phase space. Finally, we study the phonon-phonon coherence quantified by the mean fringe visibility, and show that the interaction between the two resonators may lead to the loss of coherence in the phononic junction. AU - Barzanjeh, Shabir AU - Vitali, David ID - 1370 IS - 3 JF - Physical Review A - Atomic, Molecular, and Optical Physics TI - Phonon Josephson junction with nanomechanical resonators VL - 93 ER - TY - JOUR AB - Solitons are localized waves formed by a balance of focusing and defocusing effects. 
These nonlinear waves exist in diverse forms of matter yet exhibit similar properties including stability, periodic recurrence and particle-like trajectories. One important property is soliton fission, a process by which an energetic higher-order soliton breaks apart due to dispersive or nonlinear perturbations. Here we demonstrate through both experiment and theory that nonlinear photocarrier generation can induce soliton fission. Using near-field measurements, we directly observe the nonlinear spatial and temporal evolution of optical pulses in situ in a nanophotonic semiconductor waveguide. We develop an analytic formalism describing the free-carrier dispersion (FCD) perturbation and show the experiment exceeds the minimum threshold by an order of magnitude. We confirm these observations with a numerical nonlinear Schrödinger equation model. These results provide a fundamental explanation and physical scaling of optical pulse evolution in free-carrier media and could enable improved supercontinuum sources in gas based and integrated semiconductor waveguides. AU - Husko, Chad AU - Wulf, Matthias AU - Lefrançois, Simon AU - Combrié, Sylvain AU - Lehoucq, Gaëlle AU - De Rossi, Alfredo AU - Eggleton, Benjamin AU - Kuipers, Laurens ID - 1429 JF - Nature Communications TI - Free-carrier-induced soliton fission unveiled by in situ measurements in nanophotonic waveguides VL - 7 ER - TY - CONF AB - We present a coherent microwave to telecom signal converter based on the electro-optical effect using a crystalline WGM-resonator coupled to a 3D microwave cavity, achieving high photon conversion efficiency of 0.1% with MHz bandwidth. AU - Rueda, Alfredo AU - Sedlmeir, Florian AU - Collodo, Michele AU - Vogl, Ulrich AU - Stiller, Birgit AU - Schunk, Georg AU - Strekalov, Dimitry AU - Marquardt, Christoph AU - Fink, Johannes M AU - Painter, Oskar AU - Leuchs, Gerd AU - Schwefel, Harald ID - 1115 TI - Efficient single sideband microwave to optical conversion using a LiNbO inf 3 inf WGM-resonator ER - |
b3aaff9b77bf85eb | Who was Erwin Schrödinger?
When you think about Austrian physicist Erwin Schrödinger, your mind may first turn to his famous thought experiment surrounding a poisoned cat in a box. However, as part of his contributions to quantum mechanics, he is often most celebrated for devising the ‘Schrödinger equation’.
During the 1920s, the field of quantum mechanics emerged, along with a race involving all the top scientists to find ways to describe and explain the motion of some of the smallest building blocks of the universe. Schrödinger’s scientific capabilities had evolved through studying and working at multiple top universities alongside the likes of Albert Einstein. It was while he was working as a professor at the University of Zurich that he delved deep into the research of theoretical physics, allowing him to piece together some of the most important models of his career.
The first of his major breakthroughs came when he began studying the different energy states of electrons in an atom. Building on de Broglie’s earlier hypothesis that particles in an atom behave as waves – similar to light waves – Schrödinger was the first person to capture this wave-like motion in a single equation. The equation is often compared to Newton’s laws of motion in its level of importance to quantum mechanics.
Used across physics and chemistry, Schrödinger’s equation is applied to problems of atomic structure, such as working out where in an atom the electron waves are found. His wave equation also captures the idea of superposition: because the equation is linear, any combination of valid solutions is itself a valid solution, so a quantum system can exist in a blend of several possible states at once.
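To make the ideas of quantised energy levels and superposition concrete, here is a minimal numerical sketch (not part of Schrödinger’s own work): it discretises the time-independent Schrödinger equation for a particle trapped in a box, finds the lowest energy levels, and checks that a combination of two solutions still satisfies the linear equation. The grid size, units and potential are illustrative choices.

```python
import numpy as np

# Illustrative units: hbar = mass = 1; particle confined to a box of length 1.
N = 400
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 with hard walls (V = 0 inside).
main = np.full(N - 2, 1.0 / dx**2)
off = np.full(N - 3, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies, states = np.linalg.eigh(H)
print("lowest numerical levels:", energies[:3])
print("exact box levels       :", [0.5 * (np.pi * n) ** 2 for n in (1, 2, 3)])

# Linearity: if H psi1 = E1 psi1 and H psi2 = E2 psi2, then for any numbers a, b
# the combination a*psi1 + b*psi2 still obeys H(a*psi1 + b*psi2) = a*H psi1 + b*H psi2,
# which is exactly what allows superposed states in the time-dependent equation.
psi1, psi2 = states[:, 0], states[:, 1]
combo = 0.6 * psi1 + 0.8 * psi2
lhs = H @ combo
rhs = 0.6 * (H @ psi1) + 0.8 * (H @ psi2)
print("superposition respected:", np.allclose(lhs, rhs))
```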
Revolutionising the way quantum mechanics was visualised, Schrödinger focused again on superposition in his most renowned thought experiment. For this, he asked people to imagine a cat inside a sealed container. Trapped alongside the cat is a Geiger counter, poison, a hammer and a radioactive substance. In this situation, due to the random process in which the radioactive substance will decay, there is no way of knowing when this will happen. When it does, the activity will be detected by the Geiger counter, which will trigger the hammer to release the poison and eventually lead to the death of the cat.
As this process cannot be seen, the cat can’t be definitively pronounced dead or alive. For this reason, Schrödinger explained that it must be assumed that the cat is in two states – living and deceased – until the box is opened and its contents revealed.
Later on in his life, Schrödinger went on to publish academic books and journals. In one of his most well-known, called What is Life?, he used his expertise in quantum physics to delve into the world of biology and explore how his findings could explain the stability of genetic structure. While later developments and research in this area have led to adaptations of his findings, his work still holds great use in introducing students to quantum mechanics.
Taken together, Schrödinger’s experiments, thought experiments and scientific writing make him one of the most highly regarded physicists of his time. Recognised during his lifetime with prestigious awards, including the 1933 Nobel Prize in Physics, his studies continue to influence the worlds of both science and philosophy to this day.
|
40d6e6bd84ec43c4 |
Bilinear recurrences and addition formulae for hyperelliptic sigma functions
Harry W. Braden, Victor Z. Enolskii and Andrew N.W. Hone School of Mathematics, University of Edinburgh, James Clerk Maxwell Building, Kings Buildings, Mayfield Road, Edinburgh EH9 3JZ, U.K. E-mail: Department of Mathematics, Heriot-Watt University, Edinburgh EH14 4AS, U.K. E-mail: Institute of Mathematics and Statistics, University of Kent, Canterbury CT2 7NF, U.K. E-mail: A.N.W.H
The Somos 4 sequences are a family of sequences satisfying a fourth order bilinear recurrence relation. In recent work, one of us has proved that the general term in such sequences can be expressed in terms of the Weierstrass sigma function for an associated elliptic curve. Here we derive the analogous family of sequences associated with an hyperelliptic curve of genus two defined by the affine model . We show that the sequences associated with such curves satisfy bilinear recurrences of order 8. The proof requires an addition formula which involves the genus two Kleinian sigma function with its argument shifted by the Abelian image of the reduced divisor of a single point on the curve. The genus two recurrences are related to a Bäcklund transformation (BT) for an integrable Hamiltonian system, namely the discrete case (ii) Hénon-Heiles system.
1 Introduction
In recent work [25], one of us has considered fourth order quadratic recurrences of the form
\[ \tau_{n+4}\,\tau_{n} = \alpha\,\tau_{n+3}\,\tau_{n+1} + \beta\,\tau_{n+2}^{2}, \qquad (1.1) \]
where \(\alpha\) and \(\beta\) are constant parameters. Such recurrences arise in the theory of elliptic divisibility sequences [46, 47, 42] and their generalizations, the Somos 4 sequences [40, 44]. In that context, both the parameters \(\alpha, \beta\) and the iterates \(\tau_n\) are integers, or more generally take values in \(\mathbb{Q}\) or a Galois extension, and in that case the sequences have applications in number theory, as they provide a potential source of large prime numbers [14, 16]. Moreover the Somos 4 sequences, defined by a recurrence of the form (1.1), provide a simple example of the Laurent phenomenon: taking the initial data \(\tau_0, \tau_1, \tau_2, \tau_3\) and the parameters \(\alpha, \beta\) as variables, all subsequent terms \(\tau_n\) for \(n \ge 4\) in the sequence are Laurent polynomials in these variables. Fomin and Zelevinsky have proved that this remarkable “Laurentness” property is shared by a variety of other recurrences in one and more dimensions, with applications in combinatorics and commutative algebra (see [19] and references).
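As a concrete illustration of the integrality hidden in such a division-based recurrence, the following short sketch iterates (1.1) with the simplest choice \(\alpha = \beta = 1\) and unit initial data, using exact rational arithmetic; every term nevertheless comes out an integer. The parameters and initial data here are purely illustrative.

```python
from fractions import Fraction

def somos4(alpha, beta, initial, n_terms):
    """Iterate tau_{n+4} tau_n = alpha tau_{n+3} tau_{n+1} + beta tau_{n+2}^2."""
    t = [Fraction(v) for v in initial]
    while len(t) < n_terms:
        t.append((alpha * t[-1] * t[-3] + beta * t[-2] ** 2) / t[-4])
    return t

terms = somos4(Fraction(1), Fraction(1), [1, 1, 1, 1], 15)
print(all(t.denominator == 1 for t in terms))   # True: every term is an integer
print([int(t) for t in terms])                  # 1, 1, 1, 1, 2, 3, 7, 23, 59, 314, ...
```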
In [25] the following theorem was proved:
Theorem 1. The general solution of the quadratic recurrence relation (1.1) takes the form
\[ \tau_n = A\,B^{\,n}\,\frac{\sigma(z_0 + n\kappa)}{\sigma(\kappa)^{n^2}}, \qquad (1.2) \]
where \(A\) and \(B\) are non-zero complex numbers given by the formulae (1.3), and \(\sigma\) denotes the Weierstrass sigma function of an associated elliptic curve
\[ E:\quad y^2 = 4x^3 - g_2 x - g_3. \qquad (1.4) \]
The values \(z_0\), \(\kappa\) and the invariants \(g_2\), \(g_3\) are precisely determined from the initial data and the parameters \(\alpha\), \(\beta\).
In the next section we summarize some facts about elliptic divisibility sequences, Somos 4 sequences and the details of the above theorem. In particular we explain how the result of Theorem 1 is connected to the second order solvable mapping
\[ x_{n+1}\,x_{n-1} = \frac{\alpha x_n + \beta}{x_n^{2}}, \qquad (1.5) \]
which is a degenerate case of the type of mapping studied by Quispel, Roberts and Thompson [38]. (See also [39] for some recent work on the global behaviour of real-valued solutions of such mappings.)
Remark. The case \(\alpha = 0\), which was excluded from the statement of the Theorem in [25], corresponds to \(\kappa\) being a half period, so that \(\wp(\kappa)\) is a branch point of the curve, but then the formula for \(\tau_n\) has a rather trivial alternating form. The map (1.5) is the autonomous version of the discrete Painlevé I equation (qdPI)
which has a continuum limit to the first Painlevé equation [33, 39]. The qdPI map (1.6) has tau-functions that yield a sequence of -polynomials [24], and in the autonomous case this map reduces to (1.5). Matsutani has constructed some particular solutions of (1.5) using elliptic functions, and has also considered certain higher order recurrences associated with genus two hyperelliptic functions [30] (see section 3). The above Theorem guarantees that only elliptic functions are necessary to specify the general solution of the second order map (1.5).
The result of Theorem 1 can also be understood via the addition formula
\[ \sigma(u+v)\,\sigma(u-v) = \sigma(u)^2\,\sigma(v)^2\,\bigl(\wp(v) - \wp(u)\bigr) \qquad (1.7) \]
for the forward and backward shifted Weierstrass sigma function in terms of the function (see e.g. [48]). In section 3 this leads us to derive a higher order generalization of the recurrence relation (1.1) by considering a suitable addition formula for the Kleinian hyperelliptic sigma function associated to a curve of genus two. The hyperelliptic Kleinian sigma functions are a natural extension of Weierstrass elliptic functions to the case of higher genus (see e.g. [3, 9] and references). The addition formula we consider is a special case of the generalized Frobenius-Stickelberger formula in [13, 34], which is the exact genus two analogue of (1.7). The main result of our considerations is to derive an eighth order bilinear recurrence whose terms are given by an analogue of the formula (1.2). As a corollary, we also derive the solution of a family of sixth order nonlinear difference equations in terms of Kleinian functions. The fourth section explains how this recurrence is related to the BT (integrable discretization) of a Hamiltonian system with two degrees of freedom, namely the integrable case (ii) Hénon-Heiles system; this BT first appeared in [22, 23], and was put in an algebro-geometric setting in [29]. The extension to higher genus is briefly discussed in our concluding section.
2 Elliptic divisibility and Somos 4 sequences
The sequence
\[ 1,\; 1,\; -1,\; 1,\; 2,\; -1,\; -3,\; -5,\; 7,\; -4,\; -23,\; 29,\; 59,\; 129,\; -314,\; -65,\; 1529,\; \ldots \qquad (2.1) \]
is an example of an elliptic divisibility sequence. It is obtained from the recurrence
\[ W_{n+4}\,W_{n} = W_{n+3}\,W_{n+1} + W_{n+2}^{2} \qquad (2.2) \]
with initial data taken as
\[ W_1 = W_2 = 1, \qquad W_3 = -1, \qquad W_4 = 1. \]
The sequence can be consistently extended backwards for negative \(n\), to give an antisymmetric sequence with \(W_0 = 0\) and \(W_{-n} = -W_n\). Remarkably, despite the division by \(W_n\) at each iteration of (2.2), the subsequent terms of the sequence are all integers, and they satisfy the divisibility property
\[ W_n \mid W_m \quad \text{whenever} \quad n \mid m. \qquad (2.3) \]
More generally Morgan Ward [46, 47] introduced a family of such antisymmetric sequences defined by recurrences of the form
\[ W_{m+n}\,W_{m-n} = W_{m+1}\,W_{m-1}\,W_{n}^{2} - W_{n+1}\,W_{n-1}\,W_{m}^{2}, \qquad (2.4) \]
which are derived by considering sequences of rational points on an elliptic curve over \(\mathbb{Q}\). To obtain integer sequences of this kind it is required that
\[ W_1 = 1, \qquad W_2, W_3, W_4 \in \mathbb{Z}, \qquad W_2 \mid W_4. \]
Using the addition law on \(E\) and considering the multiples \(nP\) of a single point \(P\), Ward derived the bilinear recurrence (2.4) for \(W_n\), with the general term being written in terms of the sigma function associated with the curve \(E\), as
\[ W_n = \frac{\sigma(n\kappa)}{\sigma(\kappa)^{n^2}}. \qquad (2.5) \]
(See also [17, 42].)
Using the addition formula (1.7) for the Weierstrass sigma function, it is a simple exercise to use the formula (2.5) in order to show that the terms of the elliptic divisibility sequence satisfy the Hankel determinant relation
for all indices. Starting from the Hankel determinant formula, it is then easy to prove by induction that all \(W_n\) are integers with the divisibility property (2.3).
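The integrality and the divisibility property can also be checked directly by machine. The sketch below iterates the order-four recurrence (2.2) from the initial data shown in (2.1) and tests the divisibility property (2.3) for the terms generated; it is a numerical illustration only, not a substitute for the inductive proof.

```python
def eds_terms(n_terms):
    """Elliptic divisibility sequence from (2.2): W_{n+4} W_n = W_{n+3} W_{n+1} + W_{n+2}^2."""
    W = {1: 1, 2: 1, 3: -1, 4: 1}
    for n in range(1, n_terms - 3):
        num = W[n + 3] * W[n + 1] + W[n + 2] ** 2
        assert num % W[n] == 0          # the division at each step is exact
        W[n + 4] = num // W[n]
    return W

W = eds_terms(40)
print([W[n] for n in range(1, 16)])     # 1, 1, -1, 1, 2, -1, -3, -5, 7, -4, -23, ...
# divisibility property (2.3): n | m  implies  W_n | W_m
ok = all(W[m] % W[n] == 0 for n in W for m in W if m % n == 0 and W[n] != 0)
print("divisibility holds:", ok)
```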
If we consider the same recurrence (2.2) but instead take initial data \(W_1 = W_2 = W_3 = W_4 = 1\), then we find the sequence of integers
\[ 1,\; 1,\; 1,\; 1,\; 2,\; 3,\; 7,\; 23,\; 59,\; 314,\; 1529,\; 8209,\; 83313,\; \ldots \qquad (2.7) \]
known as the Somos 4 sequence (see [40, 43]). (In fact this Somos sequence is just obtained by selecting the odd index terms of the elliptic divisibility sequence (2.1), up to an alternating sign.) More generally, following the terminology of [40, 44], we refer to any sequence defined by a bilinear recurrence of the form (1.1) as a Somos 4 sequence, while the particular sequence above is denoted Somos (4). It turns out that any such sequence is associated to a sequence of points on an associated elliptic curve : this fact was proved by algebraic means in the thesis of Swart [44], which refers to unpublished results established independently by both Nelson Stephens and Noam Elkies. In [25], one of us gave an alternative complex analytic proof, leading to the construction of the functional form (1.2) of the general term, as in Theorem 1 above.
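The parenthetical remark about odd-index terms can also be checked mechanically. With the indexing conventions used for (2.1) and (2.7) above, the correspondence takes the form \(a_n = \pm W_{2n-3}\); the exact offset and signs depend on how the two sequences are indexed, so the sketch below should be read as an illustration rather than as the precise statement of the paper.

```python
# Somos (4): a_0 = a_1 = a_2 = a_3 = 1, same recurrence (2.2) with alpha = beta = 1
a = [1, 1, 1, 1]
for _ in range(12):
    a.append((a[-1] * a[-3] + a[-2] ** 2) // a[-4])

# EDS (2.1), extended to negative indices via W_0 = 0, W_{-n} = -W_n
W = {0: 0, 1: 1, 2: 1, 3: -1, 4: 1}
for n in range(1, 30):
    W[n + 4] = (W[n + 3] * W[n + 1] + W[n + 2] ** 2) // W[n]
for n in range(1, 10):
    W[-n] = -W[n]

print([a[n] for n in range(12)])
print([abs(W[2 * n - 3]) for n in range(12)])   # same values as the Somos (4) terms
```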
The approach taken in [25] was to regard equation (1.1) as the bilinear form of an integrable map, analogous to the bilinear equation satisfied by the tau function for a soliton equation [20, 32], and then solve the initial value problem for the bilinear equation with specified initial data \(\tau_0, \tau_1, \tau_2, \tau_3\) and parameters \(\alpha\), \(\beta\). The quantity \(\tau_n\) may be regarded as being the tau function for the second order nonlinear map (1.5), to which it is related by the substitution
\[ x_n = \frac{\tau_{n+1}\,\tau_{n-1}}{\tau_n^{2}}. \]
The map (1.5) has a first integral, given by
\[ J = x_n\,x_{n+1} + \alpha\left(\frac{1}{x_n} + \frac{1}{x_{n+1}}\right) + \frac{\beta}{x_n\,x_{n+1}}. \]
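A quick numerical check of this substitution and of the invariance of \(J\) can be carried out as follows. The sketch builds a Somos 4 sequence for an arbitrary, purely illustrative choice of \(\alpha\), \(\beta\) and initial data, forms the quantities \(x_n\), and confirms that they satisfy the map (1.5) and keep \(J\) constant.

```python
from fractions import Fraction as F

alpha, beta = F(2), F(3)
tau = [F(1), F(1), F(2), F(1)]                 # illustrative initial data
for _ in range(12):                            # the Somos 4 recurrence (1.1)
    tau.append((alpha * tau[-1] * tau[-3] + beta * tau[-2] ** 2) / tau[-4])

# substitution x_n = tau_{n+1} tau_{n-1} / tau_n^2
x = [tau[n + 1] * tau[n - 1] / tau[n] ** 2 for n in range(1, len(tau) - 1)]

# the map (1.5): x_{n+1} x_{n-1} = (alpha x_n + beta) / x_n^2
map_ok = all(x[i + 1] * x[i - 1] == (alpha * x[i] + beta) / x[i] ** 2
             for i in range(1, len(x) - 1))

# first integral J = x_n x_{n+1} + alpha (1/x_n + 1/x_{n+1}) + beta/(x_n x_{n+1})
J = [x[i] * x[i + 1] + alpha * (1 / x[i] + 1 / x[i + 1]) + beta / (x[i] * x[i + 1])
     for i in range(len(x) - 1)]
print("map satisfied:", map_ok)
print("J constant   :", len(set(J)) == 1, "value:", J[0])
```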
The algebraic formula for \(J\) itself implies that the pair \((x_n, x_{n+1})\) lies on an elliptic curve for all \(n\). In fact the general solution of the recurrence can be written in terms of the Weierstrass \(\wp\) function for a curve in the canonical form (1.4), as
\[ x_n = \wp(\kappa) - \wp(z_0 + n\kappa). \]
The construction of this curve and the points solves the initial value problem for (1.5) which then yields the solution (1.2) for the recurrence (1.1). It is convenient for us to summarize the results of [25] by expressing the solution of this initial value problem in the form of an algorithm, as follows:
Step 1: Find the backwards iterate from the initial data, evaluate the quantities and , and use these to calculate the integral
Step 2: Use , , to calculate
This gives the point , with .
Step 3: Construct the invariants of the curve as in (1.4), from the formulae
Step 4: Iterate (1.5) backwards to obtain from and . Hence find the point from the formulae
Step 5: Calculate the values from the elliptic integrals
these should be interpreted as the points in the Jacobian corresponding to the points respectively. Note that because of the involution these values are only defined up to an overall sign, subject to the constraint that as in Step 4. Once \(z_0\) and \(\kappa\) are obtained then \(A\) and \(B\) are found from the formulae (1.3).
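To make Step 1 concrete, the short sketch below carries it out for the Somos (4) sequence: it computes the backward iterate \(\tau_{-1}\), the first two values of \(x_n\), and the value of the integral \(J\), using the formulae for \(x_n\) and \(J\) given earlier in this section. Steps 2 to 5 involve evaluating and inverting elliptic integrals and are not attempted here; the variable names are illustrative, and the normalization used in [25] may differ.

```python
from fractions import Fraction as F

alpha, beta = F(1), F(1)                      # Somos (4) parameters
t0, t1, t2, t3 = F(1), F(1), F(1), F(1)       # Somos (4) initial data

# Step 1: backward iterate from (1.1), tau_{-1} = (alpha tau_2 tau_0 + beta tau_1^2)/tau_3
t_minus1 = (alpha * t2 * t0 + beta * t1 ** 2) / t3

# x_0 = tau_1 tau_{-1} / tau_0^2  and  x_1 = tau_2 tau_0 / tau_1^2
x0 = t1 * t_minus1 / t0 ** 2
x1 = t2 * t0 / t1 ** 2

# the integral J = x_0 x_1 + alpha (1/x_0 + 1/x_1) + beta/(x_0 x_1)
J = x0 * x1 + alpha * (1 / x0 + 1 / x1) + beta / (x0 * x1)
print("tau_{-1} =", t_minus1, " x_0 =", x0, " x_1 =", x1, " J =", J)
```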
Remarks. It is useful to note that the coefficients in the recurrence are given as elliptic functions of \(\kappa\) by
\[ \alpha = \wp'(\kappa)^{2}, \qquad \beta = \wp'(\kappa)^{2}\bigl(\wp(2\kappa) - \wp(\kappa)\bigr). \]
The above solution of the initial value problem establishes an exact correspondence between two sets of six parameters: the invariants \(g_2\), \(g_3\) that specify the elliptic curve \(E\), the two points corresponding to \(z_0\) and \(\kappa\), and the prefactors \(A\), \(B\) in (1.2); and the six parameters \(\alpha\), \(\beta\), \(\tau_0\), \(\tau_1\), \(\tau_2\), \(\tau_3\) specifying the constant coefficients and initial data for the recurrence (1.1). In order to interpret (1.5) as an integrable map, it is necessary to further specify a symplectic structure [7, 45]; symplectic coordinates and a Lax pair were given in [25], which make (1.5) equivalent to the discrete odd Mumford system in [29].
As an example of the above algorithm, we present the results for the Somos (4) sequence (2.7), with \(\alpha = \beta = 1\). We find \(\tau_{-1} = 2\), so \(x_0 = 2\), \(x_1 = 1\) gives \(J = 4\) in Step 1. In Steps 2 and 3 we have , set and then find , , and in Step 4 we obtain so that , . Thus the Somos (4) sequence corresponds to the sequence of points on the curve
\[ E:\quad y^2 = 4x^3 - 4x + 1. \]
Finally, evaluating the elliptic integrals and sigma functions to 9 decimal places using the MAPLE computer algebra package (version 8), we find that the curve has real and imaginary half-periods and respectively, while
which yield the other quantities in (1.2) as
However, the sequence of arguments of the sigma function can be written more succinctly as
so that the iterates of the recurrence correspond to the sequence of points on the curve E, where , . The full sequence of points is associated with the elliptic divisibility sequence (2.1).
Elliptic divisibility sequences are currently of considerable interest due to the fact that large prime numbers can occur therein (i.e. may be prime when the index is prime, see [14, 16, 42]). Cantor has considered the division polynomials for odd hyperelliptic curves [12], corresponding to sequences of divisors , which also satisfy higher order recurrences written in terms of Hankel determinants; Matsutani has obtained the functional form of these division polynomials in genus two [31]. In the next section we shall derive an eighth order bilinear recurrence associated with the sequence of divisors , where is the reduced divisor of two points on a hyperelliptic curve of genus two.
3 Addition of one point in genus two
Let us consider an algebraic curve of genus two defined by the affine model
\[ y^2 = 4x^5 + \lambda_4 x^4 + \lambda_3 x^3 + \lambda_2 x^2 + \lambda_1 x + \lambda_0, \qquad (3.1) \]
which realizes the curve as a two-sheeted covering of the Riemann sphere with branch points in the complex plane plus a single branch point at infinity. The vectors of canonical holomorphic differentials and canonical meromorphic (second kind) differentials are denoted
respectively. If we let denote the canonical homology basis for the compact Riemann surface corresponding to , with non-vanishing intersections , then the matrices of - and -periods are given by
The Jacobian of is the complex torus , where is the lattice generated by the periods of canonical holomorphic differentials. The elements of the symmetric product can be identified with degree zero divisors , which are mapped to by the Abel map:
(where here we are basing the map at ).
The Kleinian sigma function \(\sigma(u)\), which is a quasiperiodic function of \(u = (u_1, u_2)\), is the genus two analogue of the Weierstrass sigma function. The Kleinian \(\zeta\) and \(\wp\) functions are defined by
\[ \zeta_i(u) = \frac{\partial \log \sigma(u)}{\partial u_i}, \qquad \wp_{ij}(u) = -\frac{\partial^2 \log \sigma(u)}{\partial u_i\,\partial u_j}. \]
We refer the reader to other works such as [3, 9] for a detailed introduction to hyperelliptic curves, Kleinian functions and their definition in terms of Riemann theta functions (see also [5, 13, 34, 35] and references).
Vectors in the theta divisor in Jac() can be characterized by the fact that
\[ \sigma(u) = 0. \]
We wish to take a vector in the theta divisor, given by
so corresponds to the (reduced) divisor of a single point , with .
In [9] (Theorem 4.9) it is proved that the Baker function
defined for and by
satisfies the Schrödinger equation
Note that we have chosen a particular normalization for the Baker function compared with [9], including the denominator , and the principal value symbol in (3.3) denotes the fact that the integral of the meromorphic differential is regularized at infinity.
Let us define two different Baker-Akhiezer functions related by the hyperelliptic involution, as
Then from the proof of Theorem 4.9 in [9] (restricting to ) we have that
is the Bolza polynomial in genus two [6]. Hence it follows that satisfies the Ermakov-Pinney equation with respect to derivatives in the variable , namely
It is a well known classical result of Ermakov (see [21] for references) that the general solution of the Ermakov-Pinney equation (3.7) is just given by a product of two solutions of the Schrödinger equation with Wronskian .
Taking the difference of the equations (3.5) we have
is the Wronskian. Clearly this must be independent of , but we claim that in fact this Wronskian has precisely the value , which means that the product Rewriting this in terms of the sigma function, we can state the following result.
Proposition. The Kleinian sigma function for a hyperelliptic curve (3.1) of genus two satisfies the following formula for addition of a single point on the curve:
In the above, is a generic vector in the Jacobian, is the image of the single point under the Abel map, and is the Bolza polynomial defined by (3.6).
Proof: Starting from Baker’s addition theorem for genus two [3],
where , are generic points in , and multiplying both sides by , the result follows by taking the limit as tends to the theta divisor. It is necessary to use the fact (see e.g. [34]) that the coordinate of the point is given, in terms of derivatives of the sigma function evaluated on the theta divisor , by the expression
This follows from the fact that differentiating with respect to gives, by the chain rule,
for given by (3.2).
Remark. Enolskii and Gibbons recently calculated the exact analogue of (3.8) in genus three. The addition formula (3.8) is a special case of the generalized Frobenius-Stickelberger addition formula in genus two considered in [13, 34]. Onishi has further generalized the Frobenius-Stickelberger formula to hyperelliptic sigma functions for all genera [35], and the special case of the formula corresponding to addition of one point has been applied to the problem of construction of Wannier functions for quasi-periodic finite-gap potentials in [5].
Cantor has constructed the division polynomials for hyperelliptic curves, and obtained certain recurrence relations for them in the paper [12], where in particular an eighth order bilinear recurrence is found in genus two. Up to a suitable normalization, Matsutani has considered the exact analytic expression for these division polynomials, which are equivalent to the sequence of functions
known as hyperelliptic psi-functions [30, 31]. In the following theorem, we present a sequence of tau-functions that generalize these psi-functions and yet satisfy the same recurrence of Somos 8 type.
Theorem 2. Define the sequence by
where is a generic vector in the Jacobian of the genus two curve (3.1), is the image of the single point under the Abel map, denotes the Kleinian sigma function of the curve, and are arbitrary constants. Then the terms of the sequence satisfy a bilinear recurrence of order 8, given by
where the coefficients (independent of ) are given by
Proof. Substituting the expression (3.10) into (3.11) and using Baker’s formula (3.9) together with the result of the Proposition yields an expression of the form
The three functions , on are not linearly dependent (although they do satisfy a nonlinear relation [9, 13], giving the Kummer surface in ). Therefore each of the coefficients , must vanish, which leads to a linear system for the as functions of . This determines the above formulae for the coefficients uniquely in terms of , its derivative , and the evaluated at various multiples of . The terms involving can be removed by making use of the addition formulae (3.9) and (3.8) to yield the expressions (3.12) and (3.13) in terms of and alone. Taking the limit , these are equivalent to Matsutani’s expressions for the coefficients in the eighth order bilinear recurrence for the psi-function (see formula (3.13) in [30]).
Corollary. The sequence of Bolza polynomials
for and , satisfies the sixth order nonlinear difference equation
with the coefficients as given in equations (3.12), and (3.13).
Proof of Corollary. Upon setting
with as in (3.10), and using the addition formula (3.8), the result is an immediate consequence of Theorem 2.
Remarks. Taking as the Abelian image of , the sequence (3.14) corresponds to the linear flow in the Jacobian, or equivalently the sequence of divisors with and , the image of under the hyperelliptic involution. The Bolza polynomial leads to the solution of the Jacobi inversion problem for the curve (3.1) (see Theorem 2.2 in [9], and section 4 below), so that in particular if , then we have
Cantor’s results in [12] concern the sequence of reduced divisors , corresponding to the multiples of a single point on an odd hyperelliptic curve of genus . In particular for he obtains a bilinear recurrence of order 8, which is the degenerate case (, ) of our construction, while the analytic derivation in that case appears in the work of Matsutani [30, 31]. The sixth order difference equation (3.15) appears as equation (3.15) in [30], where the special solutions with are also presented.
The sigma functions of genus odd hyperelliptic curves, given by with a polynomial of odd degree , are known to be tau functions of the Korteweg–deVries (KdV) hierarchy of partial differential equations (see [9] for instance). It is also known that when the curve degenerates completely to , the corresponding sigma function degenerates to a polynomial (see [10, 35]), which gives a rational solution of KdV in terms of a Schur function (see [2] and chapter 14 in [27]). It is instructive to consider the case when the curve (3.1) for degenerates to a singular rational curve:
In that case the Kleinian sigma function degenerates to the Schur function
which is the tau function of the three-pole rational solution of the KdV equation
The theta divisor consists of vectors of the form
satisfying . It is trivial to check that the Schur function satisfies the addition formula (3.8). Defining in terms of the Schur function (3.16) by (3.10), it is easy to verify that this gives a particular solution of the eighth order recurrence (3.11) with
4 BT for the case (ii) Hénon-Heiles system
The integrable case (ii) Hénon-Heiles system is a system of two degrees of freedom defined by the natural Hamiltonian
The coordinates and momenta are canonically conjugate, and Hamilton’s equations
are equivalent to the ordinary differential equation for travelling wave solutions of the fifth order flow in the KdV hierarchy [18]. The equations of motion (4.2) can be written in the form of a Lax equation
where the Lax matrix is
(Note that we have , compared with reference [22].) The Lax equation is the compatibility condition for the linear system
The genus two spectral curve is of the precise form (3.1), namely
being the second independent integral, in involution with i.e. . The integral generates a second commuting flow
Up to a shift of origin, the time variables can be identified with the coordinates respectively on . Using the results of Theorem 2.2 in [9], the solution of the Hénon-Heiles system can be reduced to the Jacobi inversion problem
where the separation coordinates are found from the entry in the Lax matrix (4.3), given as a multiple of the Bolza polynomial by
The separation variables , correspond to the reduced divisor , and they are related to the Kleinian functions by
(see e.g. [13]). Thus the connection with the Bolza polynomial immediately leads to the solution of the Hénon-Heiles system in terms of Kleinian functions, which is
It is shown in [22] that the case (ii) Hénon-Heiles system has a Bäcklund transformation (BT) with parameter , which is a symplectic map with generating function such that . The explicit form of the generating function is
where and are defined by
The BT can be realized as a similarity transformation on the Lax pair (discrete Lax equation)
where and
is the elementary Darboux matrix (see [41]). Clearly from (4.5) the BT preserves the spectrum of , and so maps solutions to solutions.
In fact, the BT was constructed in [22] by making use of the formulae for the Darboux transformation of the Schrödinger equation, since the components of in the linear system (4.4) are given by with . Then the quantity appearing in the Darboux matrix can be given explicitly in terms of the Baker function defined in (3.3) as
and by a simple calculation using ( |
8c2e31699316c26d |
The Mathematics of Flow
The ways in which water meanders through rivers or makes its way through pipes to your kitchen sink are much more complex than you might think. Mathematicians have been trying to model the flow of water and air for centuries in a field known as fluid dynamics, but according to Philip Isett, a new assistant professor of mathematics at Caltech, the problem is incredibly challenging.
"Because fluids are ubiquitous in nature, we really have to grapple with understanding them," he says. "Fluids are hard to describe inherently because they exhibit a very chaotic and erratic kind of motion called turbulence."
Isett received bachelor's degrees in math and economics, with a minor in physics, from the University of Maryland, College Park, in 2008. He earned his PhD in mathematics from Princeton University in 2013. After working at MIT as a C.L.E. Moore Instructor and a National Science Foundation postdoctoral scholar, Isett became an assistant professor at the University of Texas at Austin in 2016. He joined Caltech in 2018, and recently won a Sloan Research Fellowship.
Isett uses partial differential equations to model fluids; in particular, he studies the Euler equations of fluid dynamics, which date back to their namesake, Leonhard Euler (pronounced "Oiler"), an 18th-century Swiss scientist. Recently, Isett solved a problem related to the Euler equations known as Onsager's conjecture, named after its proposer Lars Onsager, who won the Nobel Prize in Chemistry in 1968.
We met with Isett to learn more about fluid dynamics and his love of math.
What are partial differential equations?
The general field I work in, which is a form of calculus, is called nonlinear partial differential equations. Differential equations are used to measure change. The word "partial" in front of differential equations means that we are differentiating with respect to more than one variable, such as position and time.
You can take pretty much any branch of physics and there will be some kind of partial differential equation behind it. In quantum physics, there is the Schrödinger equation; in the general theory of relativity, there are the Einstein equations; and in fluid dynamics, the key equations are the Navier-Stokes and the Euler equations, the latter being what I study.
Why are the equations of fluid dynamics important?
The Navier-Stokes fluid dynamics equations [proposed in 1822 by Claude-Louis Navier and George Gabriel Stokes] are very useful in a practical sense for solving problems related to all sorts of things like the weather, or the air flow around the wings of planes, where you are predicting what will happen. But in a purely mathematical sense, there are fundamental questions we do not know how to answer about the Navier-Stokes equations. In particular, we do not know if the equations break down and if solutions become so irregular that we cannot use them to predict the future.
The Euler equations are a special case of Navier-Stokes where there is zero internal friction, or viscosity. They are especially interesting for studying turbulence, because they describe a limiting regime where the internal friction can be ignored. This is a regime where there is a lot of chaotic motion, an example being the turbulence you see in air or water behind a jet or submarine. This turbulence can even happen when you turn on the sink and water comes out very quickly.
What are you trying to learn with the Euler equations?
We are trying to learn about energy dissipation in these systems. The Euler equations describe a scenario where there is no internal friction, so friction is not what is dissipating the energy but rather something else. Lars Onsager proposed in 1949 that there should exist solutions to the Euler equations that would dissipate kinetic energy without friction and that also would have velocity fluctuations and other properties similar to turbulent flow, thereby linking the concepts of frictionless energy dissipation and turbulence. Building on the work of others and previous work of my own, I was able to prove that Onsager's conjecture is true.
What does this mean in a big-picture sense?
Solving this problem has theoretical implications because it shows that the idea of energy dissipation independent of internal friction, which is something theorized to occur, is compatible with the predictions about velocity fluctuations and turbulence. This offers some philosophical assurance that the ideas in turbulence theory don't necessarily contradict each other. But also, hopefully the math used to prove these statements is the kind of math that will be truly useful for doing future analyses of the fluid equations.
What are you working on now?
We are trying to go further than Onsager's conjecture to show that this energy dissipation happens on a local level. There shouldn't be some parts of the fluid where energy is going up and other parts where energy is going down. Energy should be dissipating everywhere. Solving this problem would bring us closer to more realistically describing physical turbulent flow.
How did you first get interested in math?
I always liked math growing up. I remember when I learned the Pythagorean Theorem. I saw more and more how it was applied to practical problems, for example, to calculate the distance between two points, and I was just so amazed that some person could discover mathematics that could be so useful. I could see that it has a large impact on society.
When people use the word mathematics, they refer to two different things. On the one hand, there is all the math that people have discovered and know and do, and on the other hand, there is the entire universe of mathematics that is yet to be discovered. I picture the math we know as some kind of surface with lots of winding twists and tangles in it that grow as we learn more, while the math that we don't know can be pictured as a higher-dimensional universe containing that surface. The job of a mathematician is to discover this new math we do not know yet.
Written by Whitney Clavin
Whitney Clavin
(626) 395-1944 |
598df2a897fce251 | Download Chapter_2 - Experimental Elementary Particle Physics Group
2. A Complex of Phenomena
2.1 The Spacetime Interval
…and then it was
There interposed a fly,
With blue, uncertain, stumbling buzz,
Between the light and me,
And then the windows failed, and then
I could not see to see.
Emily Dickinson, 1879
The advance of the quantum wave function of any physical system as it passes uniformly
from the event (t,x,y,z) to the event (t+dt, x+dx, y+dy, z+dz) is proportional to the value
of dτ given by

(dτ)² = (dt)² − (dx² + dy² + dz²)/c²

where t,x,y,z are any system of inertial coordinates and c is a constant (the speed of light,
equal to 300 meters per microsecond). The quantity dτ is called the elapsed proper time
of the interval, and it is invariant with respect to any system of inertial coordinates. To
illustrate, consider a muon particle, which has a radioactive mean life of roughly 2 μsec
with respect to its inertial rest frame coordinates. In other words, between the appearance
of a typical muon (arising from, say, the decay of a pion) and its decay there is an interval
of about 2 μsec in terms of the time coordinate of the muon's inertial rest frame, so the
components of this interval are {2 μsec, 0, 0, 0}, and the quantum phase of the particle advances
by an amount proportional to dτ, where

dτ = [(2 μsec)²]^(1/2) = 2 μsec
Now suppose we assess this same physical phenomenon with respect to a relatively
moving system of inertial coordinates, e.g., a system with respect to which the muon
moved from the spatial origin [0,0,0] all the way to the spatial position [980m, -750m,
1270m] before it decayed. With respect to these coordinates, the muon traveled a spatial
distance of 1771 meters. Since the advance of the quantum wave function (i.e., the
proper time) of a system or particle over any interval of its worldline is invariant, the
corresponding time component of this physical interval with respect to these relatively
moving inertial coordinates must be much greater than 2 μsec. If we let (dT,dX,dY,dZ)
denote the components of this interval with respect to the relatively moving system of
inertial coordinates, we must have

(dτ)² = (dT)² − (dX² + dY² + dZ²)/c²

Solving for dT and substituting for the spatial components noted above, we have

dT = [(2 μsec)² + (1771 m / c)²]^(1/2) ≈ 6.23 μsec

This represents the time component of the muon decay interval with respect to the
moving system of inertial coordinates. Since the muon has moved a spatial distance of
1771 meters in 6.23 μsec, we see that its velocity with respect to these coordinates is 284
m/μsec, which is 0.947c.
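The numbers quoted above are easy to reproduce. A short script (working in meters and microseconds, with c taken as the standard figure of roughly 299.8 m/μsec) recovers the 1771 m, 6.23 μsec, and 0.947c of the example:

```python
from math import sqrt

c = 299.792458                         # speed of light in meters per microsecond
d_tau = 2.0                            # proper time between appearance and decay (microseconds)
dx, dy, dz = 980.0, -750.0, 1270.0     # spatial displacement in the moving coordinates (meters)

distance = sqrt(dx**2 + dy**2 + dz**2)       # ~1771 m
dT = sqrt(d_tau**2 + (distance / c)**2)      # time component, ~6.23 microseconds
speed = distance / dT                        # ~284 m/microsecond
print(distance, dT, speed, speed / c)        # speed/c ~ 0.947
```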
The identification of the spacetime interval with quantum phase applies to null intervals
as well, consistent with the fact that the quantum phase of a photon does not advance at
all between its emission and absorption. (For a further discussion of this, see Section
9.10.) Hence the physical significance of a null spacetime interval is that the quantum
state of any system is constant along that interval. In other words, the interval represents
a single quantum state of the system. It follows that the emission and absorption of a
photon must be regarded as, in some sense, a single quantum event.
Note, however, that the quantum phase is path dependent. In other words, two particles
at opposite ends of a lightlike (null) interval do not share the same quantum state unless
the second particle reached that event by passing along that null interval. Hence the
concept of the spacetime interval as a measure of the phase of the quantum wave function
does not conflict with the exclusion principle for fermions such as electrons, because
even though two electrons can be null-separated, they cannot have separated along that
null path, because they have non-zero rest mass. Of course, it is possible for two photons
at opposite ends of a null interval to have reached that condition by progressing along
that interval, in which case they represent the same quantum phase (and in some sense
may be regarded as "the same photon"), but photons are bosons, and hence not excluded
from occupying the same state. In fact, the presence of one photon in a particular
quantum state actually enhances the probability of another photon entering that state.
(This is responsible for the phenomenon of stimulated emission, which is the basis of
operation of lasers.)
In this regard it's interesting to consider neutrinos, which (like electrons) are fermions,
meaning that they have anti-symmetric eigenfunctions, and hence are subject to the Pauli
exclusion principle. On the other hand, neutrinos were traditionally regarded as massless,
meaning they propagate along null intervals. This raises the prospect of two instances of
a neutrino at opposite ends of a null interval, with the second occupying the same
quantum state as the first, in violation of the exclusion principle for fermions. It might be
argued that these two instances are really the same neutrino, and a particle obviously can't
exclude itself from occupying its own state. However, this is somewhat problematic due
to the indistinguishability and the lack of definite identities for individual particles. A
different approach would be to argue that all fermions, including neutrinos, must have
mass, and thus be excluded from traveling along null intervals. The idea that neutrinos
actually do have mass seems to be supported by recent experimental observations, but the
question remains open.
Based on the general identification of the invariant magnitude (proper time) of a timelike
interval with quantum phase along that interval, it follows that all physical processes and
characteristic sequences of events will evolve in proportion to this quantity. The name
"proper time" is appropriate because this quantity represents the most meaningful known
measure of elapsed time along that interval, based on the fact that the quantum state is the
most complete possible description of physical reality. Since not all spacetime intervals
are timelike, we conclude that the temporal relations between events induce only a partial
ordering, rather than a total ordering (as discussed in Section 1.2), because a set of events
can be totally ordered only if they are each inside the future or past null cone of each of
the others. This doesn't hold if any of the pairwise intervals is spacelike. As a
consequence of this partial ordering, between two fixed timelike separated events there
exist timelike paths with different lapses of proper time.
Admittedly a partial ordering of events has been considered unacceptable by some
people, basically because they regard total temporal ordering in a classical Cartesian
setting as an inviolable first principle. Rather than accept partial ordering they prefer to
(more or less arbitrarily) select one particular inertial reference system and declare it to
be the "true" configuration, as in Lorentz's original theory, in an attempt to restore an
unambiguous total temporal ordering to events. They then account for the apparent
differences in elapsed time (as in muon observations) by regarding them as effects of
absolute velocity relative to the "true" frame of reference, again following Lorentz.
However, unlike Lorentz, we now have a theory of quantum mechanics, and the quantum
state of a system gives (arguably) the most complete possible objective description of the
system. Therefore, modern advocates of total temporal ordering face the daunting task of
finding some mechanism underlying quantum mechanics (i.e., hidden variables) to
provide a physical significance for their preferred total ordering. Unfortunately, the only
prospects for a viable hidden-variable theory seem to be things like the explicitly nonlocal contrivances described by David Bohm, which must surely be anathema to those
who seek a physics based on classical Cartesian mechanisms. So, although the theories
of relativity and quantum mechanics are in some respects incongruent, it is nevertheless
true that the (putative) validity and completeness of quantum mechanics constitutes one
of the strongest arguments in favor of the relativistic interpretation of Lorentz invariance.
We should also mention that a tacit assumption has been made above, namely, the
assumption of physical equivalence between instantaneously co-moving frames,
regardless of acceleration. For example, we assume that two co-moving clocks will keep
time at the same instantaneous rate, even if one is accelerating and the other is not. This
is just a hypothesis - we have no a priori reason to rule out physical effects of the 2nd,
3rd, 4th,... time derivatives. It just so happens that when we construct a theory on this
basis, it works pretty well. (Similarly we have no a priori reason to think the field
equations necessarily depend only on the metric and its 1st and 2nd derivatives; but it just so happens that a theory built on that assumption also works well.)
Another way of expressing this "clock hypothesis" is to say that an ideal clock is
unaffected by acceleration, and to regard this as the definition of an "ideal clock", i.e.,
one that compensates for any effects of 2nd or higher derivatives. Of course the physical
significance of this definition arises from the hypothesized fact that acceleration is
absolute, and therefore perfectly detectable (in principle). In contrast, we hypothesize
that velocity is perfectly undetectable, which explains why we cannot define our "ideal
clock" to compensate for velocity (or, for that matter, position). The point is that these
are both assumptions invoked by relativity: (1) the zeroth and first derivatives of position
are perfectly relative and undetectable, and (2) the second and higher derivatives of
position are perfectly absolute and detectable. Most treatments of relativity emphasize
the first assumption, but the second is no less important.
The notion of an ideal clock takes on even more physical significance from the fact that
there exist physical entities (such as vibrating atoms, etc.) in which the intrinsic forces far
exceed any accelerating forces we can apply, so that we have in fact (not just in principle)
the ability to observe virtually ideal clocks. For example, in the Rebka and Pound
experiments it was found that nuclear clocks were slowed by precisely the velocity-dependent
factor predicted by special relativity, even though subject to accelerations up to 10^16 g (which is huge in normal terms, but of
course still small relative to nuclear forces).
It was emphasized in Section 1 that a pulse of light has no inertial rest frame, but this
may seem puzzling at first. The pulse has a well-defined spatial position versus time with
respect to some inertial coordinate system, representing a fixed velocity c relative to that
system, and we know that any system of orthogonal coordinates in uniform non-rotating
motion relative to an inertial coordinate system is also inertial, so why can we not simply
apply the velocity c to the base frame to arrive at the rest frame of the light pulse? How
can an entity have a well-defined velocity and yet have no well-defined rest frame? The
only answer can be that the transformation is singular, i.e., the coordinate system moving
with a uniform speed c relative to an inertial frame is not well defined. The singular
behavior of the transformation corresponds to the fact that the absolute magnitude of the
spacetime intervals along lightlike paths is null. The transformation through a velocity v
from the xt to the x't' coordinates is t' = (t − vx)/(1 − v²)^(1/2) and x' = (x − vt)/(1 − v²)^(1/2),
so it's clear that for v = 1 the individual t' and x' components are undefined, but the ratio
of dt' over dx' remains well-defined, with magnitude 1 and the opposite sign from v. The
singularity of the Lorentz transformation for the speed c suggests that the conception of
light as an entity in itself may be somewhat misleading, and it is often useful to regard
light as simply an interaction between two massive bodies along a null spacetime interval.
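A quick numerical illustration of this singular behavior (added here, using units with c = 1): as v approaches 1 the transformed components of a fixed interval grow without bound, while their ratio settles on magnitude 1 with sign opposite to v.

```python
from math import sqrt

def lorentz(t, x, v):
    """Lorentz transformation of an interval (t, x), units with c = 1."""
    g = 1.0 / sqrt(1.0 - v * v)
    return (t - v * x) * g, (x - v * t) * g

t, x = 1.0, 0.0                        # a sample timelike interval
for v in (0.9, 0.99, 0.999, 0.999999):
    tp, xp = lorentz(t, x, v)
    print(v, tp, xp, tp / xp)          # tp and xp diverge; tp/xp tends to -1
```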
Discussions of special relativity often refer to the use of clocks and reflected light signals
for the evaluation of spacetime intervals. For example, suppose two identical clocks are
moving uniformly with speeds +v and -v along the x axis of a given inertial coordinate
system, and these clocks are set to zero at the intersection of their worldlines. When the
leftward clock indicates the proper time τ1, it emits a pulse of light, which bounces off the
rightward clock when that clock indicates τ2, and arrives back at the leftward clock when
that clock reads τ3. This is illustrated in the drawing below.
By similar triangles we immediately have τ2/τ1 = τ3/τ2, and thus τ2² = τ1τ3. Of course,
this same relation holds good in Galilean spacetime as well (not to mention Euclidean
plane geometry, using distances instead of time intervals), and the reflected signal need
not be a light pulse. Any object moving at the same speed (angle) in both directions with
respect to this coordinate system would serve just as well, and would lead to the same
result that τ2 is the geometric mean of τ1 and τ3. Naturally if we apply any Minkowskian,
Galilean, or Euclidean transformation (respectively), the pictorial angles of the lines will
differ, but the three absolute intervals will remain unchanged.
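The geometric-mean relation is easy to check directly. The following sketch (an illustration added here, in units with c = 1, with the clocks receding at ±v from a common origin) computes τ1, τ2, τ3 for a reflected light signal and verifies τ2² = τ1·τ3:

```python
from math import sqrt, isclose

def bounce_times(v, tau1):
    """Clocks move at -v and +v from a common origin; the left clock emits light at
    proper time tau1, which reflects off the right clock (tau2) and returns (tau3)."""
    gamma = 1.0 / sqrt(1.0 - v * v)
    t_emit = tau1 * gamma                      # coordinate time of emission
    t_bounce = t_emit * (1 + v) / (1 - v)      # rightward light pulse meets x = +v*t
    t_return = t_bounce * (1 + v) / (1 - v)    # leftward pulse meets x = -v*t again
    return tau1, t_bounce / gamma, t_return / gamma

tau1, tau2, tau3 = bounce_times(v=0.6, tau1=1.0)
assert isclose(tau2 ** 2, tau1 * tau3)
print(tau1, tau2, tau3)                        # 1.0, 4.0, 16.0 for v = 0.6
```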
It is, of course, possible to distinguish between the Galilean and Minkowskian cases
based just on the values of the elapsed times, provided we know the relative speeds of the
clocks and the signal. In Galilean spacetime each proper time τj equals the coordinate
time tj, whereas in Minkowski spacetime it equals (tj² − xj²)^(1/2) where xj = v tj. Hence the
proper time τj in Minkowski spacetime is tj(1 − v²)^(1/2). This might seem to imply that the
ratios of proper times are the same in the Galilean and Minkowskian cases, but in fact we
have not made a valid comparison for equal relative speeds between the clocks. In this
example each clock is moving with speed v away from the midpoint, which implies that
the relative speed is 2v in the Galilean case, but only 2v/(1 + v²) in the Minkowskian case.
To give a valid comparison for equal relative speeds between the clocks, let's transform
the events to a system of coordinates such that the left-hand clock is stationary and the
right-hand clock is moving at the speed v. Now this v represents magnitude of the actual
relative speed between the two clocks. We now stipulate that the original signal is
moving with speed u relative to the left-hand clock, and the reflected signal is moving
with speed -u relative to the right-hand clock. The situation is illustrated in the figure below.
The speed, with respect to these coordinates, of the reflected signal is what distinguishes
the Galilean from the Minkowskian case. Letting x2 and t2 denote the coordinates of the
reflection event, and noting that τ1 = t1 and τ3 = t3, we have v = x2/t2 and u = x2/(t2 − τ1).
We also have
Dividing the numerator and denominator of the expression for u by t2, and replacing x2/t2
with v, gives u = v/[1 − (τ1/t2)]. Likewise the above expressions can be written as
Solving these equations for the time ratios, we have
Consequently, depending on whether the metric is Galilean or Minkowskian, the ratio of
t3 over t1 is given by
respectively. If u happens to be unity (meaning that the signals propagate at the speed of
light), these expressions reduce to the squares of the Galilean and relativistic Doppler
shift factors, i.e., 1/(1 − v)² and (1 + v)/(1 − v), discussed more fully in Section 2.4.
Another distinguishing factor between the two metrics is that with the Minkowski metric
the speed of light is invariant with respect to any system of inertial coordinates, so
(arguably) we can even say that it represents the same "u" relative to a spacelike interval
as it does relative to a timelike interval, in order to adhere to our stipulation that the
reflected signal has the speed u relative to "the rest frame of the right-hand clock". Of
course, a spacelike interval cannot actually be the worldline of a clock (or any other
material object), but the invariance of the speed of light under Minkowskian
transformations enables us to rationally apply the same "geometric mean" formula to
determine the magnitudes of spacelike intervals, provided we use light-like signals, as
illustrated below.
In this case we have τ1 = −τ3, so τ2² = −τ3², meaning that squared spacelike intervals are negative.
2.2 Force Laws and Maxwell's Equations
While speaking of this state, I must immediately call your attention to the
curious fact that, although we never lose sight of it, we need by no means
go far in attempting to form an image of it and, in fact, we cannot say
much about it.
Lorentz, 1909
Perhaps the most rudimentary scientific observation is that material objects exhibit a
natural tendency to move in certain circumstances. For example, objects near the surface
of the Earth tend to move in the local "downward" direction, i.e., toward the Earth's
center. The Newtonian approach to describing such tendencies was to imagine a "force
field" representing a vectorial force per unit charge that is applied to any particle at any
given point, and then to postulate that the acceleration vector of each particle equals the
applied force divided by the particle's inertial mass. Thus the "charge" of a particle
determines how strongly that particle couples with a particular kind of force field,
whereas the inertial mass determines how susceptible the particle's velocity is to arbitrary
applied forces. In the case of gravity, the coupling charge happens to be the same as the
inertial mass, denoted by m, but for electric and magnetic forces the coupling charge q
differs from m.
Since the coupling charge and the response coefficient for gravity are identical, it follows
that gravity can only operate in a single directional sense, because changing the sign of m
for a particle would reverse the sense of both the coupling and the response, leaving the
particle's overall behavior unchanged. In other words, if we considered gravitation to
apply a repulsive force to a certain particle by setting the particle's coupling charge to -m,
we would also set its inertial coefficient to -m, so the particle would still accelerate into
the applied force. Of course, the identity of the gravitational coupling and response
coefficients not only implies a unique directional sense, it implies a unique quantitative
response for all material particles, regardless of m. In contrast, the electric and magnetic
coupling charge q is separately specifiable from the inertial coefficient m, so by changing
the sign of q while leaving m constant we can represent either negative or positive
response, and by changing the ratio of q/m we can scale the quantitative response.
According to this classical picture, a small test particle with mass m and electric charge q
at a given location in space is subject to a vectorial force f given by

f = mg + qE + q(v × B)          (1a)

where g is the gravitational field vector, E is the electric field vector, and B is the
magnetic field vector at the given location, and v is the velocity vector of the test particle.
(See Part 1 of the Appendix for a review of vector products such as the cross product
denoted by v × B.) As noted above, the acceleration vector a of the particle is simply f/m,
so we have the equation of motion

a = g + (q/m)E + (q/m)(v × B)
Given the mass, charge, and initial position of a test particle, and the vectors g,E,B for
every point in vicinity of the particle, this equation enables us to compute the particle's
subsequent motion. Notice that acceleration of a test particle due to gravity is
independent of the particle's properties and state of motion (to the first approximation),
whereas the accelerations due to the electric and magnetic fields are both proportional to
the particle's charge divided by its inertial mass. In addition, the contribution of the
magnetic field is a function of the particle's velocity. This dependence on the state of
motion has important consequences, and leads naturally to the unification of the electric
and magnetic fields, but before describing these effects it's worthwhile to briefly review
the effect of the classical gravitational field on the motion of a particle.
The gravitational acceleration field g at a point p due to a distant particle of mass m was
specified classically by Newton's law

g = −m r / r³

where r is the displacement vector (of magnitude r) from the mass particle to the point p.
Noting that r² = x² + y² + z² and r = ix + jy + kz, it's straightforward to verify that the
divergence of the gravitational field g vanishes at any point p away from the mass, i.e.,
we have

∇·g = 0
(See Part 3 of the Appendix for a review of the differential operator notation.) The
field due to multiple mass particles is just the sum of the individual fields, so the
divergence of g due to any configuration of matter vanishes at every point in empty
space. Of course, the field is singular (infinite) at any point containing a finite amount of
mass, so we can't express the field due to a mass point precisely at the point. However, if
we postulate a continuous distribution of gravitational charge (i.e., mass), with a density
ρg specified at every point in a region, then it can be shown that the gravitational
acceleration field at every point satisfies the equation

∇·g = −4πρg
Incidentally, if we define the gravitational potential (a scalar field) due to any particle of
mass m as φ = −m / r where r is the distance from the source particle (and noting that the
potential due to multiple particles is simply additive), it's easy to show that

g = −∇φ
so equations (3) and (4) can be expressed equivalently in terms of the potential, in which
case they are called Laplace's equation and Poisson's equation, respectively. The equation
of motion for a test particle in the absence of any electromagnetic effects is simply a = g,
so equation (2) gives the three components

d²x/dt² = −mx/r³     d²y/dt² = −my/r³     d²z/dt² = −mz/r³

To illustrate the use of these equations of motion, consider a circular path for our test
particle, given by

x = r sin(ωt)     y = r cos(ωt)     z = 0

In this case we see that r is constant and the second derivatives of x and y are −rω²sin(ωt)
and −rω²cos(ωt) respectively. The equation of motion for z is identically satisfied and the
equations for x and y both reduce to r³ω² = m, which is Kepler's third law for circular orbits.
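As a sanity check (an illustration added here, in the same units the text uses, where the gravitational constant is absorbed into m), one can integrate a = −m r/r³ numerically and confirm that a circular orbit with r³ω² = m closes after one period 2π/ω:

```python
from math import sqrt, pi

m = 1.0                  # central mass, in units where the gravitational constant is 1
r = 2.0                  # orbit radius
w = sqrt(m / r**3)       # angular velocity from Kepler's third law r^3 w^2 = m

# Semi-implicit Euler integration of the equation of motion a = -m * r_vec / r^3
x, y = r, 0.0
vx, vy = 0.0, w * r      # circular-orbit speed
dt = 1e-4
for _ in range(int(2 * pi / w / dt)):
    d3 = (x * x + y * y) ** 1.5
    vx += -m * x / d3 * dt
    vy += -m * y / d3 * dt
    x += vx * dt
    y += vy * dt

print(x, y, sqrt(x * x + y * y))   # returns close to (r, 0), with radius ~r
```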
Newton's analysis of gravity into a vectorial force field and a response was spectacularly
successful in quantifying the effects of gravity, and by the beginning of the 20th century
this approach was able to account for nearly all astronomical phenomena in the solar
system within the limits of observational accuracy (the only notable exception being a
slightly anomalous precession in the orbit of the planet Mercury, as discussed in Section
6.2). Based on this success, it was natural that the other forces of nature would be
formalized in a similar way.
The next two most obvious forces that apply to material bodies are the electric and
magnetic forces, represented by the last two terms in equation (1a). If we imagine that all
of space is filled with a mist of tiny electrical charges qi with velocities vi, then we can
define the classical charge density ρe and current density j as follows

ρe = Σ qi / ΔV          j = Σ qi vi / ΔV

where ΔV is an incremental volume of space. For the remainder of this section we will
omit the subscript "e" with the understanding that ρ signifies the electric charge density. If
we let x,y,z denote the position of the incremental quantity of charge, we can write out
the individual components of the current density as

jx = ρ dx/dt     jy = ρ dy/dt     jz = ρ dz/dt
Maxwell's equations for the electro-magnetic fields are

∇·E = ρ          (5a)
∇·B = 0          (5b)
∇×E = −∂B/∂t          (5c)
∇×B = ∂E/∂t + j          (5d)
where E is the electric field, B is the magnetic field. Equations (5a) and (5b) suggest that
the electric and magnetic fields are similar to the gravitational field g, since the
divergences at each point equal the respective charge densities, with the difference being
that the electric charge density may be positive or negative, and there does not exist (as
far as we know) an isolated magnetic charge, i.e., no magnetic monopoles. Equations (5a)
and (5b) are both static equations, in the sense that they do not involve the time
parameter. By themselves they could be taken to indicate that the electric and magnetic
fields are each individually similar to Newton's conception of the gravitational field, i.e.,
instantaneous "force-at-a-distance". (On this static basis we would presumably never
have identified the magnetic field at all, assuming magnetic monopoles don't exist, and
that the universe is not subject to any boundary conditions that caused B to be non-zero.)
However, equations (5c) and (5d) reveal a completely different aspect of the E and B
fields, namely, that they are dynamically linked together, so the fields are not only
functions of each other, but their definitions explicitly involve changes in time. Recall
that the Newtonian gravitational field g was defined totally by the instantaneous spatial
condition expressed by ∇·g = −4πρg, so at any given instant the Newtonian gravitational
field is totally determined by the spatial distribution of mass in that instant, consistent
with the notion that simultaneity is absolute. In contrast, Maxwell's equations indicate
that the fields E and B depend not only on the distribution of charge at a given putative
"instant", but also on the movement of charge (i.e., the current density) and on the rates of
change of the fields themselves at that "instant".
Since these equations contain a mixture of partial derivatives of the fields E and B with
respect to the temporal as well as the spatial coordinates, dimensional consistency
requires that the effective units of space and time must have a fixed relation to each other,
assuming the units of E and B have a fixed relation. Specifically, the ratio of space units
to time units must equal the ratio of electrostatic and electromagnetic units (all with
respect to any frame of reference in which the above equations are applicable). This is the
reason we were able to write the above equations without constant coefficients, because
the fixed absolute ratio between the effective units of measure of time and space enables
us to specify all the variables x,y,z,t in the same units.
Furthermore, this fixed ratio of space to time units has an extremely important physical
significance for electromagnetic fields in empty space, where ρ and j are both zero. To
see this, take the curl of both sides of (5c), which gives

∇×(∇×E) = −∇×(∂B/∂t)

Now, for any arbitrary vector S it's easy to verify the identity

∇×(∇×S) = ∇(∇·S) − ∇²S

Therefore, we can apply this to the left hand side of the preceding equation, and noting
that ∇·E = 0 in empty space, we are left with

−∇²E = −∇×(∂B/∂t)

Also, recall that the order of partial differentiation with respect to two parameters doesn't
matter, so we can re-write the right-hand side of the above expression as

−∂(∇×B)/∂t

Finally, since (5d) gives ∇×B = ∂E/∂t in empty space, the above equation becomes

∂²E/∂t² = ∇²E          (6a)

Similarly we can show that

∂²B/∂t² = ∇²B          (6b)
Equations (6a) and (6b) are just the classical wave equation, which implies that
electromagnetic changes propagate through empty space at a speed of 1 when using
consistent units of space and time. In terms of conventional units this must equal the ratio
of the electrostatic and electromagnetic units, which gives the speed

c = 1/(μ0 ε0)^(1/2)          (7)

where μ0 and ε0 are the permeability and permittivity of the vacuum. To some extent our
choice of units is arbitrary, and in fact we conventionally define our units so that the
permeability constant has the value

μ0 = 4π × 10^-7 kg·m/(amp·sec)²

Since force has units of kg·m/sec² and charge has units of amp·sec, these conventions
determine our units of force and charge, as well as distance, so we can then
(theoretically) use Coulomb's law F = q1q2/(4πε0 r²) to determine the permittivity
constant by measuring the static force that exists between known electric charges at a
certain distance. The best experimental value is

ε0 ≈ 8.854 × 10^-12 amp²·sec⁴/(kg·m³)

Substituting these values into equation (7) gives

c ≈ 2.998 × 10^8 m/sec
This constant of proportionality between the units of space and time is based entirely on
electrostatic and electromagnetic measurements, and it follows from Maxwell's equations
that electromagnetic waves propagate at the speed c in a vacuum. In Section 3.3 we
review the history of attempts to measure the speed of light (which of course for most of
human history was not known to be an electromagnetic phenomenon), but suffice it to
say here that the best measured value for the speed of light is 299792457.4 m/sec, which
agrees with Maxwell's predicted propagation speed for electromagnetic waves to nine
significant digits.
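The arithmetic behind equation (7) is a one-liner; with the conventional values of the vacuum permeability and permittivity (standard SI figures, assumed here rather than taken from the chapter) it reproduces the quoted propagation speed:

```python
from math import pi, sqrt

mu0 = 4 * pi * 1e-7            # vacuum permeability, kg*m/(amp*sec)^2
eps0 = 8.8541878128e-12        # vacuum permittivity, amp^2*sec^4/(kg*m^3)

c = 1.0 / sqrt(mu0 * eps0)
print(c)                       # ~2.9979e8 m/sec, matching the measured speed of light
```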
This was Maxwell's greatest triumph, showing that electromagnetic waves propagate at
the speed of light, from which we infer that light itself consists of electromagnetic waves,
thereby unifying optics and electromagnetism. However, this magnificent result also
presented Maxwell, and other physicists of the late 19th century, with a puzzle that would
baffle them for decades. Equation (7) implies that, assuming the permittivity and
permeability of the vacuum are the same when evaluated at rest with respect to any
inertial frame of reference, in accord with the classical principle of relativity, and
assuming Maxwell's equations are strictly valid in all inertial frames of reference, then it
follows that the speed of light must be independent of the frame of reference. This agrees
with the Galilean principle of relativity, but flatly violates the Galilean transformation
rules, because it does not yield simply additive composition of speeds.
This was the conflict that vexed the young Einstein (age 16) when he was attending "prep
school" in Aarau, Switzerland in 1895, preparing to re-take the entrance examination at
the Zurich Polytechnic. Although he was deficient in the cultural subjects, he already
knew enough mathematics and physics to realize that Maxwell's equations don't support
the existence of a free wave at any speed other than c, which should be a fixed constant
of nature according to the classical principle of relativity. But to admit an invariant speed
seemed impossible to reconcile with the classical transformation rules.
Writing out equations (5d) and (5a) explicitly, we have four partial differential equations
The above equations strongly suggest that the three components of the current density j
and the charge density ought to be combined into a single four-vector, such that each
component is the incremental charge per volume multiplied by the respective component
of the four-velocity of the charge, as shown below

(jx, jy, jz, jt) = ρ (dx/dτ, dy/dτ, dz/dτ, dt/dτ)

where the parameter τ is the proper time of the charge's rest frame. If the charge is
stationary with respect to these x,y,z,t coordinates, then obviously the current density
components vanish, and jt is simply our original charge density ρ. On the other hand, if
the charge is moving with respect to the x,y,z,t coordinates, we acquire a non-vanishing
current density, and we find that the charge density is modified by the ratio dt/dτ.
However, it's worth noting that the incremental volume elements with respect to a
moving frame of reference are also modified by the same Lorentz transformation, which
ensures that the electrical charge on a physical object is invariant for all frames of reference.
We can also see from the four differential equations above that if the arguments of the
partial derivatives on the left-hand side are arranged according to their denominators,
they constitute a perfect anti-symmetric matrix
If we let x1,x2,x3,x4 denote the coordinates x,y,z,t respectively, then equations (5a) and
(5d) can be combined and expressed in the form
In exactly the same way we can combine equations (5b) and (5c) and express them in the form
where the matrix Q is an anti-symmetric matrix defined by
Returning again to equation (1a), we see that in the absence of a gravitational field the
force on a particle with q = m = 1 and velocity v at a point in space where the electric and
magnetic field vectors are E and B is given by

f = E + v × B
In component form this can be written as
Consequently the components of the acceleration are
Thus if the particle is stationary with respect to the original x,y,z,t coordinates, the force
on the particle has the components

(fx, fy, fz) = (Ex, Ey, Ez)
Now consider the same physical situation, but with respect to a system of inertial
coordinates x',y',z',t' , aligned with the original coordinates, but moving in the positive x
direction with speed v. Hence the components of the particle’s velocity in terms of these
coordinates are vx' = −v and vy' = vz' = 0. For any given v there are constants K and k such
that the components of the force parallel and perpendicular to x axis (respectively) are
Naturally the constants K and k both equal 1 at v = 0. From the preceding equations we
see that the components of the electric field with respect to the primed and unprimed
coordinate systems are related according to
By symmetry, replacing v with -v, we also have the reciprocal transformation
We've used the same K and k factors for both transformations, because to the first order
we know k(v) is simply 1, implying that the dependence of k on v is of the second order,
which suggests that K(v) and k(v) are even functions, i.e., K(v) = K(-v) and k(v) = k(-v).
The two equations for the x components directly imply K = 1. Also, substituting the
expression for Ey' into the expression for Ey and solving the resulting equation for Bz' gives
By the same token, substituting the expression for Ez' into the expression for Ez and
solving for By' gives
Therefore, letting φ(v) denote the quantity in square brackets for any given v, the general
transformation equations for the electric and magnetic field components perpendicular to
the velocity are
By analogous reasoning to that used in Section 1.7, we infer that φ(v) = 1, and hence
Therefore, from equation (9), we see that the transformed components of the total
electromagnetic force are
It also follows that the components of the electric and magnetic field give the following
Naturally the field components parallel to the velocity exhibit the corresponding
invariance, i.e.,
from which we infer the final transformation equation Bx' = Bx. So, the complete set of
transformation equations for the electric and magnetic field components from one system
of inertial coordinates to another (with a relative velocity v in the positive x direction) is
Just as the Lorentz transformation for space and time intervals shows that those intervals
are the components of a unified space-time interval, these transformation equations show
that the electric and magnetic fields are components of a unified electro-magnetic field.
The decomposition of the electromagnetic field into electric and magnetic components
depends on the frame of reference. From the invariants noted above we see that, letting
E² and B² denote squared magnitudes of the electric and magnetic field vectors at a given
point, the quantity E² − B² is invariant (as is the dot product E·B), analogous to the
invariant X² − T² for spacetime intervals. The combined electromagnetic field can be
represented by the matrix P defined previously, which transforms as a tensor of rank 2
under Lorentz transformations. So too does the matrix Q, and since Maxwell's equations
can be expressed in terms of P and Q (as shown by equations (8a) and (8b)), we see that
Maxwell's equations are invariant under Lorentz transformations. Moreover, any physical
force consistent with special relativity must transform in accord with (10), because
otherwise a comparison of the forces in different frames of reference would give different results, which could be used to detect absolute motion.
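The transformation just described is easy to implement and test. The sketch below (added here as an illustration, in units with c = 1, using the standard Lorentz transformation of the field components for a boost with speed v along x) confirms numerically that E² − B² and E·B are unchanged:

```python
from math import sqrt
from random import random

def boost_fields(E, B, v):
    """Transform E and B to a frame moving with speed v along x (units with c = 1)."""
    g = 1.0 / sqrt(1.0 - v * v)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    Ep = (Ex, g * (Ey - v * Bz), g * (Ez + v * By))
    Bp = (Bx, g * (By + v * Ez), g * (Bz - v * Ey))
    return Ep, Bp

E = tuple(random() for _ in range(3))
B = tuple(random() for _ in range(3))
Ep, Bp = boost_fields(E, B, 0.8)

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
print(dot(E, E) - dot(B, B), dot(Ep, Ep) - dot(Bp, Bp))   # equal: E^2 - B^2 invariant
print(dot(E, B), dot(Ep, Bp))                             # equal: E.B invariant
```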
2.3 The Inertia of Energy
Please reveal who you are of such fearsome form... I wish to clearly know
you, the primeval being, because I cannot fathom your intention. Lord
Krsna said: I am terrible Time, destroyer of all beings in all worlds, here
to destroy this world. Of those heroic soldiers now arrayed in the
opposing army, even without you, none will be spared.
Bhagavad Gita
The fact that inertial coordinate systems are related by Lorentz transformations (rather
than Galilean transformations) has very profound implications, because acceleration is
not invariant under Lorentz transformations. As a result, the acceleration of an object
subjected to a given force depends on the frame of reference. Since acceleration is a
measure of the object’s inertia, this implies that the object’s “inertial mass” depends on
the frame of reference. Now, the kinetic energy of an object also depends on the frame of
reference, and we find that the variation of kinetic energy is always exactly c2 times the
variation in inertial mass, where c is the speed of light. Thus the Lorentz covariance of
the inertial measures of space and time implies that all forms of energy possess inertia,
which in turn suggests that all inertia represents energy.
To show this quantitatively, let k denote a system of inertial coordinates and let K denote
another such system, with spatially aligned axes, moving with speed v in the positive x
direction relative to k. If a particle P is moving with speed U (in the same direction as v)
relative to K, then the speed u of P relative to the original k coordinates is given by the
composition law for parallel velocities (as derived at the end of Section 1.8)

u = (U + v)/(1 + Uv)

Differentiating with respect to U gives

du/dU = (1 − v²)/(1 + Uv)²

Hence, at the instant when P is momentarily co-moving with the K coordinates (i.e.,
when U = 0, so P is at rest in K, and u = v), we have

du/dU = 1 − v²
If we let t and τ denote the time coordinates of k and K respectively, then from the metric
(c dτ)² = c²(dt)² − (dx)² and the fact that v² = (dx/dt)² along the worldline of P at this
moment, it follows that the incremental lapse of proper time dτ along the worldline of P
as it advances from t to t+dt is dτ = (1 − v²)^(1/2) dt, so we can divide the above
expression by this quantity to give
The quantity a = du/dt is the acceleration of P with respect to the k coordinates, whereas
a0 = dU / dτ is the acceleration of P with respect to the K coordinates (relative to which it
is momentarily at rest). Now, by symmetry, a force F exerted along the axis of motion
between a particle at rest in k on an identical particle P at rest in K must be of equal and
opposite magnitude with respect to both frames of reference. (This is consistent with the
transformation of electromagnetic force derived at the end of Section 2.2.) Also, by
definition, a force of magnitude F applied to a particle of “rest mass” m0 will result in an
acceleration a0 = F/m0 with respect to the reference frame in which the particle is
momentarily at rest. Therefore, using the preceding relation between the accelerations
with respect to the k and K coordinates, we have

F = m0 (1 − v²)^(−3/2) a          (1)
By analogy with the Newtonian equation F = ma, the coefficient of “a” in this expression
is sometimes called the “longitudinal mass”, since it represents the ratio of force to
acceleration along the direction of motion. However, in Newtonian mechanics, force is
also equal to the time derivative of momentum p = mv, and we note that equation (1) can
be written as

F = d/dt [ (m0/(1 − v²)^(1/2)) v ]          (1a)
The coefficient of v inside the square brackets is the inertial mass m (also called
relativistic mass) of the particle relative to the system k. This turns out to be a more
meaningful measure of the inertial content of an object. Since the quantity in the brackets
equals mv, this equation signifies that the momentum of the particle is the integral of Fdt
over an interval in which the particle is accelerated by a force F from rest to velocity v.
We also know that the work done on the particle is the integral of Fds, and this is a
reversible process, i.e., after we accelerate the particle by doing work on it, the particle
can then do an equal amount of work on its surroundings and thereby be decelerated back
to its initial state. Hence the integral of Fds from rest to velocity v is a state variable, and
we will call it the kinetic energy, denoted by E.
For both p and E the results of the integrations are independent of the pattern of
acceleration, so to evaluate these variables for any given v we can assume constant
acceleration “a” throughout the interval. Therefore the integral of Fdt is evaluated from t
= 0 to t = v/a, and since s = (1/2)at², the integral of Fds is evaluated from s = 0 to s =
v²/(2a). Letting the symbol m (without subscript) denote the inertial mass of the particle
given by the ratio p/v, it follows that the inertial mass and the kinetic energy of the
particle at any speed v are given by

m = m0/(1 − v²)^(1/2)          E = m0 [(1 − v²)^(−1/2) − 1]
If the force F were equal to m0a (as in Newtonian mechanics) these two quantities would
equal m0 and (1/2)m0v2 respectively. However, we’ve seen that consistency with
relativistic kinematics requires the force to be given by equation (1). As a result, the
inertial mass is given by m = m0/(1 − v²)^(1/2) (in agreement with equation (1a)), so it
exceeds the rest mass whenever the particle has non-zero velocity. This increase in
inertial mass is exactly proportional to the kinetic energy of the particle, as shown by

E = (m − m0) c²
The exact proportionality between the extra inertia and the extra energy of a moving
particle naturally suggests that the energy itself has contributed the inertia, and this in
turn suggests that all of the particle’s inertia (including its rest inertia m0) corresponds to
some form of energy. This leads to the hypothesis of a very general and important
relation, E = mc2, which signifies a fundamental equivalence between energy and inertial
mass. From this we might imagine that all inertial mass is potentially convertible to
energy, although it's worth noting that this does not follow rigorously from the principles
of special relativity. It is just a hypothesis suggested by special relativity (as it is also
suggested by Maxwell's equations). In 1905 the only experimental test that Einstein could
imagine was to see if a lump of "radium salt" loses weight as it gives off radiation, but of
course that would never be a complete test, because the radium doesn't decay down to
nothing. The same is true with a nuclear bomb, i.e., it's really only the binding energy of
the nucleus that is being converted, so it doesn't demonstrate an entire proton (for
example) being converted into energy. However, today we can observe electrons and
positrons annihilating each other completely, and yielding amounts of energy precisely in
accord with the predictions of special relativity.
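As a rough numerical illustration of that last point, the following short calculation (a sketch using approximate textbook values for the electron mass and the speed of light, which are not quoted in the text) evaluates the energy released when an electron and a positron annihilate, E = 2·me·c2.

# Energy released by electron-positron annihilation, E = 2*m_e*c^2.
# The constants are approximate textbook values, assumed here for illustration.
m_e = 9.109e-31      # electron rest mass, kg
c = 2.998e8          # speed of light, m/s

E_joules = 2 * m_e * c**2
E_MeV = E_joules / 1.602e-13        # 1 MeV is about 1.602e-13 J

print(f"Energy released: {E_joules:.3e} J  (about {E_MeV:.3f} MeV)")
# Expect roughly 1.64e-13 J, i.e. about 1.022 MeV (twice the electron rest energy).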
In the preceding discussion we focused on a particle subjected to a force parallel to the
particle’s direction of motion. As noted above, the symmetry of this situation ensures that
the applied force in terms of the relatively moving coordinates equals the force in terms
of the rest frame of the particle. A similar analysis can be performed for the application
of a force perpendicular to the direction of motion of a particle, although in this case the
force is not symmetrical with respect to the two frames. Indeed we saw in Section 2.2 that
if an electromagnetic force in the rest frame of the particle is F0, then it is F = (1 − v2)1/2 F0
in terms of the inertial coordinates in which the particle is moving with speed v in a
direction perpendicular to the force. We also noted that all kinds of forces must transform
in this same way, because otherwise the deviation from electromagnetic forces could be
used to determine an absolute speed. So, analogously to the longitudinal case, we begin
by writing the composition law for perpendicular velocities (see Section 1.8)
Differentiating with respect to Uy gives
Evaluating this at the instant when Ux = Uy = 0 (so that P is at rest in K, and ux = v), we have
If we again let t and τ denote the time coordinates of k and K respectively, then from the
metric (dτ)2 = c2(dt)2 − (dx)2 and the fact that v2 = (dx/dt)2 it follows that the incremental
lapse of proper time dτ along the worldline of P as it advances from t to t+dt is
dτ = (1 − v2)1/2 dt (taking c = 1), so we can divide the above expression by this quantity to give
The quantity a = duy/dt is the acceleration of P with respect to the k coordinates,
whereas a0 = dUy/dτ is the acceleration of P with respect to the K coordinates (relative
to which it is momentarily at rest). Therefore, the equation F0 = m0a0 becomes
where we have made use of the fact that forces perpendicular to the direction of motion
transform according to F = (1 − v2)1/2 F0 as discussed above. The coefficient of the
acceleration “a” in this equation is sometimes called the “transverse mass”. Comparison
with equation (1) shows that this differs from the “longitudinal mass”, so in general the
ratio of force to acceleration is not a simple scalar. However, if we again evaluate the
inertial mass, this time in the transverse direction, we get
At the instant when ux = v and uy = 0, this reduces to
which is consistent with (2). So again we find that the inertial mass (i.e., the momentum
divided by the velocity) is the same as in the longitudinal case, and hence inertial mass is
a scalar. It’s worth emphasizing that this works only because all forces transform in the
same way as electromagnetic forces.
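As a small numerical sketch of this distinction (units with c = 1, and assuming the standard expressions m0/(1 − v2)3/2 and m0/(1 − v2)1/2 for the longitudinal and transverse force-to-acceleration ratios, consistent with the discussion above), the ratio of force to acceleration depends on direction, while the ratio p/v does not:

# Force-to-acceleration ratios for a particle of rest mass m0 moving at speed v (c = 1).
# longitudinal: force along the motion; transverse: force perpendicular to the motion.
m0 = 1.0
for v in (0.1, 0.5, 0.9):
    gamma = 1.0 / (1.0 - v**2) ** 0.5
    longitudinal = m0 * gamma**3     # "longitudinal mass"
    transverse = m0 * gamma          # "transverse mass"
    inertial = m0 * gamma            # p/v, the scalar inertial (relativistic) mass
    print(f"v = {v:.1f}:  F/a (longitudinal) = {longitudinal:.4f},  "
          f"F/a (transverse) = {transverse:.4f},  p/v = {inertial:.4f}")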
The preceding discussion represents one of the historical lines of thought that led to a
satisfactory basis for relativistic mechanics, but in hindsight the subject can be developed
in a more efficient way. A typical modern approach begins with the definition of
momentum as the product of rest mass and velocity. One formal motivation for this
definition is that the resulting 3-vector is well-behaved under Lorentz transformations, in
the sense that if this quantity is conserved with respect to one inertial frame, it is
automatically conserved with respect to all inertial frames (which would not be true if we
defined momentum in terms of, say, longitudinal mass). Of course, this definition also
agrees with non-relativistic momentum in the limit of low velocities. (The heuristic
technique of deducing the appropriate observable parameters of a theory from the
requirement that they match classical observables in the classical limit was used
extensively in early development of relativity, and later served the same purpose in the
development of quantum mechanics, where it is known as the "Correspondence Principle".)
Based on this definition, the modern approach then simply postulates that momentum is
conserved, and defines relativistic force as the rate of change of momentum with respect
to the proper time of the object. This is essentially Newton's Second Law, motivated
largely by the fact that this definition of "force", together with conservation of
momentum, implies Newton's Third Law (at least in the case of contact forces).
However, from a purely relativistic standpoint, the definition of momentum as a 3-vector
seems incomplete. Its three components are proportional to the derivatives of the three
spatial coordinates x,y,z of the object with respect to the proper time of the object, but
what about the coordinate time t? If we let xj, j = 0, 1, 2, 3 denote the coordinates t,x,y,z,
then it seems natural to consider the 4-vector
where m now denotes the rest mass. We then define the relativistic force 4-vector as the
proper rate of change of momentum, i.e.,
Our correspondence principle easily enables us to identify the three components p1, p2, p3
as just our original momentum 3-vector, but now we have an additional component, p0,
equal to m(dt/dτ), which we will find corresponds to the "energy" E of the object. In full
four-dimensional spacetime, the coordinate time t is related to the object's proper time
according to
In geometric units (c = 1) the quantity in the square brackets is just v2. Substituting back
into our energy definition, we have
Notice that this is identical to what we previously called the inertial mass, but now we see
that it represents the total energy of the particle. The first term on the right side is simply
m (or mc2 in normal units), so we interpret this as the rest energy (and also the rest mass)
of the object. This is sometimes presented as a derivation of mass-energy equivalence,
but at best it's really just a suggestive heuristic argument. The key step in this
"derivation" was when we blithely decided to call p0 the "energy" of the object. Strictly
speaking, we violated our "correspondence principle" by making this definition, because
by correspondence with the low-velocity limit, the energy E of a particle should be
something like (1/2)mv2, and clearly p0 does not reduce to this in the low-speed limit.
Nevertheless, we defined p0 as the "energy" E, and since that component equals m when
v = 0, we essentially just defined our result E = m (or E = mc2 in ordinary units) for a
mass at rest. From this reasoning it isn't clear that this is anything more than a
bookkeeping convention, one that could just as well be applied in classical mechanics
using some arbitrary squared velocity to convert from units of mass to units of energy.
The assertion of physical equivalence between inertial mass and energy has significance
only if it is actually possible for the entire mass of an object, including its rest mass, to
manifestly exhibit the qualities of energy. Lacking this, the only equivalence between
inertial mass and energy that special relativity strictly entails is the "extra" inertia that
bodies exhibit when they acquire kinetic energy (either by being subjected to a
mechanical force or by absorbing radiative energy).
As mentioned above, even the fact that nuclear reactors give off huge amounts of energy
does not really substantiate the complete equivalence of energy and inertial mass,
because the energy given off in such reactions represents just the binding energy holding
the nucleons (protons and neutrons) together. The binding energy is the amount of energy
required to pull a nucleus apart. (The terminology is slightly inapt, because a configuration
with high binding energy is actually a low energy configuration, and vice versa.) Of
course, protons are all positively charged, so they repel each other by the Coulomb force,
but at very small distances the strong nuclear force binds them together. Since each
nucleon is attracted to every other nucleon, we might expect the total binding energy of a
nucleus comprised of N nucleons to be proportional to N(N-1)/2, which would imply that
the binding energy per nucleon would increase linearly with N. However, saturation
effects cause the binding energy per nucleon to reach a maximum for nuclei with N ≈
60 (e.g., iron), then to decrease slightly as N increases further. As a result, if an atom with
(say) N = 230 is split into two atoms, each with N=115, the total binding energy per
nucleon is increased, which means the resulting configuration is in a lower energy state
than the original configuration. In such circumstances, the two small atoms have slightly
less total rest mass than the original large atom, but at the instant of the split the overall
"mass-like" quality is conserved, because those two smaller atoms have enormous
velocities, precisely such that the total relativistic mass is conserved. (This physical
conservation is the main reason the old concept of relativistic mass has never been
completely discarded.) If we then slow down those two smaller atoms by absorbing their
energy, we end up with two atoms at rest, at which point a little bit of apparent rest mass
has disappeared from the universe. On the other hand, it is also possible to fuse two light
nuclei (e.g., N = 2) together to give a larger atom with more binding energy, in which
case the rest mass of the resulting atom is less than the combined rest masses of the two
original atoms. In either case (fission or fusion), a net reduction in rest mass occurs,
accompanied by the appearance of an equivalent amount of kinetic energy and radiation.
(The actual detailed mechanism by which binding energy, originally a "rest property"
with isotropic inertia, becomes a kinetic property representing what we may call
relativistic mass with anisotropic inertia, is not well understood.)
It may appear that equation (3) fails to account for the energy of light, because it gives E
proportional to the rest mass m, which is zero for a photon. However, the denominator of
(3) is also zero for a photon (because v = 1), so we need to evaluate the expression in the
limit as m goes to zero and v goes to 1. We know from the study of electro-magnetic
radiation that although a photon has no rest mass, it does (according to Maxwell's
equations) have momentum, equal to |p| = E (or E/c in conventional units). This suggests
that we try to isolate the momentum component from the rest mass component of the
energy. To do this, we square equation (2) and expand the simple geometric series as
Excluding the first term, which is purely rest mass, all the remaining terms are divisible
by (mv)2, so we can write this as
The right-most term is simply the squared magnitude of the momentum, so we have the
apparently fundamental relation
consistent with our premise that the energy E (or E/c in conventional units) equals the magnitude
of the momentum |p| for a photon. Of course, electromagnetic waves are classically
regarded as linear, meaning that photons don't ordinarily interfere with each other
(directly). As Dirac said, "each photon interferes only with itself... interference between
two different photons never occurs". However, the non-linear field equations of general
relativity enable photons to interact gravitationally with each other. Wheeler coined the
word "geon" to denote a swarm of massless particles bound together by the gravitational
field associated with their energy, although he noted that such a configuration would be
inherently unstable, viz., it would very rapidly either dissipate or shrink into complete
gravitational collapse. Also, it's not clear that any physically realistic situation would lead
to such a configuration in the first place, since it would require concentrating an amount
of electromagnetic energy equivalent to the mass m within a radius of about r = Gm/c2.
For example, to make a geon from the energy equivalent of one electron, it would be
necessary to concentrate that energy within a radius of about (6.7)×10⁻⁵⁸ meters.
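That figure is easy to reproduce; the sketch below uses approximate values of G, c, and the electron mass (assumed here, not taken from the text).

# Gravitational radius r = G*m/c^2 for an amount of energy equal to one electron mass.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (approximate)
c = 2.998e8          # speed of light, m/s
m_e = 9.109e-31      # electron rest mass, kg

r = G * m_e / c**2
print(f"r = {r:.2e} m")      # roughly 6.7e-58 m, as quoted above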
An interesting alternative approach to deducing (4) is based directly on the Minkowski line element.
This is applicable both to massive timelike particles and to light. In the case of light we
know that the proper time dτ and the rest mass m are both zero, but we may postulate that
the ratio m/dτ remains meaningful even when m and dτ individually vanish. Multiplying
both sides of the Minkowski line element by the square of this ratio gives immediately
The first term on the right side is E2 and the remaining three terms are px2, py2, and pz2, so
this equation can be written as
Hence this expression is nothing but the Minkowski spacetime metric multiplied through
by (m/dτ)2, as illustrated in the figure below.
The kinetic energy of the particle with rest mass m along the indicated worldline is
represented in this figure by the portion of the total energy E in excess of the rest energy.
Returning to the question of how mass and energy can be regarded as different
expressions of the same thing, recall that the energy of a particle with rest mass m0 and
speed V is m0/(1 − V2)1/2. We can also determine the energy of a particle whose motion is
defined as the composition of two orthogonal speeds. Let t,x,y,z denote the inertial
coordinates of system S, and let T,X,Y,Z denote the (aligned) inertial coordinates of
system S'. In S the particle is moving with speed vy in the positive y direction so its
coordinates are
The Lorentz transformation for a coordinate system S' whose spatial origin is moving
with the speed v in the positive x (and X) direction with respect to system S is
so the coordinates of the particle with respect to the S' system are
The first of these equations implies t = T(1 − vx2)1/2, so we can substitute for t in the
expressions for X and Y to give
The total squared speed V2 with respect to these coordinates is given by
Subtracting 1 from both sides and factoring the right hand side, this relativistic
composition rule for orthogonal speeds vx and vy can be written in the form
It follows that the total energy (neglecting stress and other forms of potential energy) of a
ring of matter with a rest mass m0 spinning with an intrinsic circumferential speed u and
translating with a speed v in the axial direction is
A similar argument applies to translatory motions of the ring in any direction, not just the
axial direction. For example, consider motions in the plane of the ring, and focus on the
contributions of two diametrically opposed particles (each of rest mass m0/2) on the ring,
as illustrated below.
If the circumferential motion of the two particles happens to be perpendicular to the
translatory motion of the ring, as shown in the left-hand figure, then the preceding
formula for E is applicable, and represents the total energy of the two particles. If, on the
other hand, the circumferential motion of the two particles is parallel to the motion of the
ring's center, as shown in the right-hand figure, then the two particles have the speeds
(v+u)/(1+vu) and (v − u)/(1 − vu) respectively, so the combined total energy (i.e., the
relativistic mass) of the two particles is given by the sum
Thus each pair of diametrically opposed particles with equal and opposite intrinsic
motions parallel to the extrinsic translatory motion contribute the same total amount of
energy as if their intrinsic motions were both perpendicular to the extrinsic motion. Every
bound system of particles can be decomposed into pairs of particles with equal and
opposite intrinsic motions, and these motions are either parallel or perpendicular or some
combination relative to the extrinsic motion of the system, so the preceding analysis
shows that the relativistic mass of the bound system of particles is isotropic, and the
system behaves just like an object whose rest mass equals the sum of the intrinsic
relativistic masses of the constituent particles. (Note again that we are not considering
internal stresses and other kinds of potential energy.)
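This decomposition is easy to verify numerically. The sketch below (units with c = 1; the speeds u and v are arbitrary) takes two particles of rest mass m0/2 with intrinsic speeds ±u parallel to a translation at speed v, composes the speeds relativistically as described above, and compares the summed energies with m0/[(1 − u2)1/2(1 − v2)1/2], the value assumed for the perpendicular case.

# Two diametrically opposed particles (rest mass m0/2 each) with intrinsic speeds +/-u
# parallel to a translation at speed v: their total energy equals the perpendicular-case
# value m0 / (sqrt(1-u^2)*sqrt(1-v^2)).  Units with c = 1.
from math import sqrt

def energy(rest_mass, speed):
    return rest_mass / sqrt(1.0 - speed**2)

m0, u, v = 1.0, 0.6, 0.8
w_forward = (v + u) / (1 + v * u)    # intrinsic motion parallel to the translation
w_backward = (v - u) / (1 - v * u)   # intrinsic motion anti-parallel

parallel_total = energy(m0 / 2, w_forward) + energy(m0 / 2, w_backward)
perpendicular_total = m0 / (sqrt(1 - u**2) * sqrt(1 - v**2))

print(parallel_total, perpendicular_total)    # the two totals agree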
This nicely illustrates how, if the spinning ring was mounted inside a box, we would
simply regard the angular kinetic energy of the ring as part of the rest mass M0 of the box
with speed v, i.e.,
where the "rest mass" of the box is now explicitly dependent on its energy content. This
naturally leads to the idea that each original particle might also be regarded as a "box"
whose contents are in an excited energy state via some kinetic mode (possibly rotational),
and so the "rest mass" m0 of the particle is actually just the relativistic mass of a lesser
amount of "true" rest mass, leading to an infinite regress, and the idea that perhaps all
matter is really some form of energy.
But does it really make sense to imagine that all the mass (i.e., inertial resistance) is
really just energy, and that there is no irreducible rest mass at all? If there is no original
kernel of irreducible matter, then what ultimately possesses the energy? To picture how
an aggregate of massless energy can have non-zero rest mass, first consider two identical
massive particles connected by a massless spring, as illustrated below.
Suppose these particles are oscillating in a simple harmonic motion about their common
center of mass, alternately expanding and compressing the spring. The total energy of the
system is conserved, but part of the energy oscillates between kinetic energy of the
moving particles and potential (stress) energy of the spring. At the point in the cycle
when the spring has no tension, the speed of the particles (relative to their common center
of mass) is a maximum. At this point the particles have equal and opposite speeds +u and
-u, and we've seen that the combined rest mass of this configuration (corresponding to the
amount of energy required to accelerate it to a given speed v) is m0/(1 − u2)1/2. At other
points in the cycle, the particles are at rest with respect to their common center of mass,
but the total amount of energy in the system with respect to any given inertial frame is
constant, so the effective rest mass of the configuration is constant over the entire cycle.
Since the combined rest mass of the two particles themselves (at this point in the cycle) is
just m0, the additional rest mass to bring the total configuration up to m0/(1 − u2)1/2 must be
contributed by the stress energy stored in the "massless" spring. This is one example of a
massless entity acquiring rest mass by virtue of its stored energy.
Recall that the energy-momentum vector of a particle is defined as [E, px, py, pz] where E
is the total energy and px, py, pz are the components of the momentum, all with respect to
some fixed system of inertial coordinates t,x,y,z. The rest mass m0 of the particle is then
defined as the Minkowskian "norm" of the energy-momentum vector, i.e.,
If the particle has rest mass m0, then the components of its energy-momentum vector are
If the object is moving with speed u, then dt/dτ = 1/(1 − u2)1/2, so the energy
component is equal to the transverse relativistic mass. The rest mass of a configuration of
arbitrarily moving particles is simply the norm of the sum of their individual energy-momentum
vectors. The energy-momentum vectors of two particles with individual rest
masses m0 moving with speeds dx/dt = u and dx/dt = −u are [γm0, γm0u, 0, 0] and
[γm0, −γm0u, 0, 0] (where γ = 1/(1 − u2)1/2), so the sum is [2γm0, 0, 0, 0], which has the norm 2γm0. This is
consistent with the previous result, i.e., the rest mass of two particles in equal and
opposite motion about the center of the configuration is simply the sum of their
(transverse) relativistic masses, i.e., the sum of their energies.
A photon has no rest mass, which implies that the Minkowskian norm of its energy-momentum vector is zero. However, it does not follow that the components of its energy-momentum vector are all zero, because the Minkowskian norm is not positive-definite.
For a photon we have E2 − px2 − py2 − pz2 = 0 (where E = hν), so the energy-momentum
vectors of two photons, one moving in the positive x direction and the other moving in
the negative x direction, are of the form [E, E, 0, 0] and [E, −E, 0, 0] respectively. The
Minkowski norms of each of these vectors individually are zero, but the sum of these two
vectors is [2E, 0, 0, 0], which has a Minkowski norm of 2E. This shows that the rest mass
of two identical photons moving in opposite directions is m0 = 2E = 2hν, even though the
individual photons have no rest mass.
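The same bookkeeping can be written out explicitly. The sketch below (units with c = 1; the photon energy E is arbitrary) forms the energy-momentum vectors of two oppositely directed photons, and evaluates the Minkowski norm of each vector and of their sum.

# Minkowski "norm" sqrt(E^2 - px^2 - py^2 - pz^2) of energy-momentum vectors (c = 1),
# illustrating that two oppositely directed photons have a combined rest mass of 2E.
from math import sqrt

def minkowski_norm(p):
    E, px, py, pz = p
    return sqrt(E**2 - px**2 - py**2 - pz**2)

E = 3.0                                  # arbitrary photon energy
photon_right = (E, E, 0.0, 0.0)
photon_left = (E, -E, 0.0, 0.0)
total = tuple(a + b for a, b in zip(photon_right, photon_left))

print(minkowski_norm(photon_right))      # 0.0: each photon is individually massless
print(minkowski_norm(photon_left))       # 0.0
print(minkowski_norm(total))             # 2E: the pair has rest mass 2E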
If we could imagine a means of binding the two photons together, like the two particles
attached to the massless spring, then we could conceive of a bound system with positive
rest mass whose constituents have no rest mass. As mentioned previously, in normal
circumstances photons do not interact with each other (i.e., they can be superimposed
without affecting each other), but we can, in principle, imagine photons bound together
by the gravitational field of their energy (geons). The ability of electrons and anti-electrons (positrons) to completely annihilate each other in a release of energy suggests
that these actual massive particles are also, in some sense, bound states of pure energy,
but the mechanisms or processes that hold an electron together, and that determine its
characteristic mass, charge, etc., are not known.
It's worth noting that the definition of "rest mass" is somewhat context-dependent when
applied to complex accelerating configurations of entities, because the momentum of
such entities depends on the space and time scales on which they are evaluated. For
example, we may ask whether the rest mass of a spinning disk should include the kinetic
energy associated with its spin. For another example, if the Earth is considered over just a
small portion of its orbit around the Sun, we can say that it has linear momentum (with
respect to the Sun's inertial rest frame), so the energy of its circumferential motion is
excluded from the definition of its rest mass. However, if the Earth is considered as a
bound particle during many complete orbits around the Sun, it has no net momentum
with respect to the Sun's frame, and in this context the Earth's orbital kinetic energy is
included in its "rest mass".
Similarly the atoms comprising a "stationary" block of lead are not microscopically
stationary, but in the aggregate, averaged over the characteristic time scale of the mean
free oscillation time of the atoms, the block is stationary, and is treated as such. The
temperature of the lead actually represents changes in the states of motion of the
constituent particles, but over a suitable length of time the particles are still stationary.
We can continue to smaller scales, down to sub-atomic particles comprising individual
atoms, and we find that the position and momentum of a particle cannot even be precisely
stipulated simultaneously. In each case we must choose a context in order to apply the
definition of rest mass. In general, physical entities possess multiple modes of excitation
(kinetic energy), and some of these modes we may choose (or be forced) to absorb into
the definition of the object's "rest mass", because they do not vanish with respect to any
inertial reference frame, whereas other modes we may choose (and be able) to exclude
from the "rest mass". In order to assess the momentum of complex physical entities in
various states of excitation, we must first decide how finely to decompose the entities,
and the time intervals over which to make the assessment. The "rest mass" of an entity
invariably includes some of what would be called energy or "relativistic mass" if we were
working on a lower level of detail.
2.4 Doppler Shift for Sound and Light
I was much further out than you thought
And not waving but drowning.
Stevie Smith, 1957
For historical reasons, some older text books present two different versions of the
Doppler shift equations, one for acoustic phenomena based on traditional Newtonian
kinematics, and another for optical and electromagnetic phenomena based on relativistic
kinematics. This sometimes gives the impression that relativity requires us to apply a
different set of kinematical rules to the propagation of sound than to the propagation of
light, but of course that is not the case. The kinematics of relativity apply uniformly to
the propagation of all kinds of signals, provided we give the exact formulae. The
traditional acoustic formulas are inexact, tacitly based on Newtonian approximations, but
when they are expressed exactly we find that they are perfectly consistent with the
relativistic formulas.
Consider a frame of reference in which the medium of signal propagation is assumed to
be at rest, and suppose an emitter and absorber are located on the x axis, with the emitter
moving to the left at a speed of ve and the absorber moving to the right, directly away
from the emitter, at a speed of va. Let cs denote the speed at which the signal propagates
with respect to the medium. Then, according to the classical (non-relativistic) treatment,
the Doppler frequency shift is
(It's assumed here that va and ve are less than cs, because otherwise there may be shock
waves and/or lack of communication between transmitter and receiver, in which case the
Doppler effect does not apply.) The above formula is often quoted as the Doppler effect
for sound, and then another formula is given for light, suggesting that relativity arbitrarily
treats sound and light signals differently. In truth, relativity has just a single formula for
the Doppler shift, which applies equally to both sound and light. This formula can
basically be read directly off the spacetime diagram shown below
If an emitter on worldline OA turns a signal ON at event O and OFF at event A, the
proper duration of the signal is the magnitude of OA, and if the signal propagates with
the speed of the worldline AB, then the proper duration of the pulse for a receiver on OB
will equal the magnitude of OB. Thus we have
Substituting xA = −vetA and xB = vatB into the equation for cs and re-arranging terms gives
from which we get
Substituting this into the ratio of |OA| / |OB| gives the ratio of proper times for the signal,
which is the inverse of the ratio of frequencies:
Now, if va and ve are both small compared to c, it's clear that the relativistic correction
factor (the square root quantity) will be indistinguishable from unity, and we can simply
use the leading factor, which is the classical Doppler formula for both sound and light.
However, if va and/or ve are fairly large (i.e., on the same order as c) we can't neglect the
relativistic correction.
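A small numerical sketch makes the size of the correction explicit (units with c = 1, and assuming the combined expression has the form (cs − va)/(cs + ve) multiplied by the factor [(1 − ve2)/(1 − va2)]1/2, consistent with the leading factor and square-root correction described above).

# Doppler frequency ratio for an emitter receding at ve and an absorber receding at va
# (both measured relative to the medium), with signal speed cs, in units where c = 1.
from math import sqrt

def classical(va, ve, cs):
    return (cs - va) / (cs + ve)                 # the classical leading factor

def full(va, ve, cs):
    return classical(va, ve, cs) * sqrt((1 - ve**2) / (1 - va**2))

cs = 1.0                                         # e.g. a signal propagating at light speed
for va, ve in ((0.001, 0.002), (0.3, 0.2), (0.8, 0.1)):
    print(f"va = {va}, ve = {ve}:  classical = {classical(va, ve, cs):.6f},  "
          f"with correction = {full(va, ve, cs):.6f}")
# At low speeds the two values agree to many digits; at speeds comparable to c they differ.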
It may seem surprising that the formula for sound waves in a fixed medium with absolute
speeds for the emitter and absorber is also applicable to light, but notice that as the signal
propagation speed cs goes to c, the above Doppler formula smoothly evolves into
which is very nice, because we immediately recognize the quantity inside the square root
as the multiplicative form of the relativistic composition law for velocities (discussed in
section 1.8). In other words, letting u denote the composition of the speeds va and ve
given by the formula
it follows that
Consequently, as cs increases to c, the absolute speeds ve and va of the emitter and
absorber relative to the fixed medium merge into a single relative speed u between the
emitter and absorber, independent of any reference to a fixed medium, and we arrive at
the relativistic Doppler formula for waves propagating at c for an emitter and absorber
with a relative velocity of u:
To clarify the relation between the classical and relativistic Doppler shift equations, recall
that for a classical treatment of a wave with characteristic speed cs in a material medium
the Doppler frequency shift depends on whether the emitter or the absorber is moving
relative to the fixed medium. If the absorber is stationary and the emitter is receding at a
speed of v (normalized so cs = 1), then the frequency shift is given by
whereas if the emitter is stationary and the absorber is receding the frequency shift is
To the first order these are the same, but they obviously differ significantly if v is close to
1. In contrast, the relativistic Doppler shift for light, with cs = c, does not distinguish
between emitter and absorber motion, but simply predicts a frequency shift equal to the
geometric mean of the two classical formulas, i.e.,
Naturally to first order this is the same as the classical Doppler formulas, but it differs
from both of them in the second order, so we should be able to check for this difference,
provided we can arrange for emitters and/or absorbers to be moving with significant
speeds. The Doppler effect has in fact been tested at speeds high enough to distinguish
between these two formulas. The possibility of such a test, based on observing the
Doppler shift for “canal rays” emitted from high-speed ions, had been considered by
Stark in 1906, and Einstein published a short paper in 1907 deriving the relativistic
prediction for such an experiment. However, it wasn’t until 1938 that the experiment was
actually performed with enough precision to discern the second order effect. In that year,
Ives and Stilwell shot hydrogen atoms down a tube, with velocities (relative to the lab)
ranging from about 0.8 to 1.3 times 10⁶ m/sec. As the hydrogen atoms were in flight they
emitted light in all directions. Looking into the end of the tube (with the atoms coming
toward them), Ives and Stilwell measured a prominent characteristic spectral line in the
light coming forward from the hydrogen. This characteristic frequency was Doppler
shifted toward the blue by some amount dapproach because the source was approaching
them. They also placed a mirror at the opposite end of the tube, behind the hydrogen
atoms, so they could look at the same light from behind, i.e., as the source was effectively
moving away from them, red-shifted by some amount drecede. The following is a table of
results from the original 1938 experiment for four different velocities of the hydrogen atoms.
Ironically, although the results of their experiment brilliantly confirmed Einstein’s
prediction based on the special theory of relativity, Ives and Stilwell were not advocates
of relativity, and in fact gave a completely different theoretical model to account for their
experimental results and the deviation from the classical prediction. This illustrates the
fact that the results of an experiment can never uniquely identify the explanation. They
can only split the range of available models into two groups, those that are consistent
with the results and those that aren't. In this case it's clear that any model yielding the
classical prediction is ruled out, while the Lorentz/Einstein model is found to be
consistent with the observed results.
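The size of the effect Ives and Stilwell had to resolve can be estimated with a short calculation. The sketch below assumes a representative emission wavelength of 4861 Å purely for illustration (the measured values from the 1938 experiment are not reproduced here) and computes the first-order blue and red shifts together with the second-order offset of their mean from the unshifted wavelength.

# First- and second-order Doppler wavelength shifts for a source moving at speed v:
#   lambda_approach = lambda0 * sqrt((1-b)/(1+b)),  lambda_recede = lambda0 * sqrt((1+b)/(1-b)),
# with b = v/c.  The mean of the two shifted wavelengths exceeds lambda0 by the purely
# second-order factor 1/sqrt(1-b^2).
from math import sqrt

c = 2.998e8                 # m/s
lambda0 = 4861e-10          # m; a representative visible line, assumed for illustration

for v in (0.8e6, 1.3e6):    # roughly the range of speeds quoted above
    b = v / c
    lam_approach = lambda0 * sqrt((1 - b) / (1 + b))
    lam_recede = lambda0 * sqrt((1 + b) / (1 - b))
    mean_offset = (lam_approach + lam_recede) / 2 - lambda0
    print(f"v = {v:.1e} m/s: first-order shift ~ {lambda0*b*1e10:.1f} A, "
          f"second-order offset of the mean ~ {mean_offset*1e10:.4f} A")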
All the above was based on the assumption that the emitter and absorber are moving
relative to each other directly along their "line of sight". More generally, we can give the
Doppler shift for the case when the (inertial) motions of the emitter and absorber are at
any specified angles relative to the "line of sight". Without loss of generality we can
assume the absorber is stationary at the origin of inertial coordinates and the emitter is
moving at a speed v and at an angle θ relative to the direct line of sight, as illustrated below.
For two pulses of light emitted at coordinate times differing by Δte, the arrival times at the
receiver will differ by Δta = (1 − vr)Δte where vr = v cos(θ) is the radial component of the
emitter’s velocity. Also, the proper time interval along the emitter’s worldline between
the two emissions is Δτe = Δte(1 – v2)1/2. Therefore, since the frequency of the
transmissions with respect to the emitter’s rest frame is proportional to 1/Δτe, and the
frequency of receptions with respect to the absorber’s rest frame is proportional to 1/Δta,
the full frequency shift is
This differs in appearance from the Doppler shift equation given in Einstein’s 1905
paper, but only because, in Einstein’s equation, the angle is evaluated with respect to
the emitter’s rest frame, whereas in our equation the angle is evaluated with respect to the
absorber’s rest frame. These two angles differ because of the effect of aberration. If we
let θ′ denote the angle with respect to the emitter's rest frame, then θ′ is related to θ by the
aberration equation
(See Section 2.5 for a derivation of this expression.) Substituting for cos(θ) into the
previous equation gives Einstein’s equation for the Doppler shift, i.e.,
Naturally for the "linear" cases, when θ = θ′ = 0 or θ = θ′ = π we have
respectively. This highlights the symmetry between emitter and absorber that is so
characteristic of relativistic physics.
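A short numerical sketch of this angular dependence (units with c = 1, assuming the frequency ratio νa/νe = (1 − v2)1/2/(1 − v·cos θ) implied by the derivation above) shows the two longitudinal limits and the purely second-order transverse redshift at θ = 90°.

# Frequency ratio nu_a/nu_e = sqrt(1 - v^2) / (1 - v*cos(theta)) for an emitter moving
# at speed v, with theta measured in the absorber's rest frame (units with c = 1).
from math import sqrt, cos, radians

v = 0.6
for theta_deg in (0, 60, 90, 120, 180):
    ratio = sqrt(1 - v**2) / (1 - v * cos(radians(theta_deg)))
    print(f"theta = {theta_deg:3d} deg:  nu_a/nu_e = {ratio:.4f}")
# theta = 0 gives sqrt((1+v)/(1-v)), theta = 180 gives sqrt((1-v)/(1+v)),
# and theta = 90 gives the transverse redshift sqrt(1 - v^2).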
Even more generally, consider an emitter moving with constant velocity u, an absorber
moving with constant velocity v, and a signal propagating with velocity C in terms of an
inertial coordinate system in which the signal’s speed |C| is independent of direction.
This would apply to a system of coordinates at rest with respect to the medium of the
signal, and it would apply to any inertial coordinate system if the signal is light in a
vacuum. It would also apply to the case of a signal emitted at a fixed speed relative to the
emitter, but only if we take u = 0, because in this case the speed of the signal is
independent of direction only in terms of the rest frame of the emitter. We immediately
have the relation
where re and ra are the position vectors of the emission and absorption events at the
times te and ta respectively. Differentiating both sides with respect to ta and dividing
through by 2(ta − te), and noting that (ra – re)/(ta – te) = C, we get
where u and v are the velocity vectors of the emitter and absorber respectively. Solving
for the ratio dte/dta, we arrive at the relation
Making use of the dot product identity r∙s = |r||s|cos(θ) where θ is the angle between
the r and s vectors, these can be re-written as
The frequency of any process is inversely proportional to the duration of the period, so
the frequency at the absorber relative to the emitter, projected by means of the signal, is
given by νa/νe = dte/dta. Therefore, the above expressions represent the classical Doppler
effect for arbitrarily moving emitter and receiver. However, the elapsed proper time along
a worldline moving with speed v in terms of any given inertial coordinate system differs
from the elapsed coordinate time by the factor (1 − v2/c2)1/2,
where c is the speed of light in vacuum. Consequently, the actual ratio of proper times –
and therefore proper frequencies – for the emitter and absorber is
The leading ratio is the classical Doppler effect, and the square root factor is the
relativistic correction.
2.5 Stellar Aberration
It was chiefly therefore Curiosity that tempted me (being then at Kew,
where the Instrument was fixed) to prepare for observing the Star on
December 17th, when having adjusted the Instrument as usual, I perceived
that it passed a little more Southerly this Day than when it was observed before.
Bradley, 1727
The aberration of starlight was discovered in 1727 by the astronomer James Bradley
while he was searching for evidence of stellar parallax, which in principle ought to be
observable if the Copernican theory of the solar system is correct. He succeeded in
detecting an annual variation in the apparent positions of stars, but the variation was not
consistent with parallax. The observed displacement was greatest for stars in the direction
perpendicular to the orbital plane of the Earth, and most puzzling was the fact that the
displacement was exactly three months (i.e., 90 degrees) out of phase with the effect that
would result from parallax due to the annual change in the Earth’s position in orbit
around the Sun. It was as if he was expecting a sine function, but found instead a cosine
function. Now, the cosine is the derivative of the sine, so this suggests that the effect he
was seeing was not due to changes in the earth’s position, but to changes in the Earth’s
(directional) velocity. Indeed Bradley was able to interpret the observed shift in the
incident angle of starlight relative to the Earth’s frame of reference as being due to the
transverse velocity of the Earth relative to the incoming corpuscles of light, assuming the
latter to be moving with a finite speed c. The velocity of the corpuscles relative to the
Earth equals their velocity vector c with respect to the Sun’s frame of reference plus the
negative of the orbital velocity vector v of the Earth, as shown below.
In this figure, α1 is the apparent elevation of a star above the Earth’s orbital plane when
the Earth’s velocity is most directly toward the star (say, in January), and α2 is the
apparent elevation six months later when the Earth’s velocity is in the opposite direction.
The law of sines gives
Since the aberration angles are quite small, we can closely approximate the sine of each
aberration angle by the angle itself. Therefore, the apparent position of a star that is roughly above the ecliptic ought to
describe a small circle (or ellipse) around its true position, and the “radius” of this path
should be (v/c)sin(α), where α is the star’s elevation above the ecliptic, v is the Earth’s orbital speed, and c is the speed of light.
When Bradley made his discovery he was examining the star γ Draconis, which has a
declination of about 51.5 degrees above the Earth’s equatorial plane, and about 75
degrees above the ecliptic plane. Incidentally, most historical accounts say Bradley chose
this star simply because it passes directly overhead in Greenwich England, the site of his
observatory, which happens to be at about 51.5 degrees latitude. Vertical observations
minimize the effects of atmospheric refraction, but surely this is an incomplete
explanation for choosing γ Draconis, because stars with this same declination range from
28 to 75 degrees above the ecliptic, due to the Earth’s tilt of 23.5 degrees. Was it just a
lucky coincidence that he chose (as Leibniz had previously) γ Draconis, a star with the
maximum possible elevation above the ecliptic among stars that pass directly over
Greenwich? Accidental or not, he focused on nearly the ideal star for detecting
aberration. The orbital speed of the Earth is roughly v = (2.98)×10⁴ m/sec, and the speed of
light is c = (3.0)×10⁸ m/sec, so the magnitude of the aberration for γ Draconis is
(v/c)sin(75°) = (9.59)×10⁻⁵ radians = 19.8 seconds of arc. Bradley subsequently
confirmed the expected aberration for stars at other declinations.
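The arithmetic is easy to reproduce; the sketch below uses the same rounded values quoted above.

# Aberration "radius" (v/c)*sin(elevation) for gamma Draconis, using the rounded values
# quoted above for the Earth's orbital speed and the speed of light.
from math import sin, radians, degrees

v = 2.98e4           # Earth's orbital speed, m/s
c = 3.0e8            # speed of light, m/s
elevation = 75.0     # degrees above the ecliptic

aberration_rad = (v / c) * sin(radians(elevation))
aberration_arcsec = degrees(aberration_rad) * 3600

print(f"{aberration_rad:.3e} rad  =  {aberration_arcsec:.1f} seconds of arc")   # ~19.8"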
Ironically, although it was not the effect Bradley had been seeking, the existence of
stellar aberration was, after all, conclusive observational proof of the Earth’s motion, and
hence of the Copernican theory, which had been his underlying objective. Furthermore,
the discovery of stellar aberration not only provided the first empirical proof of the
Copernican theory, it also furnished a new and independent proof of the finite speed of
light, and even enabled that speed to be estimated from knowledge of the orbital speed of
the Earth. The result was consistent with the earlier estimate of the speed of light by
Roemer based on observations of Jupiter’s moons (see Section 3.3).
Bradley’s interpretation, based on the Newtonian corpuscular concept of light, accounted
quite well for the basic phenomenon of stellar aberration. However, if light consists of
ballistic corpuscles their speeds ought to depend on the relative motion between the
source and observer, and these differences in speed ought to be detectable, whereas no
such differences were found. For example, early in the 19th century Arago compared the
focal length of light from a particular star at six-month intervals, when the Earth’s motion
should alternately add and subtract a velocity component equal to the Earth’s orbital
speed to the speed of light. According to the corpuscle theory, this should result in a
slightly different focal length through the system of lenses, but Arago observed no
difference at all. In another experiment he viewed the aberration of starlight through a
normal lens and through a thick prism with a very different index of refraction, which
ought to give a slightly different aberration angle according to the Newtonian corpuscular
model, but he found no difference. Both these experiments suggest that the speed of light
is independent of the motion of the source, so they tended to support the wave theory of
light, rather than the corpuscular theory.
Unfortunately, the phenomenon of stellar aberration is somewhat problematic for theories
that regard electromagnetic radiation as waves propagating in a luminiferous ether. It’s
worthwhile to examine the situation in some detail, because it is a nice illustration of the
clash between mechanical and electromagnetic phenomena within the context of Galilean
relativity. If we conceive of the light emanating from a distant star reaching the Earth’s
location as a set of essentially parallel streams of particles normal to the Earth’s orbit (as
Bradley did), then we have the situation shown in the left-hand figure below, and if we
apply the Galilean transformation to a system of coordinates moving with the Earth (in
the positive x direction) we get the situation shown in the right-hand figure.
According to this model the aberration arises because each corpuscle has equations of
motion of the form y = -ct and x = x0, so the Galilean transformation x = x’+vt, y = y’, t =
t’ leads to y’ = −ct’ and x’+vt = x0, which gives (after eliminating t) the path x’ – v(y’/c)
= x0. Thus we have dx’/dy’ = v/c = tan(θ), where θ is the aberration angle. In contrast, if we conceive of the light as
essentially a plane wave, the sequence of wave crests is as shown below.
In this case each wavecrest has the equation y = ct, with no x specification, because the
wave is uniform over the entire wavefront. Applying the same Galilean transformation as
before, we get simply y’ = −ct’, so the plane wave looks the same in terms of both
systems of coordinates. We might try to argue that the flow of energy follows definite
streamlines, and if these streamlines are vertical with respect to the unprimed coordinates
they would transform into slanted streamlines in the primed coordinates, but this would
imply that the direction of propagation of the wave energy is not exactly normal to the
wave fronts, in conflict with Maxwell’s equations. This highlights the incompatibility
between Maxwell’s equations and Galilean relativity, because if we regard the primed
coordinates as stationary and the distant star as moving transversely with speed –v, then
the waves reaching the Earth at this moment should have the same form as if they were
emitted from the star when it was to the right of its current position, and therefore the
wave fronts ought to be slanted by an angle of v/c. Of course, we do actually observe
aberration of this amount, so the wave fronts really must be tilted with respect to the
primed coordinates, and we can fairly easily explain this in terms of the wave model, but
the explanation leads to a new complication.
According to the early 19th century wave model with a stationary ether, an observation of
a distant star consists of focusing a set of parallel rays from that star down to a point, and
this necessarily involves some propagation of light in the transverse direction (in order to
bring the incoming rays together). Taking the focal point to be midway between two rays,
and assuming the light propagates transversely at the same speed in both directions, we
will align our optical device normal to the plane wave fronts. However, suppose the
effective speed of light is slightly different in the two transverse directions. If that were
the case, we would need to tilt our optical device, and this would introduce a time skew
in our evaluation of the wave front, because our optical image would associate rays from
different points on the wave front at slightly different times. As a result, what we regard
as the wave front would actually be slanted. The proponents of the wave model argued
that the speed of light is indeed different in the two transverse directions relative to a
telescope on the Earth pointed up at a star, because the Earth is moving sideways
(through the ether) with respect to the incoming rays. Assuming light always propagates
at the fixed speed c relative to the ether, and assuming the Earth is moving at a speed v
relative to the ether, we could argue that the transverse speed of light inside our telescope
is c+v in one direction and c−v in the other. To assess the effect of this asymmetry,
consider for simplicity just two mirror elements of a reflecting telescope, focusing
incoming rays as illustrated below.
The two incoming rays shown in this figure are from the same wavecrest, but they are not
brought into focus at the midpoint of the telescope, due to the (putative) fact that the
telescope is moving sideways through the ether with a speed v. Both pulses strike the
mirrors at the same time, but the left hand pulse goes a distance proportional to c+v in the
time it takes the right hand pulse to go a distance proportional to c−v. In order to bring
the wave crest into focus, we need to increase the path length of the left hand ray by a
distance proportional to v, and decrease the right hand path length by the same distance.
This is done by tilting the telescope through a small angle whose tangent is roughly v/c,
as shown below.
Thus the apparent optical wavefront is tilted by an angle given by tan(θ) = v/c, which is
the same as the aberration angle for the rays, and also in agreement with the corpuscle
model. However, this simple explanation assumes a total vacuum, and it raises questions
about what would happen if the telescope was filled with some material medium such as
air or water. It was already accepted in Fresnel’s day, for both the wave and the corpuscle
models of light, that light propagates more slowly in a dense medium than in vacuum.
Specifically, the speed of light in a medium with index of refraction n is c/n. Hence if we
fill our reflecting telescope with such a medium, then the speed of light in the two
transverse directions would be c/n + v and c/n – v, and the above analysis would lead us
to expect an aberration angle given by tan(θ) = nv/c. The index of refraction of air is just
1.0003, so this doesn’t significantly affect the observed aberration angle for telescopes in
air. However, the index of refraction of water is 1.33, so if we fill a telescope with water,
we ought to observe (according to this theory) significantly more stellar aberration. Such
experiments have actually been carried out, but no effect on the aberration angle is observed.
In 1818 Fresnel suggested a way around this problem. His hypothesis, which he admitted
appeared extraordinary at first sight, was that although the luminiferous ether through
which light propagates is nearly immobile, it is dragged along slightly by material
objects, and the higher the refractive index of the object, the more it drags the ether along
with its motion. If an object with refractive index n moves with speed v relative to the
nominal rest frame of the ether, Fresnel hypothesized that the ether inside the object is
dragged forward at a speed (1 – 1/n2)v. Thus for objects with n = 1 there is no dragging at
all, but for n greater than 1 the ether is pulled along slightly. Fresnel gave a plausibility
argument based on the relation between density and refractivity, making his hypothesis
seem at least slightly less contrived, although it was soon pointed out that since the index
of refraction of a given medium varies with frequency, Fresnel’s model evidently
requires a different ether for each frequency. Neglecting this second-order effect of
chromatic dispersion, Fresnel was able on the basis of his partial dragging hypothesis to
account for the absence of any change in stellar aberration for different media. He
pointed out that, in the above analysis, the speed of light in the two directions has the values c/n + v/n2 and c/n − v/n2.
For the vacuum we have n = 1, and these expressions are the same as before. In the
presence of a material medium with n greater than 1, the optical device must now be
tilted through an angle whose tangent is approximately v/(nc).
It might seem as if Fresnel’s hypothesis has simply resulted in exchanging one problem
for another, but recall that our telescope is aligned normal to the apparent wave front,
whereas it is at an angle of v/c to the normal of the actual wave front, so the wave will be
refracted slightly (assuming n is not equal to 1). According to Snell’s law (which for
small angles is n1θ1 = n2θ2), the refracted angle will be less than the incident angle by the
factor 1/n. Hence we must orient our telescope at an angle of v/c in order for the rays
within the medium to be at the required angle.
This is how, on the basis of somewhat adventuresome hypotheses and assumptions,
physicists of the 19th century were able to account for stellar aberration on the basis of
the wave model of light. (Accommodating the lack of effect of differing indices of
refraction proved to be even more challenging for the corpuscular model.) Fresnel’s
remarkable hypothesis was directly confirmed (many years later) by Fizeau, and it is now
recognized as a first-order approximation of the relativistic velocity addition law,
composing the speed of light in a medium with the speed of the medium
It’s worth noting that all the “speeds” discussed here are phase speeds, corresponding to
the time parameter for a given wave. Lorentz later showed that Fresnel’s formula could
also be interpreted in the context of a perfectly immobile ether along with the assumption
of phase shifts in the incoming wave fronts so that the effective time parameter
transformation was not the Galilean t’ = t but rather t’ = t – vx/c2.
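The relation between Fresnel's coefficient and the relativistic composition of velocities can be checked numerically. The sketch below (the flow speed v is an arbitrary modest value, of the kind used in Fizeau-type experiments) compares c/n + (1 − 1/n2)v with the exact relativistic composition of c/n and v; the two agree to first order in v/c.

# Fresnel's partial dragging: light speed in a medium of index n moving at speed v is
# approximately c/n + (1 - 1/n^2)*v.  Compare with the exact relativistic composition
# of c/n and v, of which Fresnel's formula is the first-order approximation.
c = 2.998e8          # m/s
n = 1.33             # index of refraction of water
v = 10.0             # m/s, an assumed modest flow speed

fresnel = c / n + (1 - 1 / n**2) * v
relativistic = (c / n + v) / (1 + v / (n * c))

print(f"Fresnel:      {fresnel:.6f} m/s")
print(f"Relativistic: {relativistic:.6f} m/s")
print(f"Difference:   {relativistic - fresnel:.3e} m/s")    # second order in v/c, tiny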
Despite the success of Fresnel’s hypothesis in matching all optical observations to the
first order in v/c, many physicists considered his partially dragged ether model to be ad
hoc and unphysical (especially the apparent need for a different ether for each frequency
of light), so they sought other explanations for stellar aberration that would be consistent
with a more mechanistically realistic wave model. As an alternative to Fresnel’s
hypothesis, Lorentz evaluated a proposal of Stokes, who in 1846 had suggested that the
ether is totally dragged along by material bodies (so the ether is co-moving with the body
at the body’s surface), and is irrotational, incompressible, and inviscid, so that it supports
a velocity potential. Under these assumptions it can be shown that the normal of a light
wave incident on the Earth undergoes a total deflection during its approach such that (to
first order) the apparent shift in the star’s position agrees with observation. Unfortunately,
as Lorentz pointed out, the assumptions of Stokes’ theory are mutually contradictory,
because the potential flow field around a sphere does not give zero velocity on the
sphere’s surface. Instead, the velocity of the ether wind on the Earth’s surface would vary
with position, and so too would the aberration of starlight. Planck suggested a way
around this objection by supposing the luminiferous ether was compressible, and
accumulated with greatly increased density around large objects. Lorentz admitted that
this was conceivable, but only if we also assume the speed of light propagating through
the ether is unaffected by the changes in density of the ether, an assumption that plainly
contradicts the behavior of wave propagation in ordinary substances. He concluded
In this branch of physics, in which we can make no progress without some
hypothesis that looks somewhat startling at first sight, we must be careful not to
rashly reject a new idea… yet I dare say that this assumption of an enormously
condensed ether, combined, as it must be, with the hypothesis that the velocity of
light is not in the least altered by it, is not very satisfactory.
With the failure of Stokes’ theory, the only known way of reconciling stellar aberration
with a wave theory of light was Fresnel’s “extraordinary” hypothesis of partial dragging,
or Lorentz’s equivalent interpretation in terms of the effective phase time parameter t’.
However, the Fresnel-Lorentz theory predicted a non-null result for the Michelson-Morley experiment, which was the first experiment accurate to the second order in v/c.
To remedy this, Lorentz ultimately incorporated Fitzgerald’s length contraction into his
theory, which amounts to replacing the Galilean transformation x’ = x − vt with the
relation x’ = (x – vt)/(1 – (v/c)2)1/2, and then for consistency applying this same second-order correction to the time transformation, giving t’ = (t – vx/c2)/(1 – (v/c)2)1/2, thereby
arriving at the full Lorentz transformation. By this point the posited luminiferous ether
had lost all of its mechanistic properties.
Meanwhile, Einstein's 1905 paper on the electrodynamics of moving bodies included a
greatly simplified derivation of the full Lorentz transformation, dispensing with the ether
altogether, and analyzing a variety of phenomena, including stellar aberration, from a
purely kinematical point of view. If a photon is emitted from object A at the origin of the
xyt coordinates and at an angle θ relative to the x axis, then at time t1 it will have reached
the point
(Notice that the units have been scaled to make c = 1, so the Minkowski metric for a null
interval gives x1² + y1² = t1².) Now consider an object B moving in the positive x
direction with velocity v, and being struck by the photon at time t1 as shown below.
Naturally an observer riding along with B will not see the light ray arriving at an angle θ
from the x axis, because according to the system of coordinates co-moving with B the
source object A has moved in the x direction (but not in the y direction) between the
times of transmission and reception of the photon. Since the angle is just the arctangent of
the ratio of y to x of the photon's path, and since the value of x is different with respect to
B's co-moving inertial coordinates whereas y is the same, it's clear that the angle of the
photon's path is different with respect to B's co-moving coordinates than with respect to
A's co-moving coordinates. In general the transformation of the angles of the paths of
moving objects from one system of inertial coordinates to another is called aberration.
To determine the angle of the incoming ray with respect to the co-moving inertial
coordinates of B, let x'y't' be an orthogonal coordinate system aligned with the xyt
coordinates but moving in the positive x direction with velocity v, so that B is at rest in
the primed coordinate system. Without loss of generality we can co-locate the origins of
the primed and unprimed coordinate systems, so in both systems the photon is emitted at
(0,0,0). The endpoint of the photon's path in the primed coordinates can be computed
from the unprimed coordinates using the standard Lorentz transformation for a boost in
the positive x direction:
Just as we have cos(θ) = x1/t1, we also have cos(θ′) = x1′/t1′, and so
which is the general relativistic aberration formula relating the angles of light rays with
respect to relatively moving coordinate systems. Likewise we have sin(θ′) = y1′/t1′, from
which we get
Using these expressions for the sine and cosine of θ′ it follows that
Recalling the trigonometric identity tan(z) = sin(2z)/[1+cos(2z)] this gives
which immediately shows that aberration can be represented by stereographic projection
from a sphere to the tangent plane. (This is discussed more fully in Section 2.6.)
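Below is a minimal numeric sketch of the aberration relation (units with c = 1), assuming the form cos θ′ = (cos θ − v)/(1 − v·cos θ) that follows from the derivation above, together with the equivalent half-angle form tan(θ′/2) = [(1 + v)/(1 − v)]1/2 tan(θ/2) suggested by the stereographic-projection remark.

# Relativistic aberration: angle theta of a light ray in frame A versus theta' in a frame
# moving at speed v along +x (units with c = 1), computed two equivalent ways.
from math import cos, tan, acos, atan, sqrt, degrees, radians

def aberrate(theta, v):
    return acos((cos(theta) - v) / (1 - v * cos(theta)))

def aberrate_half_angle(theta, v):
    return 2 * atan(sqrt((1 + v) / (1 - v)) * tan(theta / 2))

v = 0.5
for theta_deg in (30, 60, 90, 150):
    theta = radians(theta_deg)
    print(theta_deg, round(degrees(aberrate(theta, v)), 2),
          round(degrees(aberrate_half_angle(theta, v)), 2))
# The two computed angles agree, confirming the equivalence of the two forms.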
To see the effect of equation (3), suppose that, with respect to the inertial rest frame of a
given particle, the rays of starlight incident on the particle are uniformly distributed in all
directions. Then suppose the particle is given some speed v in the positive x direction
relative to this original isotropic frame, and we evaluate the angles of incidence of those
same rays of starlight with respect to the particle's new rest frame. The results, for speeds
ranging from 0 to 0.999, are shown in the figure below. (Note that the angles in equation
(3) are evaluated between the positive x or x′ axis and the positive direction of the light
rays' motion.)
The preceding derivation applies to the case when the light is emitted from the unprimed
coordinate system at a certain angle and evaluated with respect to the primed coordinate
system, which is moving relative to the unprimed system. If instead the light was emitted
from B and received at A, we can repeat the above derivation, except that the direction of
the light ray is reversed, going now from B to A. The spatial coordinates are all the same
but the emission event now occurs at -t1, because it is in the past of event (0,0,0). The
result is simply to replace each occurrence of v in the above expressions with -v. Of
course, we could reach the same result simply by transposing the primed and unprimed
angles in the above expressions.
Incidentally, the aberration formula used by astronomers to evaluate the shift in the
apparent positions of stars resulting from the Earth's orbital motion is often expressed in
terms of angles with respect to the y axis (instead of the x axis), as shown below
This configuration corresponds to a distant star at A sending starlight to the Earth at B,
which is moving nearly perpendicular to the incoming ray. This gives the greatest
aberration effect, which explains why the stars furthest from the ecliptic plane experience
the greatest aberration. The formula can be found simply by making the substitution θ =
π/2 − φ (where φ is the angle measured from the y axis) in equation (1), and applying the
identities for the sine and cosine of complementary angles. This gives the equivalent form
tan(φ′) = (sin(φ) − v)/(√(1 − v²) cos(φ)).
Another interesting aspect of aberration is illustrated by considering two separate light
sources S1 and S2, and two momentarily coincident observers A and B as shown below
If observer A is stationary with respect to the sources of light, he will see the incoming
rays of light striking him from the negative x direction. Thus, the light will impart a small
amount of momentum to observer A in the positive x direction. On the other hand,
suppose observer B is moving to the right (away from the sources of light) at nearly the
speed of light. According to our aberration formula, if B is traveling with a sufficiently
great speed, he will see the light from S1 and S2 approaching from the positive x
direction, which means that the photons are imparting momentum to B in the negative x
direction - even though the light sources are "behind" B. This may seem paradoxical, but
the explanation becomes clear when we realize that the x component of the velocities of
the incoming light rays is less than c (because (vx)² = c² − (vy)²), which means that it's
possible for observer B to be moving to the right faster than the incoming photons are
moving to the right.
Of course, this effect relies only on the relative motion of the observer and the source, so
it works just as well if we regard B as motionless and the light sources S1,S2 moving to
the left at near the speed of light. Thus, it might seem that we could use light rays to
"pull" an object from behind, and in a sense this is true. However, since the light rays are
moving to the right more slowly than the object, they clearly cannot catch up with the
object from behind, so they must have been emitted when the object was still to the left of
the sources. This illustrates how careful one must be to correctly account for the effective
aberration of non-uniformly moving objects, because the simple aberration formulas are
based on the assumption that the light source has been in uniform motion for an indefinite
period of time. To correctly describe the aberration of non-uniformly moving light
sources it is necessary to return to the basic metrical relations.
For example, consider a binary star system in which one large central star is roughly
stationary (relative to our Sun), and a smaller companion star is orbiting around the
central star with a large angular velocity in a plane normal to the direction to our Sun, as
illustrated below.
It might seem that the periodic variations in the velocity of the smaller star relative to our
Sun would result in significantly different amounts of aberration as viewed from the
Earth, causing the two components of the binary star system to appear in separate
locations in the sky - which of course is not what is observed. Fortunately, it's easy to
show that the correct application of the principles of special relativity, accounting for the
non-uniform variations in the orbiting star's velocity, leads to predictions that agree
perfectly with observations of binary star systems.
At any moment of observation on Earth we can consider ourselves to be at rest at the
point P0 in the momentarily co-moving inertial frame, with respect to which our
coordinates are x0 = y0 = z0 = 0. Suppose the large central star of a binary pair is at point
P1 at a distance L from the Earth. Since the Earth's orbital speed relative to that star is v
(directed along the x axis, say), the star has the coordinates x1 = −vt, y1 = 0, z1 = L with
respect to this momentarily co-moving frame.
The fundamental assertion of special relativity is that light travels along null paths, so if a
pulse of light is emitted from the star at time t = T and arrives at Earth at time t = 0, we
have (vT)² + L² = T², and so T = −L/√(1 − v²), from which it follows that x1/z1 at time T
is v/√(1 − v²). Thus, for the central star we have the aberration angle arctan(v/√(1 − v²)),
i.e., the angle θ such that sin(θ) = v.
Now, what about the aberration of the other star in the binary pair, the one that is
assumed to be much smaller and revolving at a radius R and angular speed ω around the
larger star in a plane perpendicular to the line of sight from the Earth? The coordinates of
that revolving star at point P2 are x2 = −vt + R cos(φ), y2 = R sin(φ), z2 = L,
where φ = ωt is the angular position of the smaller star in its orbit. Again, since light
travels along null paths, a pulse of light arriving on Earth at time t = 0 was emitted at time
t = T satisfying the relation x2(T)² + y2(T)² + L² = T².
Solving this quadratic for T (and noting that the phase depends entirely on the arbitrary
initial conditions of the orbit) gives
If the radius R of the binary star's orbit is extremely small in comparison with the
distance L from those stars to the Earth, and assuming v is not very close to the speed of
light, then the quantity inside the square root is essentially equal to 1. Therefore, the
tangents of the angles of incidence in the x and y directions are
These expressions make it clear why Einstein emphasized in his 1905 treatment of
aberration that the light source was at infinite distance, i.e., L goes to infinity, so all but
the middle term of the x tangent vanish. Of course, the leading terms in these tangents are
obviously just the inherent "static" angular separation between the two stars viewed from
the Earth, and the last term in the x tangent is completely negligible assuming R/L and/or
v are sufficiently small compared with 1, so the aberration angle is essentially
which of course is the same as the aberration of the central star. Indeed, binary stars have
been carefully studied for over a century, and the aberrations of the components are
consistent with the relativistic predictions for reasonable Keplerian orbits. (Incidentally,
recall that Bradley's original formula for aberration was tan(θ) = v, whereas the
corresponding relativistic equation is sin(θ) = v. The actual aberration angles for stars
seen from Earth are small enough that the sine and tangent are virtually indistinguishable.)
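The closeness of the two formulas for terrestrial observations is easy to quantify. The sketch below (Python, taking the Earth's mean orbital speed as roughly 29.8 km/s) evaluates both; the difference is many orders of magnitude below observational resolution:

    import math

    c = 299792.458           # speed of light, km/s
    v_orbit = 29.8           # Earth's mean orbital speed, km/s (approximate)
    v = v_orbit / c          # speed as a fraction of c

    bradley = math.atan(v)        # Bradley's classical formula, tan(theta) = v
    relativistic = math.asin(v)   # relativistic formula, sin(theta) = v

    arcsec = math.degrees(1.0) * 3600
    print("classical   :", bradley * arcsec, "arcsec")       # about 20.5 arcsec
    print("relativistic:", relativistic * arcsec, "arcsec")
    print("difference  :", (relativistic - bradley) * arcsec, "arcsec")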
The experimental results of Michelson and Morley, based on beams of light pointed in
various directions with respect to the Earth's motion around the Sun, can also be treated
as aberration effects. Let the arm of Michelson's interferometer be of length L, and let it
make an angle with the direction of motion in the rest frame of the arm. We can
establish inertial coordinates t,x,y in this frame, in terms of which the light pulse is
emitted at t1 = 0, x1 = 0, y1 = 0, reflected at t2 = L, x2 = Lcos(), y2 = Lsin(), and arrives
back at the origin at t3 = 2L, x3 = 0, y3 = 0. The Lorentz transformation to a system x',y',t'
moving with velocity v in the x direction is x' = (xvt)/, y' = y, t' = (tvx)/ where 2 =
(1v2), so the coordinates of the three events are x1' = 0, y1' = 0, t1' = 0, and x2' =
L(cos()v)/, y2' = Lsin(), t2' = L[1vcos()]/, and x3' = -2vL/, y3' = 0, t3' = 2L/.
Hence the total elapsed time in the primed coordinates is 2L/. Also, the total spatial
distance traveled is the sum of the outward distance
and the return distance
so the total distance is 2L/, giving a light speed of 1 regardless of the values of v and .
Of course, the angle of the interferometer arm cannot be θ with respect to the primed
coordinates. The tangent of the angle equals the arm's y extent divided by its x extent,
which gives tan(θ) = Lsin(θ)/[Lcos(θ)] in the arm's rest coordinates. In the primed
coordinates the y′ extent of the arm is the same as the y extent, Lsin(θ), but the x′ extent
is εLcos(θ), so the tangent of the arm's angle is tan(θ′) = tan(θ)/ε. However, this should
not be confused with the angle (in the primed coordinates) of the light pulse as it travels
along the arm, because the arm is in motion with respect to the primed coordinates. The
outward direction of motion of the light pulse is given by evaluating the primed
coordinates of the emission and absorption events at x1,y1 and x2,y2 respectively.
Likewise the inward direction of the light pulse is based on the interval from x2,y2 to
x3,y3. These give the tangents of the outward and inward angles, ε sin(θ)/(cos(θ) − v) and
ε sin(θ)/(cos(θ) + v) respectively.
Naturally these are consistent with the result of taking the ratio of equations (1) and (2).
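The bookkeeping in this example can be verified directly. The following Python sketch (c = 1, with arbitrary sample values of L, v and θ, and with ε = √(1 − v²) as above) transforms the three events into the primed frame and confirms that the total path length divided by the total elapsed time is exactly 1:

    import math

    L, v, theta = 1.0, 0.6, math.radians(35.0)   # sample values
    eps = math.sqrt(1 - v*v)

    def boost(t, x, y):
        # Lorentz boost with velocity v in the x direction (c = 1)
        return ((t - v*x)/eps, (x - v*t)/eps, y)

    # emission, reflection, and return events in the arm's rest frame
    events = [(0, 0, 0), (L, L*math.cos(theta), L*math.sin(theta)), (2*L, 0, 0)]
    e1, e2, e3 = [boost(*e) for e in events]

    out_dist = math.hypot(e2[1] - e1[1], e2[2] - e1[2])
    back_dist = math.hypot(e3[1] - e2[1], e3[2] - e2[2])
    total_time = e3[0] - e1[0]
    print((out_dist + back_dist) / total_time)   # prints 1.0 (up to rounding)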
2.6 Mobius Transformations of The Night Sky
So take this night,
Wrap it around me like a sheet.
I know I'm not forgiven
But I need a place to sleep...
Black Lab
Any proper orthochronous Lorentz transformation (including ordinary rotations and
relativistic boosts) can be represented by
X′ = QXQ*          (1)
where Q is the 2×2 matrix of complex coefficients a,b,c,d, X is the Hermitian matrix
[[t+z, x+iy], [x−iy, t−z]] formed from the spacetime coordinates, and Q* is the transposed
conjugate of Q. The coefficients a,b,c,d of Q are allowed to be complex numbers,
normalized so that ad − bc = 1. Writing out the components of QXQ* explicitly gives the
corresponding 4×4 form of the Lorentz transformation (1) acting on the coordinates t,x,y,z.
Two observers at the same point in spacetime but with different orientations and
velocities will "see" incoming light rays arriving from different relative directions with
respect to their own frames of reference, due partly to ordinary rotation, and partly to the
aberration effect described in the previous section. This leads to the remarkable fact that
the combined effect of any proper orthochronous (and homogeneous) Lorentz
transformation on the incidence angles of light rays at a point corresponds precisely to the
effect of a particular linear fractional transformation on the Riemann sphere via ordinary
stereographic projection from the extended complex plane. The latter is illustrated below:
Roger Penrose described this as “the first step of a powerful correspondence between the
spacetime geometry of relativity and the holomorphic geometry of complex spaces”. The
complex number p in the extended complex plane is identified with the point p' on the
unit sphere that is struck by a line from the "North Pole" through p. In this way we can
identify each complex number uniquely with a point on the sphere, and vice versa. (The
North Pole is identified with the "point at infinity" of the extended complex plane, for
completeness.)
Relative to an observer located at the center of the Riemann sphere, each point of the
sphere lies in a certain direction, and these directions can be identified with the directions
of incoming light rays at a point in spacetime. If we apply a Lorentz transformation of
the form (1) to this observer, specified by the four complex coefficients a,b,c,d, the
resulting change in the directions of the incoming rays of light is given exactly by
applying the linear fractional transformation (also known as a Mobius transformation)
w → (aw + b)/(cw + d)
to the points of the extended complex plane. Of course, our normalization ad − bc = 1
implies the two conditions Re(ad − bc) = 1 and Im(ad − bc) = 0,
so of the eight coefficients needed to specify the four complex numbers a,b,c,d, these two
constraints reduce the degrees of freedom to six, which is precisely the number of
degrees of freedom of Lorentz transformations (namely, three velocity components
vx,vy,vz, and three angular specifications for the longitude and latitude of our line of sight
and orientation about that line).
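This correspondence is easy to exhibit numerically. The sketch below (Python with NumPy, assuming the Hermitian-matrix representation X = [[t+z, x+iy], [x−iy, t−z]] described above) applies X′ = QXQ* for a normalized Q and confirms that the Minkowski interval t² − x² − y² − z², which equals det(X), is preserved:

    import numpy as np

    def lorentz_via_spinor(Q, event):
        # event = (t, x, y, z); X is the Hermitian matrix built from the coordinates
        t, x, y, z = event
        X = np.array([[t + z, x + 1j*y], [x - 1j*y, t - z]])
        Xp = Q @ X @ Q.conj().T
        # read the transformed coordinates back off the Hermitian matrix X'
        tp = (Xp[0, 0] + Xp[1, 1]).real / 2
        zp = (Xp[0, 0] - Xp[1, 1]).real / 2
        xp, yp = Xp[0, 1].real, Xp[0, 1].imag
        return tp, xp, yp, zp

    # a sample Q (an arbitrary boost/rotation combination) with ad − bc = 1
    a, b, c = 1.2, 0.3 + 0.1j, 0.2 - 0.1j
    d = (1 + b*c) / a                       # enforce the normalization ad − bc = 1
    Q = np.array([[a, b], [c, d]])

    event = (2.0, 0.5, -1.0, 0.7)
    tp, xp, yp, zp = lorentz_via_spinor(Q, event)
    print(tp**2 - xp**2 - yp**2 - zp**2)    # same value before and after
    print(event[0]**2 - event[1]**2 - event[2]**2 - event[3]**2)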
To illustrate this correspondence, first consider the "identity" Mobius transformation
w → w. In this case we have a = d = 1 and b = c = 0,
so our Lorentz transformation reduces to t' = t, x' = x, y' = y, z' = z as expected. None of
the points move on the complex plane, so none move on the Riemann sphere under
stereographic projection, and nothing changes in the sky's appearance. Now let's consider
the Mobius transformation w → −1/w. In this case we have a = d = 0, b = −1, c = 1,
and so the corresponding Lorentz transformation is
t′ = t,  x′ = −x,  y′ = y,  z′ = −z.
Thus the x and z coordinates have been reflected. This is certainly a proper
orthochronous Lorentz transformation, because the determinant is +1 and the coefficient
of t is positive. But does reflecting the x and z coordinates agree with the stereographic
effect on the Riemann sphere of the transformation w → −1/w? Note that the point w =
r + 0i maps to −1/r + 0i. There's a nice little geometric demonstration that the
stereographic projections of these points have coordinates (x,0,z) and (−x,0,−z)
respectively, noting that the two projection lines have negative inverse slopes and so are
perpendicular in the xz plane, which implies that they must strike the sphere on a
common diameter (by Pythagoras' theorem). A similar analysis shows that points off the
real axis with projected coordinates (x,y,z) in general map to points with projections
(−x,y,−z).
The two examples just covered were both trivial in the sense that they left t unchanged.
For a more interesting example, consider the Mobius transformation w → w + p, which
corresponds to a particular Lorentz transformation combining a boost with a rotation.
If we denote our spacetime coordinates by the column vector X with components x0 = t,
x1 = x, x2 = y, x3 = z, then the transformation can be written as X′ = LX for a certain 4×4
matrix L.
To analyze this transformation it's worthwhile to note that we can decompose any
Lorentz transformation into the product of a simple boost and a simple rotation. For a
given relative velocity with magnitude |v| and components v1, v2, v3, let γ denote the
"boost factor" γ = 1/√(1 − |v|²).
It's clear that the first column of L is determined entirely by γ and the velocity components.
Thus, these four components of L are fixed purely by the boost. The remaining
components depend on the rotational part of the transformation. If we define a "pure
boost" as a Lorentz transformation such that the two frames see each other moving with
velocities (v1,v2,v3) and (−v1,−v2,−v3) respectively, then there is a unique pure boost for
any given relative velocity vector v1,v2,v3. This boost has the components B00 = γ,
B0i = Bi0 = γvi, and Bij = δij + Q vi vj,
where Q = (γ − 1)/|v|². From our expression for L we can identify the components to give
the boost velocity in terms of the Mobius parameter p
From these we write the pure boost part of L as follows
We know that our Lorentz transformation L can be written as the product of this pure
boost B times a pure rotation R, i.e., L = BR, so we can determine the rotation as
R = B⁻¹L, which in this case gives a rotation about the y axis through an angle determined
by the Mobius parameter p.
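The decomposition L = BR can also be carried out numerically. The sketch below (Python with NumPy, using the pure-boost components and sign conventions assumed above, with sample values) builds L from a boost and a rotation, recovers the boost velocity from the first column of L, and extracts the rotation as R = B⁻¹L:

    import numpy as np

    def pure_boost(v):
        # symmetric pure-boost matrix for velocity vector v (units with c = 1)
        v = np.asarray(v, dtype=float)
        g = 1.0 / np.sqrt(1.0 - v @ v)       # the boost factor gamma
        Q = (g - 1.0) / (v @ v)              # Q = (gamma - 1)/|v|^2
        B = np.eye(4)
        B[0, 0] = g
        B[0, 1:] = g * v
        B[1:, 0] = g * v
        B[1:, 1:] = np.eye(3) + Q * np.outer(v, v)
        return B

    def rotation_about_z(angle):
        # ordinary spatial rotation about the z axis, embedded as a 4x4 matrix
        R = np.eye(4)
        R[1, 1] = R[2, 2] = np.cos(angle)
        R[1, 2], R[2, 1] = -np.sin(angle), np.sin(angle)
        return R

    L = pure_boost([0.3, -0.2, 0.5]) @ rotation_about_z(0.7)   # a sample L = B*R

    # the first column of L is gamma, gamma*v1, gamma*v2, gamma*v3
    v_recovered = L[1:, 0] / L[0, 0]
    B_recovered = pure_boost(v_recovered)
    R_recovered = np.linalg.inv(B_recovered) @ L
    print(np.round(R_recovered, 6))          # reproduces the original rotation about z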
The correspondence between the coefficients of the Mobius transformation and the
Lorentz transformation described above assumes stereographic projection from the North
pole to the equatorial plane. More generally, if we're projecting from the North Pole of
the Riemann sphere to a complex plane parallel to (but not necessarily on) the equator,
and if the North Pole is at a height h above the plane, then every point in the plane is a
factor of h further away from the origin than in the case of equatorial projection (h=1), so
the Mobius transformation corresponding to the above Lorentz transformation is
w → (Aw + B)/(Cw + D), with the coefficients A,B,C,D obtained from a,b,c,d by rescaling
in accordance with the factor h.
It's also worth noting that the instantaneous aberration observed by an accelerating
observer does not differ from that observed by a momentarily co-moving inertial
observer. We're referring here to the null (light-like) rays incident on a point of zero
extent, so this is not like a finite spinning body whose outer edges have significant
velocities relative to their centers. We're just referring to different coordinate systems
whose origins coincide at a given point in spacetime, and describing how the light rays
pass through that point in terms of the different coordinate systems at that instant. In this
context the acceleration (or spinning) of the systems makes no difference to the answer.
In other words, as long as our inertial coordinate system has the same velocity and
orientation as the (ideal point-like) observer at the moment of the observation, it doesn't
matter if the observer is in the process of changing his orientation or velocity. (This is a
corollary of the "clock hypothesis" of special relativity, which asserts that a traveler's
time dilation at a given instant depends only on his velocity and not his acceleration at
that instant.)
In general, the effect of the finite Mobius transformation f(z) = (az + b)/(cz + d)
for complex constants a,b,c,d can be classified according to the value of the "squared
trace" σ = (a + d)²/(ad − bc).
We call this the "conjugacy parameter", because two linear fractional transformations are
conjugate if and only if they have the same value of σ. The different kinds of
transformations are listed below:
    elliptic        0 ≤ σ < 4
    parabolic       σ = 4
    hyperbolic      σ > 4
    loxodromic      σ < 0 or σ not real
We note that pure rotations (a special case of elliptic transformations) have the form
f(z) = (az + b)/(−b̄z + ā)
where an overbar denotes complex conjugation.
Iteration of the function f(z) generates the discrete sequence f1(z) = f(z), f2(z) = f(f(z)),
f3(z) = f(f(f(z))), and so on for all fn(z) where n is a positive integer. It's not difficult to
show that these iterates are cyclical with a period m if and only if σ = 4cos(πk/m)² for
some integer k. We can also give an explicit expression for fp(z) where p is any complex
number. This effectively gives us the infinitesimal generator of the finite transformation.
To accomplish this we must (in general) first map the discrete generator f(z) to a domain
in which it has some convenient exponential form, then apply the pth-order
transformation, and then map back to the original domain. There are several cases to
consider, depending on the character of the discrete generator.
In the degenerate case when ad = bc with c ≠ 0, the pth iterate of f(z) is simply the
constant fp(z) = a/c. On the other hand, if c = 0 and a = d ≠ 0, then fp(z) = z + (b/d)p. The
third case is with c = 0 and a ≠ d. The pth iterate of f(z) in this case is
fp(z) = (a/d)^p z + b[(a/d)^p − 1]/(a − d).
Notice that the second and third cases are really linear transformations, since c = 0. The
fourth case is with c ≠ 0 and (a+d)²/(ad − bc) = 4, which leads to the following closed form
expression for the pth iterate
This corresponds to the case when the two fixed points of the Mobius transformation are
co-incident. In this "parabolic" case, if a+d = 0 then the Mobius transformation reduces
to the first case with ad − bc = 0.
Finally, in the most general case we have c ≠ 0 and (a+d)²/(ad − bc) ≠ 4, and the pth iterate
of f(z) is given by
This is the general case with two distinct fixed points. (If a+d = 0 then σ = 0 and K =
−1.) The parameters A and B are the coefficients of the linear transformation that maps the
real line to the locus of points with real part equal to 1/2. Notice that the pth composition
of f satisfies the relation h(fp(z)) = K^p h(z), so we have h(f(z)) = K h(z), which shows
that f(z) is conjugate to the simple function Kz.
Since A+B is the complex conjugate of B, we see that h(z) can be expressed as
This enables us to express the pth composition of any linear fractional transformation
with two fixed points, and therefore any corresponding Lorentz transformation, in the
form fp(z) = h⁻¹(K^p h(z)).
This shows that there is a particular oriented frame of reference (i.e., an orientation as
well as velocity boost) represented by h(z), with respect to which the relation between the
oriented frames z and f(z) is purely exponential.
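The conjugation to the simple map w → Kw can be implemented directly. The following sketch (Python, for a sample transformation with two distinct fixed points; the helper names are ours, not the text's) computes the multiplier K from the fixed points, forms the pth iterate by conjugating z → K^p z, and checks the result against explicit repeated composition:

    import cmath

    def mobius(a, b, c, d):
        return lambda z: (a*z + b)/(c*z + d)

    def iterate_via_multiplier(a, b, c, d, p, z):
        # fixed points are the roots of c z^2 + (d - a) z - b = 0
        disc = cmath.sqrt((d - a)**2 + 4*b*c)
        p1 = ((a - d) + disc) / (2*c)
        p2 = ((a - d) - disc) / (2*c)
        K = (a - c*p1)/(a - c*p2)            # multiplier of the conjugate map w -> K w
        g = (z - p1)/(z - p2)                # conjugating map sending p1 -> 0, p2 -> infinity
        w = K**p * g                         # the p-th iterate acts as simple multiplication
        return (p2*w - p1)/(w - 1)           # map back through the inverse conjugation

    a, b, c, d = 2 + 1j, 1, 1, 1 - 0.5j      # sample loxodromic transformation
    f = mobius(a, b, c, d)

    z = 0.3 + 0.2j
    direct = z
    for _ in range(5):                        # five explicit compositions
        direct = f(direct)
    print(direct)
    print(iterate_via_multiplier(a, b, c, d, 5, z))   # the two results agree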
2.7 The Sagnac Effect
Blind unbelief is sure to err,
And scan his work in vain;
God is his own interpreter,
And he will make it plain.
William Cowper, 1780
If two pulses of light are sent in opposite directions around a stationary circular loop of
radius R, they will travel the same inertial distance at the same speed, so they will
arrive at the end point simultaneously. This is illustrated in the left-hand figure below.
The figure on the right indicates what happens if the loop itself is rotating during this
procedure. The symbol φ denotes the angular displacement of the loop during the time
required for the pulses to travel once around the loop. For any positive value of φ, the
pulse traveling in the same direction as the rotation of the loop must travel a slightly
greater distance than the pulse traveling in the opposite direction. As a result, the
counter-rotating pulse arrives at the "end" point slightly earlier than the co-rotating pulse.
Quantitatively, if we let ω denote the angular speed of the loop, then the circumferential
tangent speed of the end point is v = ωR, and the sum of the speeds of the wave front and
the receiver at the "end" point is c − v in the co-rotating direction and c + v in the
counter-rotating direction. Both pulses begin with an initial separation of 2πR from the end
point, so the difference between the travel times is
Δt = 2πR/(c − v) − 2πR/(c + v) = 4Aω/(c² − v²)
where A = πR² is the area enclosed by the loop. This analysis is perfectly valid in both
the classical and the relativistic contexts. Of course, the result represents the time
difference with respect to the axis-centered inertial frame. A clock attached to the
perimeter of the ring would, according to special relativity, record a lesser time, by the
factor (1 − (v/c)²)^(1/2), so the Sagnac delay with respect to such a clock would be
[4Aω/c²]/(1 − (v/c)²)^(1/2). However, the characteristic frequency of a given light source
co-moving with this clock would be greater, compared to its reduced value in terms of the
axis-centered frame, by precisely the same factor, so the actual phase difference of the
beams arriving at the receiver is invariant. (It's also worth noting that there is no Doppler
shift involved in a Sagnac device, because each successive wave crest in a given direction
travels the same distance from transmitter to receiver, and clocks at those points show the
same lapse of proper time, both classically and in the context of special relativity.)
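For a sense of the magnitudes involved, the following Python sketch (with purely illustrative dimensions for a small multi-turn fiber loop) evaluates the exact time difference 4Aω/(c² − v²) and the usual approximation 4Aω/c²:

    import math

    c = 299792458.0           # speed of light, m/s
    R = 0.05                  # loop radius in meters (illustrative value)
    turns = 1000              # number of fiber windings (illustrative value)
    omega = math.radians(15.0) / 3600    # Earth's rotation rate, rad/s (approximate)

    A = turns * math.pi * R**2            # total enclosed area of all the windings
    v = omega * R                         # tangential speed of the fiber

    exact = 4 * A * omega / (c**2 - v**2)
    approx = 4 * A * omega / c**2
    print(exact, approx)                  # both on the order of 1e-20 s for these values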
This phenomenon applies to any closed loop, not necessarily circular. For example,
suppose a beam of light is split by a half-silvered mirror into two beams, and those beams
are directed in a square path around a set of mirrors in opposite directions as shown below.
Just as in the case of the circular loop, if the apparatus is unaccelerated, the two beams
will travel equal distances around the loop, and arrive at the detector simultaneously and
in phase. However, if the entire device (including source and detector) is rotating, the
beam traveling around the loop in the direction of rotation will have farther to go than the
beam traveling counter to the direction of rotation, because during the period of travel the
mirrors and detector will all move (slightly) toward the counter-rotating beam and away
from the co-rotating beam. Consequently the beams will reach the detector at slightly
different times, and slightly out of phase, producing optical interference "fringes" that can
be observed and measured.
Michelson had proposed constructing such a device in 1904, but did not pursue it at the
time, since he realized it would show only the absolute rotation of the device. The effect
was first demonstrated in 1911 by Harress (unwittingly) and in 1913 by Georges Sagnac,
who published two brief notes in the Comptes Rendus describing his apparatus and
summarizing the results. He wrote
The result of measurements shows that, in ambient space, the light is propagated
with a speed V0, independent of the overall movement of the source of light O and
optical system.
This rules out the ballistic theory of light propagation (as advocated by Ritz in 1909),
according to which the speed of light is the vector sum of the velocity of the source plus a
vector of magnitude c. Ironically, the original Michelson-Morley experiment was
consistent with the ballistic theory, but inconsistent with the naïve ether theory, whereas
the Sagnac effect is consistent with the naïve ether theory but inconsistent with the
ballistic theory. Of course, both results are consistent with fully relativistic theories of
Lorentz and Einstein, since according to both theories light is propagated at a speed
independent of the state of motion of the source.
Because of the incredible precision of interferometric techniques, devices like this are
capable of detecting and measuring extremely small amounts of absolute rotation. One of
the first applications of this phenomenon was an experiment performed by Michelson and
Gale in 1925 to measure the absolute rotation rate of the Earth by means of a rectangular
optical loop 2/5 mile long and 1/5 mile wide. (See below for Michelson’s comments on
this experiment.) More recently, the invention of lasers around 1963 has led to practical
small-scale devices for measuring rotation by exploiting the Sagnac effect. There are two
classes of such devices, namely, ring interferometers and ring lasers. A ring interferometer
typically consists of many windings of fiber optic lines, conducting light (of a fixed
frequency) in opposite directions around a loop, and then recombining them to measure
the phase difference, just as in the original Sagnac apparatus, but with greater efficiency
and sensitivity. A ring laser, on the other hand, consists of a laser cavity in the shape of a
ring, which allows light to circulate in both directions, producing two standing waves
with the same number of nodes in each direction. Since the optical path lengths in the two
directions are different, the resonant frequencies of the two standing waves are also
different. (In practice it is typically necessary to “dither” the ring to prevent phase
locking of the two modes.) The “beat” between the two frequencies is measured, giving a
result proportional to the rotation rate of the device. Incidentally, it isn’t necessary for the
actual laser cavity to circumscribe the entire loop; longitudinal pumping can be used,
driven by feedback carried in opposite directions around the loop in ordinary optical
fibers. (Needless to say, the difference in resonant frequency of the two standing waves in a
ring laser due to the different optical path lengths is not to be confused with a Doppler
shift.) Today such devices are routinely used in guidance and navigation systems for
commercial airliners, nautical ships, spacecraft, and in many other applications, and are
capable of detecting rotation rates as slight as 0.00001 degree per hour.
We saw previously that the time delay (and therefore the difference in the optical path
lengths) for a circular loop is proportional to the area enclosed by the loop. This
interesting fact actually applies to arbitrary closed loops. To prove this, we will derive the
difference in arrival times of the two pulses of light for an arbitrary polygonal loop
inscribed in a circle. Let the (inertial) coordinates of two consecutive mirrors separated
by a subtended angle θ be (R cos(ωt), R sin(ωt)) and (R cos(ωt + θ), R sin(ωt + θ)),
where ω is the angular velocity of the device. Since light rays travel along null intervals,
we have c²(dt)² = (dx)² + (dy)², so the coordinate time T required for a light pulse to
travel from one mirror to the next in the forward and reverse directions satisfies the
corresponding null condition in each direction.
Typically ωT is extremely small, i.e., the polygon doesn't rotate through a very large
angle in the time it takes light to go from one mirror to the next, so we can expand these
equations in ωT (up to second order) and collect powers of T to give the quadratic
The two roots of this polynomial are the values of T, one positive and one negative, for
the co-rotating and counter-rotating solutions, so the difference in the absolute times is
the sum of these roots. Hence we have
This is the net contribution of this edge to the total time increment. Recalling that the area
of a regular n-sided polygon of radius R is nR²sin(2π/n)/2, the area of the triangle formed
by the hub and the two mirrors is R²sin(θ)/2. It follows that each edge of an arbitrary
polygonal loop inscribed in a circle contributes 4ωAi/(c² − v²cos(θ)) to the total time
discrepancy, where Ai is the area of the ith triangular slice of the loop and v = ωR is the
tangential speed of the mirrors. Therefore, the total discrepancy in travel times for the
co-rotating and counter-rotating beams around the entire loop is simply
Δt = 4ωA/(c² − v²cos(θ))
where A is the total area enclosed in the loop. This applies to polygons with any number
of sides, including the limiting case of circular fiber-optic loops with virtually infinitely
many edges (where the "mirrors" are simply the inner reflective lining of the fiber-optic
cable), in which case θ goes to zero and the denominator of the phase difference is simply
c² − v². For realistic values of v (i.e., very small compared with c), the phase difference
reduces to the well-known result 4Aω/c². It's worth noting that nothing in this derivation
is unique to special relativity, because the Sagnac effect is a purely "classical" effect. The
apparatus is set up as a differential device, so the relativistic effects apply equally in both
directions, and hence the higher-order corrections of special relativity cancel out of the
phase difference.
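The polygonal derivation can be checked by brute force. The sketch below (Python, c = 1, arbitrary sample values) propagates a pulse from mirror to mirror around a rotating n-sided loop in both directions, solving the null condition for each hop numerically, and compares the resulting difference in travel times with 4ωA/(c² − v²cos(θ)):

    import math

    def travel_time_around(n, R, omega, direction):
        # direction = +1 for the co-rotating pulse, -1 for the counter-rotating pulse
        t = 0.0
        for k in range(n):
            ang0 = direction * 2*math.pi*k/n          # label angle of the current mirror
            ang1 = direction * 2*math.pi*(k + 1)/n    # label angle of the next mirror
            x0 = R*math.cos(ang0 + omega*t); y0 = R*math.sin(ang0 + omega*t)
            # solve distance = (t1 - t) for the arrival time t1 by fixed-point iteration (c = 1)
            t1 = t
            for _ in range(60):
                x1 = R*math.cos(ang1 + omega*t1); y1 = R*math.sin(ang1 + omega*t1)
                t1 = t + math.hypot(x1 - x0, y1 - y0)
            t = t1
        return t

    n, R, omega = 12, 1.0, 0.01          # sample values (c = 1)
    theta = 2*math.pi/n
    v = omega*R
    A = n * R*R*math.sin(theta)/2        # area of the inscribed regular polygon

    dt_sim = travel_time_around(n, R, omega, +1) - travel_time_around(n, R, omega, -1)
    dt_formula = 4*omega*A/(1 - v*v*math.cos(theta))
    print(dt_sim, dt_formula)            # the two values agree closely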
Despite the ease and clarity with which special relativity accounts for the Sagnac effect,
one occasionally sees claims that this effect entails a conflict with the principles of
special relativity. The usual claim is that the Sagnac effect somehow falsifies the
invariance of light speed with respect to all inertial coordinate systems. Of course, it does
no such thing, as is obvious from the fact that the simple description of an arbitrary
Sagnac device given above is based on isotropic light speed with respect to one particular
system of inertial coordinates, and all other inertial coordinate systems are related to this
one by Lorentz transformations, which are defined as the transformations that preserve
light speed. Hence no description of a Sagnac device in terms of any system of inertial
coordinates can possibly entail non-isotropic light speed, nor can any such description
yield physically observable results different from those derived above (which are known
to agree with experiment).
Nevertheless, it remains a seminal tenet of anti-relativityism (for lack of a better term)
that the trivial Sagnac effect somehow "disproves relativity". Those who espouse this
view sometimes claim that the expressions "c+v" and "c−v" appearing in the derivation of
the phase shift are prima facie proof that the speed of light is not c with respect to some
inertial coordinate system. When it is pointed out that those quantities do not refer to the
speed of light, but rather to the sum and difference of the speed of light and the speed of
some other object, both with respect to a single inertial coordinate system, which can be
as great as 2c according to special relativity, the anti-relativityists are undaunted, and
merely proceed to construct progressively more convoluted and specious "objections".
For example, they sometimes argue that each point on the perimeter of a rotating circular
Sagnac device is always instantaneously at rest in some inertial coordinate system, and
according to special relativity the speed of light is precisely c in all directions with
respect to any inertial system of coordinates, so (they argue) the speed of light must be
isotropic at every point around the entire circumference of the loop, and hence the light
pulses must take an equal amount of time to traverse the loop in either direction. Needless
to say, this "reasoning" is invalid, because the pulses of light are never (let alone always)
at the same point in the loop at the same time during their respective trips around the loop
in opposite directions. At any given instant the point of the loop where one pulse is
located is necessarily accelerating with respect to the instantaneous inertial rest frame of
the point on the loop where the other pulse is located (and vice versa). As noted above,
it’s self-evident that since the speed of light is isotropic with respect to at least one
particular frame of reference, and since every other frame is related to that frame by a
transformation that explicitly preserves light speed, no inconsistency with the invariance
of the speed of light can arise.
Having accepted that the observable effects predicted by special relativity for a Sagnac
device are correct and entail no logical inconsistency, the dedicated opponents of special
relativity sometimes resort to claims that there is nevertheless an inconsistency in the
relativistic interpretation of what's really happening locally around the device in certain
extreme circumstances. The fundamental fallacy underlying such claims is the idea that
the beams of light are traveling the same, or at least congruent, inertial paths through
space and time as they proceed from the source to the detector. If this were true, their
inertial speeds would indeed need to differ in order for their arrival times at the detector
to differ. However, the two pulses do not traverse congruent paths from emission to
detector (assuming the device is absolutely rotating). The co-rotating beam is traveling
slightly farther than the counter-rotating beam in the inertial sense, because the detector
is moving away from the former and toward the latter while they are in transit. Naturally
the ratio of optical path lengths is the same with respect to any fixed system of inertial
coordinates.
It’s also obvious that the absolute difference in optical path lengths cannot be
"transformed away", e.g., by analyzing the process with respect to coordinates rigidly
attached to and rotating along with the device. We can, of course, define a system of
coordinates in terms of which the position of a point fixed on the disk is independent of
the time coordinate, but such coordinates are necessarily rotating (accelerating), and
special relativity does not entail invariant or isotropic light speed with respect to
non-inertial coordinates. (In fact, one need only consider the distant stars circumnavigating
the entire galaxy every 24 hours with respect to the Earth's rotating system of reference to
realize that the limiting speed of travel is generally not invariant and isotropic in terms of
accelerating coordinates.) A detailed analysis of a Sagnac device in terms of non-inertial
(i.e., rotating) coordinates is presented in Section 4.8, and discussed from a different
point of view in Section 5.1. For the present, let's confine our attention to inertial
coordinates, and demonstrate how a Sagnac device is described in terms of
instantaneously co-moving inertial frames of an arbitrary point on the perimeter.
Suppose we've sent a sequence of momentary pulses around the loop, at one-second
intervals, in both directions, and we have photo-detectors on each mirror to detect when
they are struck by a co-rotating or counter-rotating pulse. Clearly the pulses will strike
each mirror at one-second intervals from both directions (though not necessarily
synchronized) because if they were arriving more frequently from one direction than
from the other, the secular lag between corresponding pulses would be constantly
increasing, which we know is not the case. So each mirror is receiving one pulse per
second from both directions. Furthermore, a local measurement of light speed performed
(over a sufficiently short period of time) by an observer riding along at a point on the
perimeter will necessarily show the speed of light to be c in all directions with respect to
his instantaneously co-moving inertial coordinates. However, this system of coordinates
is co-moving with only one particular point on the rim. At other points on the rim these
coordinates are not co-moving, and so the speed of light is not c at other points on the rim
with respect to these coordinates.
To describe this in detail, let's first analyze the Sagnac device from the hub-centered
inertial frame. Throughout this discussion we assume an n-sided polygonal loop where n
is very large, so the segment between any two adjacent mirrors subtends only a very
small angle. With respect to the hub-centered frame each segment is moving with a
velocity v parallel to the direction of travel of the light beams, so the situation on each
segment is as plotted below in terms of hub-frame coordinates:
In this drawing, tf is the time required for light to cross this segment in the co-rotating
direction, and tr is the time required for light to cross in the counter-rotating direction.
The difference between these two times, denoted by dt, is the incremental Sagnac effect
for a segment of length dp on the perimeter.
Now, the ratio dt/dp as a function of the rim velocity v can easily be read off this
diagram, and we find that dt/dp = 2v/(c² − v²).
This can be taken as a measure of the anisotropy over an incremental segment with
respect to the hub frame. (Notice that this anisotropy with respect to the conventional
relativistic spacetime decomposition for any inertial frame is actually in the distance
traveled, not the speed of travel.) All the segments are symmetrical in this frame, so they
all have this same anisotropy. Therefore, we can determine the total difference in travel
times for co-rotating and counter-rotating beams of light making a complete trip around
the loop by integrating dt around the perimeter. Thus we have Δt = (2πr)·2v/(c² − v²).
Substituting ωr in place of v in the numerator, and noting that the enclosed area is A =
πr², we again arrive at the result Δt = 4ωA/(c² − v²).
Now let's analyze the loop with respect to one of our tangential frames of reference, i.e.,
an inertial frame that is momentarily co-moving with one of the segments on the rim. If
we examine the situation on that particular segment in terms of its own co-moving
inertial frame we find, not surprisingly, the situation shown below:
This shows that dt/dp = 0, meaning no anisotropy at all. Nevertheless, if the light beams
are allowed to go all the way around the loop, their total travel times will differ by Δt as
computed above, so how does that difference arise with respect to this tangential frame?
Notice that although dt/dp equals zero at this tangent point with respect to the tangent
frame, segments 90 degrees away from this point have the same anisotropy as we found
for all the segments relative to the hub frame, namely, dt/dp = 2v/(c² − v²), because the
velocity of those two segments relative to our tangential frame is exactly v along the
direction of the light rays, just as it was with respect to the hub frame. Furthermore, the
segment 180 degrees away from our tangent segment has twice the anisotropy as it has
with respect to the original hub-frame inertial coordinates, because that segment has a
velocity of 2v with respect to our tangential frame.
In general, the anisotropy dt/dp can be computed for any segment on the loop simply by
determining the projection of that segment's velocity (with respect to our tangential frame)
onto the axis of the light rays. This gives the results illustrated below, showing the ratio
of the tangential frame anisotropy to the hub frame anisotropy:
It's easy to show that dt/dp = 2v[1 − cos(θ)]/(c² − v²),
where θ is the angle relative to the tangent point. To assess the total difference in arrival
times for light rays going around the loop in opposite directions, we need to integrate dt
by dp around the perimeter. Noting that θ equals p/r, we have
Δt = ∫ 2v[1 − cos(p/r)]/(c² − v²) dp,  integrated from p = 0 to p = 2πr,
which again equals 4ωA/(c² − v²), in agreement with the hub frame analysis. Thus,
although the anisotropy is zero at each point on the rim's surface when evaluated with
respect to that point's co-moving inertial frame, we always arrive at the same overall nonzero anisotropy for the entire loop. This was to be expected, because the absolute
physical situation and intervals are the same for all inertial frames. We're simply
decomposing those absolute intervals into space and time components in different ways.
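A quick numerical check of this decomposition (a Python sketch with sample values, c = 1, using the anisotropy profile 2v[1 − cos(θ)]/(c² − v²) described above) integrates dt/dp around the perimeter and compares the total with 4ωA/(c² − v²):

    import math

    r, omega = 1.0, 0.01        # sample rim radius and angular speed (c = 1)
    v = omega * r
    A = math.pi * r * r

    # integrate dt/dp around the perimeter with a simple Riemann sum
    N = 100000
    total = 0.0
    for k in range(N):
        p = (k + 0.5) * (2*math.pi*r / N)          # arc-length position on the rim
        anisotropy = 2*v*(1 - math.cos(p/r)) / (1 - v*v)
        total += anisotropy * (2*math.pi*r / N)

    print(total, 4*omega*A/(1 - v*v))              # the two values agree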
The union of all the "present" time slices of the sequence of instantaneous co-moving
inertial coordinate systems for a point fixed on the rim of a rotating disk, with each time
slice assigned a time coordinate equal to the proper time of the fixed point, constitutes a
coherent and unambiguous coordinate system over a region of spacetime that includes the
entire perimeter of the disk. The general relation for mapping the proper time of one
worldline into another by means of the co-moving planes of simultaneity of the former is
derived at the end of Section 2.9, where it is shown that the derivative of the mapped time
from a point fixed on the rim to a point at the same radius fixed in the hub frame is
positive provided the rim speed is less than c. Of course, for locations further from the
center of rotation the planes of simultaneity of a revolving point fixed on the rim will
become "retrograde", i.e., will backtrack, making the coordinate system ambiguous. This
occurs for locations at a distance greater than 1/a from the hub, where a is the
acceleration of the point fixed on the rim.
It's also worth noting that the amount of angular travel of the device during the time it
takes for one pair of light pulses to circumnavigate a circular loop is directly proportional
to the net "anisotropy" in the travel times. To prove this, note that in a circular Sagnac
device of radius R the beam of light in the direction of rotation travels a distance of (2π +
ωt1)R and the other beam goes a distance of (2π − ωt2)R, where t1 and t2 are the travel
times of the two beams, and ω is the angular velocity of the loop. The travel times of the
beams are just these distances divided by c, so we have t1 = (2π + ωt1)R/c and
t2 = (2π − ωt2)R/c. Solving for the times gives t1 = 2πR/(c − ωR) and t2 = 2πR/(c + ωR),
so the difference in times is Δt = t1 − t2 = 4Aω/(c² − v²),
where A = πR² and v = ωR. The "anisotropic ratio" is the ratio of the travel times,
which is t1/t2 = (c + ωR)/(c − ωR).
Solving this for R gives R = c(t1/t2 − 1)/[ω(t1/t2 + 1)].
Letting φ denote the angular travel of the loop during the travel of the two light beams,
we have φ = ωt1 = 2πωR/(c − ωR).
Substituting for R, this reduces to t1/t2 − 1 = φ/π.
Therefore, the amount by which the ratio of travel times differs from 1 is exactly
proportional to the angle through which the loop rotates during the transit of light, and
this is true independent of R. (Of course, increasing the radius has the effect of increasing
the difference between the travel times, but it doesn't alter the ratio.)
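The relation between the ratio of travel times and the angular travel can be confirmed with a few lines of Python (sample values, c = 1), using the closed-form travel times obtained above:

    import math

    c = 1.0
    R, omega = 3.0, 0.02     # sample radius and angular speed

    # closed-form solutions of t1 = (2*pi + omega*t1)R/c and t2 = (2*pi - omega*t2)R/c
    t1 = 2*math.pi*R / (c - omega*R)     # co-rotating beam
    t2 = 2*math.pi*R / (c + omega*R)     # counter-rotating beam

    ratio = t1 / t2
    phi = omega * t1                     # angular travel of the loop during the transit
    print(ratio - 1, phi / math.pi)      # these agree: ratio - 1 = phi/pi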
It's worth emphasizing that the Sagnac effect is purely a classical, not a relativistic
phenomenon, because it's a "differential device", i.e., by running the light rays around the
loop in opposite directions and measuring the time difference, it effectively cancels out
the "transverse" effects that characterize relativistic phenomena. For example, the length
of each incremental segment around the perimeter is shorter by a factor of [1 − (v/c)²]^(1/2) in
the hub-based frame than in its co-moving tangential frame, but this factor applies in
both directions around the loop, so it doesn't affect the differential time. Likewise a clock
on the perimeter moving at the speed v runs slow, in accord with special relativity, but
the frequency of the light source is correspondingly slow, and this applies equally in both
directions, so this does not affect the phase difference at the receiver. Thus, a pure Sagnac
apparatus does not discriminate between relativistic and pre-relativistic theories (although
it does rule out ballistic theories, à la Ritz). Ironically, this is the main reason it comes up
so often in discussions of relativity, because the effect can easily be computed on a
non-relativistic basis, treating light as a wave propagating in a stationary medium (with
index of refraction equal to 1) at a fixed speed. Of course, if the light traveling around the
loop passes through moving media with indices of refraction differing significantly from
unity, then the Fizeau effect must also be taken into account, and in this case the results,
while again perfectly consistent with special relativity, are quite problematic for any
non-relativistic ether-based interpretation.
As mentioned above, as early as 1904 Michelson had proposed using such a device to
measure the rotation of the earth, but he hadn't pursued the idea, since measurements of
absolute rotation are fairly commonplace (e.g. Foucault's pendulum). Nevertheless, he
(along with Gale) agreed to perform the experiment in 1925 (at considerable cost) at the
urging of "relativists", who wished him to verify the shift of 236/1000 of a fringe
predicted by special relativity. This was intended mainly to refute the ballistic theory of
light propagation, which predicts zero phase shift (for a circular device). Michelson was
not enthusiastic, since classical optics on the assumption of a stationary ether predicted
exactly the same shift as does special relativity (as explained above). He said
We will undertake this, although my conviction is strong that we shall prove only
that the earth rotates on its axis, a conclusion which I think we may be said to be
sure of already.
As Harvey Lemon wrote in his biographical sketch of Michelson, "The experiment,
performed on the prairies west of Chicago, showed a displacement of 230/1000, in very
close agreement with the prediction. The rotation of the Earth received another
independent proof, the theory of relativity another verification. But neither fact had much
significance." Michelson himself wrote that "this result may be considered as an
additional evidence in favor of relativity - or equally as evidence of a stationary ether".
The only significance of the Sagnac effect for special relativity (aside from providing
another refutation of ballistic theories) is that although the effect itself is of the first order
in v/c, the qualitative description of the local conditions on the disk in terms of inertial
coordinates depends on second-order effects. These effects have been confirmed
empirically by, for example, the Michelson-Morley experiment. Considering the Earth as
a particle on a large Sagnac device as it orbits around the Sun, the ether drift experiments
demonstrate these second-order effects, confirming that the speed of light is indeed
invariant with respect to relatively moving systems of inertial coordinates.
2.8 Refraction At A Plane Boundary Between Moving Media
Mathematicians usually consider the Rays of Light to be Lines reaching
from the luminous Body to the Body illuminated, and the refraction of
those Rays to be the bending or breaking of those lines in their passing out
of one Medium into another. And thus may Rays and Refractions be
considered, if Light be propagated in an instant. But by an Argument
taken from the Equations of the times of the Eclipses of Jupiter's Satellites,
it seems that Light is propagated in time, spending in its passage from the
Sun to us about seven Minutes of time: And therefore I have chosen to
define Rays and Refractions in such general terms as may agree to Light
in both cases.
Isaac Newton
(Opticks), 1704
The ray angles θ1 and θ2 for incident and refracted optical rays at a plane boundary
between regions of constant indices of refraction n1 and n2 are related according to
Snell's law n1 sin(θ1) = n2 sin(θ2).
However, this formula applies only if the media (which are assumed to have isotropic
index of refraction with respect to their rest frames) are at rest relative to each other. If
the media are in relative transverse motion, it is necessary to account for the effect of
aberration on the ray angles relative to the rest frames of the respective media. The result
is that the effective refraction is a function of the relative transverse velocity of the
media. Thus, measurements of the optical refraction could (in principle) be used to
determine the velocity of a moving volume of fluid. Unlike Doppler shift measurement
techniques, this approach does not rely on the presence of discrete particles in the fluid,
and involves only measurements of direct, rather than reflected, light signals.
Since the amount of refraction at a boundary depends on the angle of incidence with
respect to the rest frames of the media, it follows that if the media have different rest
frames the simple form of Snell’s law does not apply directly, because it will be
necessary to account for aberration. To derive the law of refraction for transversely
moving media, consider the arrangement shown in Figure 1, drawn with respect to a
system of coordinates (x,y,t) relative to which the medium with refractive index n1 is at rest.
In these coordinates the medium with index n2 is moving transversely with a speed v. By
both Fermat’s principle of “least time” and the principles of quantum electrodynamics,
we know that the path of light from point P0 to point P2 is such that the travel time is
stationary (which, in this case, means minimized), so if we express the total travel time as
a function of the x coordinate of the “corner point” P1, we can differentiate to find the
position that minimizes the time, and from this we can infer the angles of incidence and
refraction.
With respect to the xyt coordinates in which the n1 medium is at rest, the squared spatial
distance from P0 to P1 is x1² + y1², so the time required for light to traverse that distance
is t1 − t0 = n1 √(x1² + y1²).
On the other hand, for the trip from point P1 to point P2 we need to know the distance
traveled with respect to the coordinates x′y′t′ in which the n2 medium is at rest. If we
let Δx, Δy, Δt denote the increments of the coordinates from P1 to P2 with respect to the
unprimed coordinates, then the Lorentz transformation gives us the corresponding
increments in the primed coordinates.
Therefore, the squared spatial and temporal distances from P1 to P2 in the n2 rest
coordinates are given by
Since the ratio of these increments equals the square of the speed of light in the n2
medium, which is 1/n2², we have a quadratic relation for the time increment Δt.
Solving this quadratic for Δt, which equals tC − tB, gives the travel time from P1 to P2.
Differentiating with respect to x1, and noting that d(Δx)/dx1 = −1, we can minimize the
total travel time t2 − t0 by adding the derivatives of Δt and t1 − t0 with respect to x1, and
setting the result to zero. This leads to the condition
Making the appropriate substitutions for the angles of incidence and refraction, we arrive
at the equation for refraction at the plane boundary between transversely moving media.
As expected, this reduces to Snell’s law for stationary media if we set v = 0. Also, if the
moving medium has a refractive index of n2 = 1, this equation again reduces to Snell’s
law, regardless of the velocity, because the concept of speed doesn’t apply to the vacuum.
If we define the parameter
then the refraction equation can be written more compactly as
This can be solved explicitly for sin(θ2), taking the appropriate sign for the square root.
With n1 = 1.2 and n2 = 1.5, the figure below shows the angle of refraction θ2 as a function
of the transverse speed v of the medium with various angles of incidence θ1 ranging from
−3π/8 to +3π/8 radians.
Incidentally, when plotting these lines it is necessary to take the positive root when v is
above the zero-crossing speed, and the negative root when v is below. The zero-crossing
speed (i.e., the speed v when the refracted angle is zero) is
The figure shows that at high relative speeds and high angle of incidence we can achieve
total internal reflection, even though the downstream medium is more dense than the
upstream medium. The critical conditions occur when the squared quantity in parentheses
in the preceding equation reaches 1, which implies
Solving these two quadratics for v (remembering that θ2 is a function of v), we have the
four distinguished speeds
The two speeds given by ±1/n2 (which are just the speeds of light in the moving medium)
generally correspond to removable singularities, because both the numerator and
denominator of the expression for sin(θ2) vanish. At these speeds the values of θ2 can be
assigned continuously by taking the limiting values.
It isn’t clear what, if any, optical effects would appear at these two removable
singularities. The other two distinguished speeds represent the onset of total internal
reflection if their values fall in the range from -1 to +1. For example, the figure above
shows that total internal reflection for an incident angle of θ1 = 3π/8 with n1 = 1.2 and
n2 = 1.5 begins when the speed v exceeds the corresponding critical value.
Notice that for an incidence angle of zero, this speed is simply n2, which is ordinarily
greater than 1, and thus outside the range of achievable speeds (since we assume the
medium itself is moving through a vacuum). However, for non-zero angles of incidence it
is possible for one of these two critical speeds to lie in the achievable range. In fact, for
certain values of n1, n2, and 1, it is possible for all four of the critical speeds to lie within
the achievable range, leading to some interesting phenomena. For example, with n1 = n2 =
2.5 and with 1 = 45 degrees, the refracted angle as a function of medium speed is as
shown below.
In this case the distinguished speeds are -0.4, +0.203, +0.4, and +0.783. This suggests
that as the transverse speed of the medium increases from 0, the refracted ray becomes
steeper until reaching 90 degrees at v = +0.203, at which point there is total internal
reflection. This remains the case until achieving a speed of +0.783, at which point some
refraction is re-introduced, and the refracted angle sweeps back from +90 to about +80
degrees (relative to the stationary frame), and then back to +90 degrees as speed
continues to increase to 1. This can be explained in terms of the variations in the effective
critical angle and the aberration angle. As speed increases, the effective critical angle for
total internal reflection initially increases faster than the aberration angle, pushing the ray
into total internal reflection. However, eventually (at close to the speed of light) the
aberration effect brings the incident ray back into the refractive range.
For an alternative derivation that leads to a different, but equivalent, relation, suppose the
index of refraction of the stationary region is n1 = 1, which implies this region is a
vacuum. If we let d1 denote the spatial distance from P0 to P1 with respect to the rest
frame, then we have (x1, y1, t1) = (d1 sin(θ1), −d1 cos(θ1), d1).
These are the components of the interval P0 to P1 with respect to the rest frame of n1, and
they can be converted to the frame of n2 (denoted by upper case letters) using the Lorentz
transformation X1 = (x1 − v t1)/√(1 − v²), Y1 = y1, T1 = (t1 − v x1)/√(1 − v²).
Letting Θ1 denote the angle θ1 with respect to the moving n2 coordinate system, we can
express the tangent of this angle as tan(Θ1) = X1/(−Y1) = (sin(θ1) − v)/[√(1 − v²) cos(θ1)].
Taking the sine of the inverse tangent of both sides gives the familiar aberration formula
sin(Θ1) = (sin(θ1) − v)/(1 − v sin(θ1)).
Since we are assuming the n1 medium is a vacuum, we are free to treat the entire
configuration as being at rest in the n2 coordinates, with the angle of incidence as defined
above. Therefore, Snell’s law for stationary media can be applied to give the refracted
angle Θ2 relative to these coordinates, sin(Θ2) = sin(Θ1)/n2.
Now, if D2 is the spatial distance from P1 to P2 with respect to the moving coordinates,
we have (X2 − X1, Y2 − Y1, T2 − T1) = (D2 sin(Θ2), −D2 cos(Θ2), n2 D2), since the speed
of light in this medium is 1/n2.
Also, the Lorentz transformation gives the coordinates of points P1 and P2 in the rest
frame in terms of the coordinates in the moving frame as follows:
From these we can construct the tangent of θ2 with respect to the rest coordinates.
Substituting for the coordinate differences gives
We saw previously that
so we can explicitly compute θ2 from θ1. It can be shown that this solution is identical to
the solution (with n1 = 1) derived previously on the basis of Fermat's principle.
Furthermore, we can solve these equations for sin(θ1) as a function of θ2 and then by
equating this sin(θ1) with n3 sin(θ3) for a stationary medium neighboring the vacuum
region, we again have the general solution for two refractive media in relative transverse
motion. A plot of θ2 versus θ1 for various values of v is shown below:
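One way to carry out this procedure numerically is sketched below (Python, c = 1, with the n1 region taken as vacuum and sample values for the other parameters; the function name is ours): the incident ray is aberrated into the rest frame of the moving medium, Snell's law is applied there, and the refracted ray is carried back by relativistic velocity addition:

    import math

    def refract_moving(theta1, n2, v):
        # aberration of the incident ray into the frame of the medium (angles from the normal)
        sin_T1 = (math.sin(theta1) - v) / (1 - v*math.sin(theta1))
        # Snell's law in the medium's rest frame (incident side is vacuum, n1 = 1)
        sin_T2 = sin_T1 / n2
        cos_T2 = math.sqrt(1 - sin_T2**2)
        # refracted ray velocity components in the medium's rest frame (speed 1/n2)
        ux_p, uy_p = sin_T2/n2, -cos_T2/n2
        # relativistic velocity addition back to the original rest frame
        ux = (ux_p + v) / (1 + v*ux_p)
        uy = uy_p*math.sqrt(1 - v*v) / (1 + v*ux_p)
        return math.atan2(ux, -uy)          # refracted angle measured from the normal

    n2 = 1.5
    for v in (0.0, 0.2, 0.5):
        print(v, math.degrees(refract_moving(math.radians(30.0), n2, v)))
    # at v = 0 this reproduces ordinary Snell's law: asin(sin(30 deg)/1.5), about 19.47 deg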
2.9 Accelerated Travels
This yields the following peculiar consequence: If there are two
synchronous clocks, and one of them is moved along a closed curve with
constant [speed] until it has returned, then this clock will lag on its arrival
behind the clock that has not been moved.
Einstein, 1905
Suppose a particle accelerates in such a way that it is subjected to a constant proper
acceleration a0 for some period of time. The proper acceleration of a particle is defined
as the acceleration with respect to the particle's momentarily co-moving inertial
coordinates at any given instant. The particle's velocity is v = 0 at the time t = 0, when it
is located at x = 0, and at some infinitesimal time Δt later its velocity is a0Δt and its
location is (1/2)a0(Δt)². The slope of its line of simultaneity is the inverse of the slope 1/v
of its worldline, so its locus of simultaneity at t = Δt is the line given by
t − Δt = a0Δt [x − (1/2)a0(Δt)²].
This line intersects the particle's original locus of simultaneity at the point (x,0) where
x = −1/a0 + (1/2)a0(Δt)².
At each instant the particle is accelerating relative to its current instantaneous frame of
reference, so in the limit as Δt goes to zero we see that its locus of simultaneity
constantly passes through the point (−1/a0, 0), and it maintains a constant absolute
spacelike distance of 1/a0 from that point, as illustrated in the figure below.
This can be compared to a particle moving with a speed v tangentially to a center of
attraction toward which it is drawn with a constant acceleration a0. The path of such a
particle is a circle in space of radius v2/a0. Likewise in spacetime a particle moving with a
speed c tangentially to a center of "repulsion" with a constant acceleration a0 traces out a
hyperbola with a "radius" of c2/a0. (In this discussion we are using units with c=1, so the
"radius" shown in the above figure is written as 1/a0.)
Since the worldline of a particle with constant proper acceleration is a branch of a
hyperbola with "radius" 1/a0, we can shift the x axis by 1/a0 to place the origin at the
center of the hyperbola, and then write the equation of the worldline as

x2 − t2 = 1/a02

Differentiating both sides with respect to t gives

x (dx/dt) − t = 0

which shows that the velocity of the worldline at any point (x,t) is given by v = t/x.
Consequently the line from the origin through any point on the hyperbolic path represents
the space axis for the co-moving inertial coordinates of the accelerating worldline at that
point. The same applies to any other hyperbolic path asymptotic to the same lightlines, so
a line from the origin intersects any two such hyperbolas at points that are mutually
simultaneous and separated by a constant proper distance (since they are both a fixed
proper distance from the origin along their mutual space axis). It follows that in order for
a slender "rigid" rod accelerating along its axis to maintain a constant proper length (with
respect to its co-moving inertial frames), the parts of the rod must accelerate along a
family of hyperbolas asymptotic to the same lightlines, as illustrated below.
The x',t' axes represent the mutual co-moving inertial frame of the hyperbolic worldlines
where they intersect with the x' axis. All the worldlines have constant proper distances
from each other along this axis, and all have the same speed. The latter implies that they
have each been accelerated by the same total amount at any instant of their mutual
co-moving inertial frame, but the accelerations have been distributed differently. The
"inner-most" worldline (i.e., the trailing end of the rod) has been subjected to a higher level of
instantaneous acceleration but for a shorter time, whereas the "outer-most" worldline (i.e.,
the leading end of the rod) has been accelerated more mildly, but for a longer time. It's
worth noting that this form of "coherent" acceleration would not occur if the rod were
accelerated simply by pushing on one end. It would require the precisely coordinated
application of distinct force profiles to each individual particle of the rod. Any deviation
from these profiles would result in internal stresses of one part of the rod on another, and
hence the rest length would not remain fixed. Furthermore, even if the coherent
acceleration profiles are perfectly applied, there is still a sense in which the rod has not
remained in complete physical equilibrium, because the elapsed proper times along the
different hyperbolic worldlines as the rod is accelerated from a rest state in x,t to a rest
state in some x',t' differ, and hence the quantum phases of the two ends of the rod are
shifted with respect to each other. Thus we must assume memorylessness (as mentioned
in Section 1.6) in order to assert the equivalence of the equilibrium states for two
different frames of reference.
We can then determine the lapse of proper time τ along any given hyperbolic worldline
using the relation (dτ)2 = (dt)2 − (dx)2, which leads (for the hyperbola of unit "radius") to

dτ = dt/√(1 + t2)

Integrating this relation gives

τ = ln(t + √(1 + t2)) = arcsinh(t)

Solving this for t and substituting into the equation of the hyperbola to give x, we have
the parametric equation of the hyperbola as a function of the proper time along the
worldline. If we subtract 1/a0 from x to return to our original x coordinate (such that x =
0 at t = 0) these equations are

x(τ) = (cosh(a0τ) − 1)/a0,    t(τ) = sinh(a0τ)/a0

Differentiating the above expressions gives

dx/dτ = sinh(a0τ),    dt/dτ = cosh(a0τ)

so the particle's velocity relative to the original inertial coordinates is

v = (dx/dτ)/(dt/dτ) = tanh(a0τ)
We're using "time units" throughout this section, which means that all times and distances
are expressed in units of time. For example, if the proper acceleration of the particle is 1g
(the acceleration of gravity at the Earth's surface), then

g = (9.81 m/sec2)/c = (3.27)×10−8 sec−1 = 1.031 years−1

and all distances are in units of light-seconds.
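As a quick numerical check, here is a short Python sketch (an illustration assuming the parametrization derived above, with c = 1 and g taken as 1.031 per year) that tabulates the coordinate time, position, and speed of a 1g traveler at a few values of proper time:

```python
import math

g = 1.031  # proper acceleration of 1g, in years^-1 (units with c = 1)

def hyperbolic_motion(tau, a0=g):
    """Worldline with constant proper acceleration a0, parameterized by
    proper time tau, with x = 0 and v = 0 at t = 0."""
    t = math.sinh(a0 * tau) / a0          # coordinate time, years
    x = (math.cosh(a0 * tau) - 1.0) / a0  # distance, light-years
    v = math.tanh(a0 * tau)               # coordinate velocity; equals t/(x + 1/a0)
    return t, x, v

for tau in (0.5, 1.0, 2.0, 5.0):
    t, x, v = hyperbolic_motion(tau)
    print(f"tau = {tau:3.1f} yr:  t = {t:8.2f} yr,  x = {x:8.2f} ly,  v = {v:.6f}")
```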
To show the implications of these formulas, suppose a space traveler moves away from
the Earth with a constant proper acceleration of 1g for a period of T years as measured on
Earth. He then reverses his acceleration, coming to rest after another T years has passed
on Earth, and then continues his constant Earthward acceleration for another T Earth-years,
at which point he reverses his acceleration again and comes to rest back at the
Earth in another T Earth-years. The total journey is completed in 4T Earth-years, and it
consists of 4 similar hyperbolic segments as illustrated below.
There are several questions we might ask about this journey. First, how far away from
Earth does the traveler reach at his furthest point? This occurs at point C, which is at 2T
according to Earth time, when the traveler's acceleration brings him momentarily to rest
with respect to the Earth. To answer this question, recall that x can be expressed as a
function of t by

x(t) = (√(1 + (gt)2) − 1)/g

Now, the maximum distance from Earth is twice the distance at point B, when t = T, so
we have

xmax = 2(√(1 + (gT)2) − 1)/g

The maximum speed of the traveler in terms of the Earth's inertial coordinates occurs at
point B, where t = T (and again at point D, where t = 3T), and so is given by

vmax = gT/√(1 + (gT)2)

The total elapsed proper time for the traveler during the entire journey out and back,
which takes 4T years according to Earth time, is 4 times the lapse of proper time to point
B at t = T, so it is given by

τ = (4/g) arcsinh(gT) = (4/g) ln(gT + √(1 + (gT)2))
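To make these results concrete, here is a hypothetical worked example in Python (T = 10 Earth-years per segment is an assumed value; the formulas are the ones given above):

```python
import math

g = 1.031   # 1g in years^-1 (c = 1)
T = 10.0    # Earth-time per journey segment, in years (assumed example value)

x_max = 2 * (math.sqrt(1 + (g*T)**2) - 1) / g   # farthest distance, at point C
v_max = g*T / math.sqrt(1 + (g*T)**2)           # speed at points B and D
tau   = (4/g) * math.asinh(g*T)                 # traveler's total proper time

print(f"maximum distance : {x_max:7.2f} light-years")
print(f"maximum speed    : {v_max:.6f} c")
print(f"proper time      : {tau:7.2f} years (vs {4*T:.0f} Earth-years)")
```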
So far we have focused mainly on a description of events in terms of the Earth's inertial
coordinates x and t, but we can also describe the same events in terms of coordinate
systems associated with the accelerating traveler. At any given instant the traveler is
momentarily at rest with respect to a system of inertial coordinates, so we can define
"proper" time and space measurements in terms of these coordinates. However, when we
differentiate these time and space intervals as the traveler progresses along his worldline,
we will find that new effects appear, due to the fact that the coordinate system itself is
changing. As the traveler accelerates he continuously progresses from one system of
momentarily co-moving inertial coordinates to another, and the effect of this change in
the coordinates will show up in any derivatives that we take with respect to the time and
space components.
For example, suppose we ask how fast the Earth is moving relative to the traveler. This
question can be interpreted in different ways. With respect to the traveler's momentarily
co-moving inertial coordinates, the Earth's velocity is equal and opposite to the traveler's
velocity with respect to the Earth's inertial coordinates. However, this quantity does not
equal the derivative of the proper distance with respect to the proper time. The proper
distance s from the Earth in terms of the traveler's momentarily co-moving inertial
coordinates at the proper time τ is

s(τ) = (1 − sech(gτ))/g

which shows that the proper distance approaches a constant 1/g (about 1 light-year) as
τ increases. This shouldn't be surprising, because we've already seen that the traveler's
proper distance from a fixed point on the other side of the Earth actually is constant and
equal to 1/g throughout the period of constant proper acceleration.
The derivative of the proper distance of the Earth with respect to the proper time is

ds/dτ = sech(gτ) tanh(gτ)
This can be regarded as a kind of velocity, since it represents the proper rate of change of
the proper distance from the Earth as the traveler accelerates away. A plot of this function
as τ varies from 0 to 6 years is shown below.
Initially the proper distance from the Earth increases as the traveler accelerates away, but
eventually (if the constant proper acceleration is maintained for a sufficiently long time)
the "length contraction" effect of his increasing velocity becomes great enough to cause
the derivative to drop off to zero as the proper distance approaches a constant 1/g. To find
the point of maximum ds/dτ we differentiate again with respect to τ to give

d2s/dτ2 = g sech(gτ)(sech2(gτ) − tanh2(gτ))

Setting this to zero, we see that the maximum occurs at gτ = arcsinh(1), i.e., at
τ = ln(1 + √2)/g, and substituting this into the expression for ds/dτ gives the maximum
value of 1/2. Thus the derivative of proper distance from Earth with respect to proper
time during a constant 1g acceleration away from the Earth reaches a maximum of half
the speed of light at a proper time of about 0.856 years, after which it drops to zero.
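A short numerical sketch can confirm the location and size of this maximum (assuming the expression for s(τ) given above):

```python
import math

g = 1.031  # 1g in years^-1 (c = 1)

def s(tau):
    # proper distance of Earth in the traveler's co-moving frame
    return (1 - 1/math.cosh(g*tau)) / g

def ds(tau, h=1e-6):
    # central-difference estimate of ds/dtau
    return (s(tau + h) - s(tau - h)) / (2*h)

taus = [i * 0.001 for i in range(1, 3000)]
tau_star = max(taus, key=ds)
print(f"max ds/dtau = {ds(tau_star):.4f} c at tau = {tau_star:.3f} years")
# expected: 0.5 c at tau = ln(1 + sqrt(2))/g, about 0.856 years
```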
Similarly, the traveler's proper distance S from the turnaround point is given by

S(τ) = ((2cosh(gτB) − 1) sech(gτ) − 1)/g

where τB = arcsinh(gT)/g is the traveler's proper time at point B. The derivative of this
with respect to the traveler's proper time is

dS/dτ = −(2cosh(gτB) − 1) sech(gτ) tanh(gτ)

A plot of this "velocity" is shown below for the first quartile leg of a journey as described
above with T = 20 years.
The magnitude of this "velocity" increases rapidly at the start of the acceleration, due to
the combined effects of the traveler's motion and the onset of "length contraction", but if
allowed to continue long enough the "velocity" drops off and approaches 2 (i.e., twice the
speed of light) at the point where the traveler reverses his acceleration. Of course, the fact
that this derivative exceeds c does not conflict with the fact that c is an upper limit on
velocities with respect to inertial coordinate systems, because S and τ do not constitute
inertial coordinates.
To find the extreme point on this curve we differentiate again with respect to τ and set the
result to zero. Consequently we see that the extreme value occurs (assuming the journey is
long enough and the acceleration is great enough) at the proper time τ = ln(1 + √2)/g, the
same proper time at which ds/dτ reaches its maximum, where the magnitude of dS/dτ is
(2cosh(gτB) − 1)/2.
By symmetry, these same two characteristics apply to all four of the "quadrants" of the
traveler's journey, with the appropriate changes of sign and direction. The figure below
shows the proper distances s(t) and S(t) (i.e., the distances from the origin and the
destination respectively) during the first two quadrants of a journey with T = 6.
By symmetry we see that the portions of these curves to the right of the mid-point can be
generated from the relation s(τ) = S(τC − τ), where τC is the traveler's proper time on
arrival at the far point. Also, it's obvious that s(τ) + S(τ) is just the full Earth-to-destination
distance as length-contracted in the traveler's momentarily co-moving frame.
If we consider journeys with non-constant proper accelerations, it's possible to construct
some slightly peculiar-sounding scenarios. For example, suppose the traveler accelerates
in such a way that his velocity is 1 − exp(−kt) for some constant k. It follows that the
distance in the Earth's frame at time t is [kt + exp(−kt) − 1]/k, so the distance in the
traveler's frame is

([kt + exp(−kt) − 1]/k) √(1 − v2) = ([kt + exp(−kt) − 1]/k) √(exp(−kt)(2 − exp(−kt)))

This function initially increases, then reaches a maximum, and then asymptotically
approaches zero. With k = 1 year−1 the maximum occurs at roughly 3 years and a distance
of about 0.65 light-years (relative to the traveler's frame). Thus we have the seemingly
paradoxical situation that the Earth "becomes closer" to the traveler as he moves further
away.
This is not as strange as it may sound at first. Suppose we leave home and drive for 1
hour at a constant speed of 20 mph. We could then say that we are "1 hour from home".
Now suppose we suddenly accelerate to 40 mph. How far (in time) are we away from
home? If we extrapolate our current worldline back in time, we are only 1/2 hour from
home. If we speed up some more, our "distance" (in terms of time) from home becomes
less and less. Of course, we have to speed up at a rate that more than compensates for the
increasing road distance, but that's not hard to do (in theory). The only difference
between this scenario and the relativistic one is that when we accelerate to relativistic
speeds both our time and our space axes are affected, so when we extrapolate our current
frame of reference back to Earth we find that both the time and the distance are reduced.
Another interesting acceleration profile is the one that results from a constant nozzle
velocity u and constant exhaust mass flow rate w = −dm0/dτ, where τ is the proper time of
the rocket, so the effective force is uw throughout the acceleration. This does not result in
constant proper acceleration, because the rest mass of the rocket is being reduced while
the applied proper force remains constant. In this case we have

m0(τ) (dv/dτ)/(1 − v2) = u w,    dτ = √(1 − v2) dt
where t is the time of the initial coordinates and v is the velocity of the rocket with
respect to those coordinates. Also, we have m0(τ) = m0(0) − wτ, so we can integrate to
get the speed. Letting r(τ) denote the ratio [m0(0) − wτ]/m0(0), which is the ratio of the
rest mass at proper time τ to the rest mass at the start of the acceleration, the result is

v = (1 − r^(2u))/(1 + r^(2u))

so we have

1/√(1 − v2) = (r^(−u) + r^u)/2
Also, since dt = dτ/√(1 − v2), we can integrate this to get the coordinate time t as a
function of the rocket's proper time τ.
In the limit as the nozzle velocity u approaches 1, this expression reduces to
It's interesting that for photonic propulsion (u=1) the mass ratio r is identical to the
Doppler frequency shift of the exhaust photons relative to the original rest frame, i.e., we
have r = √((1 − v)/(1 + v)).
Thus if the rocket continues to convert its own mass to energy and eject it as photons of a
fixed frequency, the energy of each photon as seen from the fixed point of origin is
exactly proportional to the rest mass of the rocket at the moment when the photon was
ejected. Also, since r(t) is the current rest mass m0(t) divided by the original rest mass
m0(0), and since the inertial mass m(t) is related to the rest mass m0(t) by the equation
m(t) = m0(t)/√(1 − v2), we find that the inertial mass m(t) of the rocket is given as a
function of the rocket's velocity v by the equation

m(t) = m0(0)/(1 + v)
Thus we find that as the rocket's velocity goes to 1 at the moment when it is converting
the last of its rest mass into energy, so its rest mass is going to zero, its inertial mass goes
to m0(0)/2, i.e., exactly half of the rocket's original rest mass. This is to be expected,
because momentum must be conserved, and all the photons except that very last have
been ejected in the rearward direction at the speed of light, leaving only the last
remaining photon (which has nothing to react against) moving in the forward direction,
so it must have momentum equal to all the rearward momentum of the ejected photons.
The momentum of a photon is p = hν/c = E/c, so in units with c = 1 we have p = E. The
original energy content of the rocket was its rest mass, m0(0), which has been entirely
converted to energy, half in the forward direction (in the last remaining super-energetic
photon) and half in the rearward direction (the progressively more redshifted stream of
exhaust photons).
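These photonic-rocket relations are easy to check numerically. The sketch below (assuming the formulas quoted above, with r the rest-mass ratio m0(τ)/m0(0)) verifies that the Doppler factor of the exhaust equals r and that the inertial mass tends to half the original rest mass:

```python
def photon_rocket(r):
    """Photonic propulsion (u = 1): speed, inertial mass, and exhaust
    Doppler factor when the rest mass has fallen to r times its original
    value (masses in units of m0(0), speeds in units of c)."""
    v = (1 - r**2) / (1 + r**2)
    m = 1.0 / (1 + v)                    # inertial mass, tends to 1/2
    doppler = ((1 - v) / (1 + v))**0.5   # redshift factor; equals r
    return v, m, doppler

for r in (1.0, 0.5, 0.1, 0.01):
    v, m, d = photon_rocket(r)
    print(f"r = {r:5.2f}:  v = {v:.4f},  m/m0(0) = {m:.4f},  doppler = {d:.4f}")
```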
The preceding discussion focused on purely linear motion, but we can just as well
consider arbitrary accelerated paths. It's trivial to determine the lapse of proper time along
any given timelike path as a function of an inertial time coordinate simply by integrating
d over the path, but it's a bit more challenging to express the lapse of proper time along
one arbitrary worldline with respect to the lapse of proper time along another, because the
appropriate correspondence is ambiguous. Perhaps the most natural correspondence is
given by mapping the proper time along the reference worldline to the proper time along
the subject worldline by means of the instantaneously co-moving planes of inertial
simultaneity of the reference worldline. In other words, to each point along the reference
worldline we can assign a locus of simultaneous points based on co-moving inertial
coordinates at that point, and we can then find the intersections of these loci with the
subject worldline.
Quantitatively, suppose the reference worldline W1 is given parametrically by the
functions x1(t), y1(t), z1(t) where x,y,z,t are inertial coordinates. From this we can
determine the derivatives vx = dx1/dt, vy = dy1/dt, and vz = dz1/dt. These also represent
the components of the gradient of the space of simultaneity of the instantaneously
co-moving inertial frame of the object. In other words, the spaces of simultaneity for W1
have the partial derivatives

∂t/∂x = vx,    ∂t/∂y = vy,    ∂t/∂z = vz
These enable us to express the total differential time as a function of the differentials of
the spatial coordinates

dt = vx dx + vy dy + vz dz
If the subject worldline W2 is expressed parametrically by the functions x2(t), y2(t), z2(t),
and if the inertial plane of simultaneity of the event at coordinate time t1 on W1 is
intersected by W2 at the coordinate time t2, then the difference in coordinate times
between these two events can be expressed in terms of the differences in their spatial
coordinates by substituting into the above total differential the quantities dt = t2 − t1,
dx = x2(t2) − x1(t1), and so on. The result is
where the derivatives of x1, y1, and z1 are evaluated at t1. Rearranging terms and omitting
the indications of functional dependence for the W1 coordinates, this can be written in the form
This is an implicit formula for the value of t2 on W2 corresponding to t1 on W1 based on
the instantaneous inertial simultaneity of W1. Every quantity in this equation is an explicit
function of either t1 or t2, so we can solve for t2 to give a function F1 such that t2 = F1(t1).
We can also integrate the absolute intervals along the two worldlines to give the functions
f1 and f2 which relate the proper times along W1 and W2 to the coordinate time, i.e., we
have τ1 = f1(t) and τ2 = f2(t). With these substitutions we arrive at the general form of the
expression for τ2 with respect to τ1:
To illustrate, suppose W1 is the worldline of a particle moving along some arbitrary path
and W2 is just the worldline of the spatial origin of the inertial coordinates. In this case
we have x2 = y2 = z2 = 0 and τ2 = t2, so the above formula reduces to
where r and v are the position and velocity vectors of W1 with respect to the inertial rest
coordinates of W2. Differentiating with respect to t1, and multiplying through by
dt1/dτ1 = (1 − v2)−1/2, we get
where a is the acceleration vector and θ is the angle between the r and a vectors. Thus if
the acceleration of W1 is zero, we have dτ2/dτ1 = (1 − v2)1/2. On the other hand, if W2 is
moving around W1 in a circle at constant speed, we have a = −v2/r and the position and
acceleration vectors are perpendicular, giving the result dτ2/dτ1 = (1 − v2)−1/2. This is
consistent with the fact that, if the object is moving tangentially, the plane of simultaneity
for its instantaneously co-moving inertial coordinate system intersects with the constant-t
plane along the line from the object to the origin, and hence the time difference is entirely
due to the transverse dilation (i.e., the square root of 1 − v2 factor).
If the speed v of W1 is constant, then we have the explicit equation
To illustrate, suppose the object whose worldline is W2 begins at the origin at t = 0 and
thereafter moves counter-clockwise in a circle tangent to the origin in the xy plane with a
constant angular velocity as illustrated below.
In this case the object's spatial coordinates and their derivatives as a function of
coordinate time are
Substituting into the equation for τ2 and replacing each appearance of t with t2
gives the result
This is the proper time of the spatial origin according to the instantaneous time slices of
the moving object's proper time. This function is plotted below with R = 1 and v = 0.8.
Also shown is the stable component
Naturally if the circle radius R goes to infinity the value of the sine function approaches
the argument, and so the above expression reduces to
This confirms the reciprocity between the two worldlines when both are inertial. We can
also differentiate the full expression for τ2 as a function of τ1 to give the relation between
the differentials
This relation is plotted in the figure below, again for R = 1 and v = 0.8.
It's also clear from this expression that as R goes to infinity the cosine approaches 1, and
we again have
Incidentally, the above equation shows that the ratio of time rates equals 1 when the
moving object is at a certain circumferential distance from the point of tangency. Hence,
for small velocities v the configuration of "equal time rates" occurs when the moving
object is at π/3 radians from the point of tangency. On
the other hand, as v approaches 1, the configuration of equal time rates occurs when the
moving object approaches the point of tangency. This may seem surprising at first,
because we might expect the proper time of the origin to be dilated with respect to the
proper time of the tangentially moving object. However, the planes of simultaneity of the
moving object are tilting very rapidly in this condition, and this offsets the usual time
dilation factor. As v approaches 1, these two effects approach equal magnitude, and
cancel out for a location approaching the point of tangency.
2.10 The Starry Messenger
“Let God look and judge!”
Cardinal Humbert, 1054 AD
Maxwell's equations are very successful at describing the propagation of light based on
the model of electromagnetic waves, not only in material media but also in a vacuum,
which is considered to be a region free of material substances. According to this model,
light propagates in vacuum at a speed c = 1/√(μ0ε0), where μ0 is the permeability
constant and ε0 is the permittivity of the vacuum, defined in terms of Coulomb's law for
electrostatic force

F = q1q2/(4πε0r2)

The SI system of units is defined so that the permeability constant takes on the value
μ0 = 4π×10−7 tesla meter per ampere, and we can measure the value of the permittivity
(typically by measuring the capacitance C between parallel plates of area A separated by
a distance d, using the relation ε0 = Cd/A) to have the value ε0 = (8.854187818)×10−12
coulombs2 per newton meter2. This leads to the familiar value

c = 1/√(μ0ε0) ≈ (2.998)×108 meters per second
for the speed of light in a vacuum. Of course, if we place some substance between our
capacitor plates when determining ε0 we will generally get a different value, so the speed of
light is different in various media. This leads to the index of refraction of various
transparent media, defined as n = cvacuum / cmedium. Thus Maxwell's theory of
electromagnetism seems to clearly imply that the speed of propagation of such electromagnetic
waves depends only on the medium, and is independent of the speed of the source.
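As a quick check of this chain of definitions, the speed of light can be computed directly from the two electromagnetic constants (a minimal sketch using the SI values quoted above):

```python
import math

mu0  = 4 * math.pi * 1e-7   # permeability, tesla meter per ampere (SI definition)
eps0 = 8.854187818e-12      # permittivity, coulombs^2 per newton meter^2 (measured)

c = 1 / math.sqrt(mu0 * eps0)
print(f"c = {c:.6e} meters/second")   # about 2.998e8 m/s
```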
On the other hand, it also suggests that the speed of light depends on the motion of the
medium, which is easy to imagine in the case of a material medium like glass, but not so
easy if the "medium" is the vacuum of empty space. How can we even assign a state of
motion to the vacuum? In struggling to answer this question, people tried to imagine that
even the vacuum is permeated with some material-like substance, the ether, to which a
definite state of motion could be assigned. On this basis it was natural to suppose that
Maxwell's equations were strictly applicable (and the speed of light was exactly c) only
with respect to the absolute rest frame of the ether. With respect to other frames of
reference they expected to find that the speed of light differed, depending on the direction
of travel. Likewise we would expect to find corresponding differences and anisotropies in
the capacitance of the vacuum when measured with plates moving at high speed relative
to the ether.
However, when extremely precise interferometer measurements were carried out to find a
directional variation in the speed of light on the Earth's surface (presumably moving
through the ether at fairly high speed due to the Earth's rotation and its orbital motion
around the Sun), essentially no directional variation in light speed was found that could
be attributed to the motion of the apparatus through the ether. Of course, it had occurred
to people that the ether might be "dragged along" by the Earth, so that objects on the
Earth's surface are essentially at rest in the local ether. However, these "convection"
hypotheses are inconsistent with other observed phenomena, notably the aberration of
starlight, which can only be explained in an ether theory if it is assumed that an observer
on the Earth's surface is not at rest with respect to the local ether. Also, careful terrestrial
measurements of the paths of light near rapidly moving massive objects showed no sign
of any "convection". Considering all this, the situation was considered to be quite
perplexing.
There is a completely different approach that could be taken to modeling the phenomena
of light, provided we're willing to reject Maxwell's theory of electromagnetic waves, and
adopt instead a model similar to the one that Newton often seemed to have in mind,
namely, an "emission theory". One advocate of such a theory in the early 1900s
was Walter Ritz, who rejected Maxwell's equations on the grounds that the advanced
potentials allowed by those equations were unrealistic. Ritz debated this point with Albert
Einstein, who argued that the observed asymmetry between advanced and retarded waves
is essentially statistical in origin, due to the improbability of conditions needed to
produce coherent advanced waves. Neither man persuaded the other. (Ironically, Einstein
himself had already posited that Maxwell's equations were inadequate to fully represent
the behavior of light, and suggested a model that contains certain attributes of an
emission theory to account for the photo-electric effect, but this challenge to Maxwell's
equations was on a more subtle and profound level than Ritz's objection to advanced
potentials.)
In place of Maxwell's equations and the electromagnetic wave model of light, the
advocates of "emission theories" generally assume a Galilean or Newtonian spacetime,
and postulate that light is emitted and propagates away from the source (perhaps like
Newtonian corpuscles) at a speed of c relative to the source. Thus, according to
emission theories, if the source is moving directly toward or away from us with a speed v,
then the light from that source is approaching us with a speed c+v or c−v respectively.
Naturally this class of theories is compatible with experiments such as the one performed
by Michelson and Morley, since the source of the light is moving along with the rest of
the apparatus, so we wouldn't expect to find any directional variation in the speed of light
in such experiments. Also, an emission theory of light is compatible with stellar
aberration, at least up to the limits of observational resolution. In fact, James Bradley (the
discoverer of aberration) originally explained it on this very basis.
Of course, even an emission theory must account for the variations in light speed in
different media, which means it can't simply say that the speed of light depends only on
the speed of the source. It must also be dependent on the medium through which it is
traveling, and presumably it must have a "terminal velocity" in each medium, i.e., a
certain characteristic speed that it can maintain indefinitely as it propagates through the
medium. (Obviously we never see light come to rest, nor even do we observe noticeable
"slowing" of light in a given medium, so it must always exhibit a characteristic speed.)
Furthermore, based on the principles of an emission theory, the medium-dependent speed
must be defined relative to the rest frame of the medium.
For example, if the characteristic speed of light in water is cw, and a body of water is
moving relative to us with a speed v, then (according to an emission theory) the light
must move with a speed cw + v relative to us when it travels for some significant distance
through that water, so that it has reached its "steady-state" speed in the water. In optics
this distance is called the "extinction distance", and it is known to be proportional to
1/(ρλ), where ρ is the density of the medium and λ is the wavelength of the light. The
extinction distance for most common media for optical light is extremely small, so
essentially the light reaches its steady-state speed as soon as it enters the medium.
An experiment performed by Fizeau in 1851 to test for optical "convection" also sheds
light on the viability of emission theories. Fizeau sent beams of light in both directions
through a pipe of rapidly moving water to determine if the light was "dragged along" by
the water. Since the refractive index of water is about n = c/cw = 1.33 where cw is the
speed of light in water, we know that cw equals c/1.33, which is about 75% of the speed
of light in a vacuum. The question is, if the water is in motion relative to us, what is the
speed (relative to us) of the light in the water?
If light propagates in an absolutely fixed background ether, and isn't dragged along by the
water at all, we would expect the light speed to still be cw relative to the fixed ether,
regardless of how the water moves. This is admittedly a rather odd hypothesis (i.e., that
light has a characteristic speed in water, but that this speed is relative to a fixed
background ether, independent of the speed of the water), but it is one possibility that
can't be ruled out a priori. In this case the difference in travel times for the two directions
would be zero,
which implies no phase shift in the interferometer. On the other hand, if emission theories
are right, the speed of the light in the water (which is moving at the speed v) should be
cw+v in the direction of the water's motion, and cwv in the opposite direction. On this
basis the difference in travel times would be proportional to

1/(cw − v) − 1/(cw + v) = 2v/(cw2 − v2)
This is a very small amount (remembering that cw is about 75% of the speed of light in a
vacuum), but it is large enough that it would be measurable with delicate interferometry.
The results of Fizeau's experiment turned out to be consistent with neither of the above
predictions. Instead, he found that the time difference (proportional to the phase shift)
was a bit less than half of the prediction for an emission theory (i.e., 43.5% of the
prediction based on the assumption of complete convection). By varying the density of
the fluid we can vary the refractive index and therefore cw, and we find that the measured
phase shift always indicates a time difference of (1 − cw2) times the prediction of the
emission theory. For water we have cw = 0.7518, so the time lag is (1 − cw2) = 0.4346 of
the emission theory prediction.
This implies that if we let S(cw,v) and S(cw,−v) denote the speeds of light in the two
directions, we have
By partial fraction decomposition this can be written in the form
Also, in view of the symmetry S(u,v) = S(v,u), we can swap cw with v to give
Solving these last two equations for A and B gives A = 1 − vcw and B = 1 + vcw, so the
function S is

S(cw,v) = (cw + v)/(1 + cw v)
which of course is the relativistic formula for the composition of velocities. So, even if
we rejected Maxwell's equations, it still appears that emission theories cannot be
reconciled with Fizeau's experimental results.
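The agreement with the relativistic composition formula can be illustrated numerically. The sketch below (an illustration with an assumed small water speed v, in units with c = 1) compares the full-convection emission prediction with the relativistic result, and recovers the measured drag fraction 1 − cw2:

```python
n, v = 1.33, 1e-5       # refractive index of water; assumed water speed (c = 1)
cw = 1 / n              # speed of light in still water

emission   = cw + v                   # emission theory: full convection
relativity = (cw + v) / (1 + cw*v)    # relativistic composition of velocities

drag = (relativity - cw) / v          # fraction of v the light actually picks up
print(f"emission speed     = {emission:.8f}")
print(f"relativistic speed = {relativity:.8f}")
print(f"drag fraction      = {drag:.4f}  (1 - cw^2 = {1 - cw**2:.4f})")
```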
More evidence ruling out simple emission theories comes from observations of a
supernova made by Chinese astronomers in the year 1054 AD. When a star explodes as a
supernova, the initial shock wave moves outward through the star's interior in just
seconds, and elevates the temperature of the material to such a high level that fusion is
initiated, and much of the lighter elements are fused into heavier elements, including
some even heavier than iron. (This process yields most of the interesting elements that we
find in the world around us.) Material is flung out at high speeds in all directions, and this
material emits enormous amounts of radiation over a wide range of frequencies,
including x-rays and gamma rays. Based on the broad range of spectral shifts (resulting
from the Doppler effect), it's clear that the sources of this radiation have a range of speeds
relative to the Earth of over 10000 km/sec. This is because we are receiving light emitted
by some material that was flung out from the supernova in the direction away from the
Earth, and by other material that was flung out in the direction toward the Earth.
If the supernova was located a distance D from us, then the time for the "light" (i.e., EM
radiation of all frequencies) to reach us should be roughly D/c, where c is the speed of
light. However, if we postulate that the actual speed of the light as it travels through
interstellar space is affected by the speed of the source, and if the source was moving
with a speed v relative to the Earth at the time of emission, then we would conclude that
the light traveled at a speed of c+v on its journey to the Earth. Therefore, if the sources
of light have velocities ranging from -v to +v, the first light from the initial explosion to
reach the Earth would arrive at the time D/(c+v), whereas the last light from the initial
explosion to reach the Earth would arrive at D/(c−v). This is illustrated in the figure
below.
Hence the arrival times for light from the initial explosion event would be spread out over
an interval of length D/(c−v) − D/(c+v), which equals (D/c)(2v/c)/(1 − (v/c)2). The
denominator is virtually 1, so we can say the interval of arrival times for the light from
the explosion event of a supernova at a distance D is about (D/c)(2v/c), where v is the
maximum speed at which radiating material is flung out from the supernova.
However, in actual observations of supernovae we do not see this "spreading out" of the
event. For example, the Crab supernova was about 6000 light years away, so we had D/c
= 6000 years, and with a range of source speeds of 10000 km/sec (meaning v = 5000 km/sec)
we would expect a range of arrival times of 200 years, whereas in fact the Crab was only
bright for less than a year, according to the observations recorded by Chinese
astronomers in July of 1054 AD. For a few weeks the "guest star", as they called it, in the
constellation Taurus was the brightest star in the sky, and was even visible in the daytime
for twenty-six days. Within two years it had disappeared completely to the naked eye. (It
was not visible in Europe or the Islamic countries, since Taurus is below the horizon of
the night sky in July for northern latitudes.) In the time since the star went supernova the
debris has expanded to its present dimensions of about 3 light years, which implies that
this material was moving at only (!) about 1/300 the speed of light. Still, even with this
value of v, the bright explosion event should have been visible on Earth for about 40
years (if the light really moved through space at c ± v). Hence we can conclude that the
light actually propagated through space at a speed essentially independent of the speed of
the sources.
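The expected spread is easy to compute. The following sketch (using the Crab figures quoted above, with distances in light-years and speeds in units of c) evaluates (D/c)(2v/c) for the two ejecta-speed estimates:

```python
def spread_years(D_ly, v_over_c):
    """Interval over which light from a single explosion event would arrive,
    if light traveled at c +/- v as in an emission theory.  The denominator
    1 - (v/c)^2 is essentially 1 for these speeds."""
    return D_ly * 2 * v_over_c

print(spread_years(6000, 5000/300000))  # ~200 years, Doppler-based v = 5000 km/sec
print(spread_years(6000, 1/300))        # ~40 years, debris-expansion estimate
```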
However, although this source independence of light speed is obviously consistent with
Maxwell's equations and special relativity, we should be careful not to read too much into
it. In particular, this isn't direct proof that the speed of light in a vacuum is independent of
the speed of the source, because for visible light (which is all that was noted on Earth in
July of 1054 AD) the extinction distance in the gas and dust of interstellar space is much
less than the 6000 light year distance of the Crab nebula. In other words, for visible light,
interstellar space is not a vacuum, at least not over distances of many light years. Hence
it's possible to argue that even if the initial speed of light in a vacuum was c+v, it would
have slowed to c for most of its journey to Earth. Admittedly, the details of such a
counter-factual argument are lacking (because we don't really know the laws of
propagation of light in a universe where the speed of light is dependent on the speed of
the source, nor how the frequency and wavelength would be altered by interaction with a
medium, so we don't know if the extinction distance is even relevant), but it's not totally
implausible that the static interstellar dust might affect the propagation of light in such a
way as to obscure the source dependence, and the extinction distance seems a reasonable
way of quantifying this potential effect.
A better test of the source-independence of light speed based on astronomical
observations is to use light from the high-energy end of the spectrum. As noted above,
the extinction distance is proportional to 1/(). For some frequencies of x-rays and
gamma rays the extinction distance in interstellar space is about 60000 light years, much
greater than the distances to many supernova events, as well as binary stars and other
configurations with identifiable properties. By observing these events and objects it has
been found that the arrival times of light are essentially independent of frequency, e.g.,
the x-rays associated with a particular identifiable event arrive at the same time as the
visible light for that event, even though the distance to the event is much less than the
extinction distance for x-rays. This gives strong evidence that the speed of light in a
vacuum is actually invariant and independent of the motion of the source.
With the aid of modern spectroscopy we can now examine supernovae events in detail,
and it has been found that they exhibit several characteristic emission lines, particularly
the signature of atomic hydrogen at 6563 angstroms. Using this as a marker we can
determine the Doppler shift of the radiation, from which we can infer the speed of the
source. The energy emitted by a star going supernova is comparable to all the energy that
it emitted during millions or even billions of years of stable evolution. Three main
categories of supernovae have been identified, depending on the mass of the original star
and how much of its "nuclear fuel" remains. In all cases the maximum luminosity occurs
within just the first few days, and drops by 2 or 3 magnitudes within a month, and by 5 or
6 magnitudes within a year. Hence we can conclude that the light actually propagated
through empty space at a speed essentially independent of the speed of the sources.
Another interesting observation involving the propagation of light was first proposed in
1913 by DeSitter. He wondered whether, if we assume the speed of light in a vacuum is
always c with respect to the source, and if we assume a Galilean spacetime, we would
notice anything different in the appearances of things. He considered the appearance of
binary star systems, i.e., two stars that orbit around each other. More than half of all the
visible stars in the night sky are actually double stars, i.e., two stars orbiting each other,
and the elements of their orbits may be inferred from spectroscopic measurements of
their radial speeds as seen from the Earth. DeSitter's basic idea was that if two stars are
orbiting each other and we are observing them from the plane of their mutual orbit, the
stars will be sometimes moving toward the Earth rapidly, and sometimes away.
According to an emission theory this orbital component of velocity should be added to or
subtracted from the speed of light. As a result, over the hundreds or thousands of years
that it takes the light to reach the Earth, the arrival times of the light from approaching
and receding sources would be very different.
Now, before we go any further, we should point out a potential difficulty for this kind of
observation. The problem (again) is that the "vacuum" of empty space is not really a
perfect vacuum, but contains small and sparse particles of dust and gas. Consequently it
acts as a material and, as noted above, light will reach its steady-state velocity with
respect to that interstellar dust after having traveled beyond the extinction distance. Since
the extinction distance for visible light in interstellar space is quite short, the light will be
moving at essentially c for almost its entire travel time, regardless of the original speed.
For this reason, it's questionable whether visual observations of celestial objects can
provide good tests of emission theory predictions. However, once again we can make use
of the high-frequency end of the spectrum to strengthen the tests. If we focus on light in
the frequency range of, say, x-rays and gamma rays, the extinction distance is much
larger than the distances to many binary star systems, so we can carry out DeSitter's
proposed observation (in principle) if we use x-rays, and this has actually been done by
Brecher in 1977.
With the proviso that we will be focusing on light whose extinction distance is much
greater than the distance from the binary star system to Earth (making the speed of the
light simply c plus the speed of the star at the time of emission), how should we expect a
binary star system to appear? Let's consider one of the stars in the binary system, and
write its coordinates and their derivatives as
where D is the distance from the Earth to the center of the binary star system, R is the
radius of the star's orbit about the system's center, and w is the angular speed of the star.
We also have the components of the emissive light speed
c2 = cx2 + cy2
In these terms we can write the components of the absolute speed of the light emitted
from the star at time t:
Now, in order to reach the Earth at time T the light emitted at time t must travel in the x
direction from x(t) to 0 at a speed of cx for a time Δt = T − t, and similarly for the y
direction. Hence we have
Substituting for x, y, and the light speed components cx and cy, we have
Squaring both sides of both equations, and adding the resulting equations together, gives
Re-arranging terms gives the quadratic in Δt
If we define the normalized parameters
then the quadratic in Δt becomes
Solving this quadratic for Δt = T − t and then adding t to both sides gives the arrival
time T on Earth as a function of the emission time t on the star
If the star's speed v is much less than the speed of light, this can be expressed very nearly as
The derivative of T with respect to t is
and this takes its minimum value when t = 0, where we have
Consequently we find the DeSitter effect, i.e., dT/dt goes negative if d > r / v2. Now, we
know from Kepler's third law (which also applies in relativistic gravity with the
appropriate choice of coordinates) that m = r3 w2 = r v2, so we can substitute m/r for v2
in our inequality to give the condition d > r2 / m. Thus if the distance of the binary star
system from Earth exceeds the square of the system's orbital radius divided by the
system's mass (in geometric units) we would expect DeSitter's apparitions - assuming the
speed of light is c ± v.
As an example, for a binary star system a distance of d = 20000 light-years away, with an
orbital radius of r = 0.00001 light-years, and an orbital speed of v = 0.00005, the arrival
time of the light as a function of the emission time is as shown below:
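Since the plot itself is not reproduced here, the following simplified sketch (a first-order, purely radial approximation T ≈ t + d − d·v·cos(wt), not the exact quadratic solution above) shows how the multi-valued arrival times arise for the quoted example values:

```python
import math

d, r, v = 20000.0, 1e-5, 5e-5   # distance, orbital radius, orbital speed (c = 1)
w = v / r                       # orbital angular speed, radians per year

def arrival(t):
    # Emission theory, radial first-order approximation: light leaves at
    # roughly c + v*cos(w*t), so T ~ t + d - d*v*cos(w*t).
    return t + d - d * v * math.cos(w * t)

# dT/dt = 1 + d*v*w*sin(w*t) has minimum 1 - d*v*w, so non-monotonic
# arrival times (apparitions) occur exactly when d > r/v^2.
print("apparitions expected:", d > r / v**2)   # True for these values
```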
This corresponds to a star system with only about 1/6 solar mass, and an orbital radius of
about 1.5 million kilometers. At any given reception time on Earth we can typically "see"
at least three separate emission events from the same star at different points in its orbit.
These ghostly apparitions are the effect that DeSitter tried to find in photographs of many
binary star systems, but none exhibited this effect. He wrote
The observed velocities of spectroscopic doubles are as a matter of fact
satisfactorily represented by a Keplerian motion. Moreover in many cases the
orbit derived from the radial velocities is confirmed by visual observations (as for
δ Equulei, ζ Herculis, etc.) or by eclipse observations (as in Algol variables). We
can thus not avoid the conclusion [that] the velocity of light is independent of the
motion of the source. Ritz’s theory would force us to assume that the motion of
the double stars is governed not by Newton’s law, but by a much more
complicated law, depending on the star’s distance from the earth, which is
evidently absurd.
Of course, he was looking in the frequency range of visible light, which we've noted is
subject to extinction. However, in the x-ray range we can (in principle) perform the same
basic test, and yet we still find no traces of these ghostly apparitions in binary stars, nor
do we ever see the stellar components going in "reverse time" as we would according to
the above profile. (Needless to say, for star systems at great distances it is not possible to
distinguish the changes in transverse positions but, as noted above, by examining the
Doppler shift of the radial components of their motions we can infer the motions of the
individual bodies.) Hence these observations support the proposition that the speed of
light in empty space is essentially independent of the speed of the source.
In comparison, if we take the relativistic approach with constant light speed c,
independent of the speed of the source, an analysis similar to the above gives the
approximate result
whose derivative is
which is always positive for any v less than 1. This means we can't possibly have images
arriving in reverse time, nor can we have any multiple appearances of the components of
the binary star system.
Regarding this subject, Robert Shankland recalled Einstein telling him (in 1950) that he
had himself considered an emission theory of light, similar to Ritz's theory, during the
years before 1905, but he abandoned it because
he could think of no form of differential equation which could have solutions
representing waves whose velocity depended on the motion of the source. In this
case the emission theory would lead to phase relations such that the propagated
light would be all badly "mixed up" and might even "back up on itself". He asked
me, "Do you understand that?" I said no, and he carefully repeated it all. When he
came to the "mixed up" part, he waved his hands before his face and laughed, an
open hearty laugh at the idea!
2.11 Thomas Precession
At the first turning of the second stair
I turned and saw below
The same shape twisted on the banister
Under the vapour in the fetid air
Struggling with the devil of the stairs who wears
The deceitful face of hope and of despair.
T. S. Eliot, 1930
Consider a slanted rod AB in the xy plane moving at speed u in the positive y direction as
indicated in the left-hand figure below. The A end of the rod crosses the x axis at time t =
0, whereas the B end does not cross until time t = 1. Hence we conclude that the rod is
oriented at some non-zero angle with respect to the xyt coordinate system. However,
suppose we view the same situation with respect to a system of inertial coordinates x'y't'
(with x' parallel to x) moving in the positive x direction with speed v. In accord with
special relativity, the x' and t' axes are skewed with respect to the x and t axes as shown
in the right-hand figure below.
As a result of this skew, the B end of the rod crosses the x' axis at the same instant (i.e.,
the same t') as does the A end of the rod, which implies that the rod is parallel to the x'
axis - and therefore to the x axis - based on the simultaneity of the x'y't' inertial frame.
This implies that if a rod was parallel to the x axis and moving in the positive x direction
with speed v, it would be perfectly aligned with the rod AB as the latter passed through
the x' axis. Thus if a rod is initially aligned with the x axis and moving with speed v in
the positive x direction relative to a given fixed inertial frame, and then at some instant
with respect to the rod's inertial rest frame it instantaneously changes course and begins
to move purely in the positive y direction, without ever changing its orientation, we find
that its orientation does change with respect to the original fixed frame of reference. This
is because the changes in the states of motion of the individual parts of the rod do not
occur simultaneously with respect to the original rest frame.
In general, whenever we transport a vector, always spatially parallel to itself in its own
instantaneous rest frame, over an accelerated path, we find that its orientation changes
relative to any given fixed inertial frame. This is the basic idea behind Thomas
precession, named after Llewellyn Thomas, who first wrote about it in 1927. For a simple
application of this phenomenon, consider a particle moving around a circular path. The
particle undergoes continuous acceleration, but at each instant it is at rest with respect to
the momentarily co-moving inertial frame. If we consider the "parallel transport" of a
vector around the continuous cycle of momentary inertial rest frames of the particle, we
find that the vector does not remain fixed. Instead, it "precesses" as we follow it around
the cycle. This relativistic precession (which has no counter-part in non-relativistic
physics) actually has observable consequences in the behavior of sub-atomic particles
(see below).
To understand how the Thomas precession for simple circular motion can be deduced
from the basic principles of special relativity, we can begin by supposing the circular path
of a particle is approximated by an n-sided polygon, and consider the transition from one
of these sides to the next, as illustrated below.
Let v denote the circumferential speed of the particle in the counter-clockwise direction,
and note that θ = 2π/n for an arbitrary n-sided regular polygon. (In the drawing above we
have set n = 8). The dashed lines represent the loci of positions of the spatial origins of
two inertial frames K' and K" that are co-moving with the particle on consecutive edges.
Now suppose the vector ab at rest in K' makes an angle α1 with respect to the x axis (in
terms of frame K), and suppose the vector AB at rest in K" makes an angle of α2 with
respect to the x axis. The figure below shows the positions of these two vectors at several
consecutive instants of the frame K.
Clearly if α1 is not equal to α2, the two vectors will not coincide at the instant when their
origins coincide. However, this assumes we use the definition of simultaneity associated
with the inertial coordinate system K (i.e., the rest system of the polygon). The system K'
is moving in the positive x direction at the speed v, so its time-slices are skewed relative
to those of the polygon's frame of reference. Because of this skew, it is possible for the
vectors ab and AB to be parallel with respect to K' even though they are not parallel with
respect to K.
The equations of the moving vectors ab and AB are easily seen to be
This confirms that at t = 0 (or at any fixed t) these lines are not parallel unless α1 = α2.
However, if we substitute from the Lorentz transformation between the frames K and K'
, the equations of the moving vectors become
At t' = 0 these equations reduce to
In the limit as the number n of sides of the polygon increases and the angle θ approaches
zero, the value of cos(θ) approaches 1 (to the second order), and the value of sin(θ)
approaches θ. Hence the equations of the two moving vectors approach
Setting these equal to each other, multiplying through by 1/x', and re-arranging, we get
the condition
Recalling the trigonometric identity
and noting that α1 approaches α2 in the limit as θ goes to zero, the right-hand factor on
the right side can be taken as
where α is the limiting value of both α1 and α2 as θ goes to zero. Making use of these
substitutions, and also noting that tan(α2 − α1) approaches α2 − α1, the condition for the
two families of lines to be parallel with respect to frame K' (in the limit as θ goes to zero) is
This is the amount by which the two vectors are skewed with respect to the K frame due
to the transition around a single vertex of the polygon, given that the transported vector
makes an angle α with the edge leading into the vertex. The total precession resulting
from one complete revolution around the n-sided polygon is n times the mean value of
this skew for each of the n vertices of the polygon. Since n = 2π/θ, we can express the total
precession as
If the circumferential speed v is small compared with 1, the denominator of this
expression is closely approximated by 1, and the transported vector changes its absolute
orientation only very slightly on one revolution. In this case it follows that α varies
essentially uniformly from 0 to 2π as the vector is transported around the circle. Hence
for small v the total precession for one revolution is given closely by πv2.
On the other hand, if v is not small, we can consider the general situation illustrated
below.
The variable φ signifies the absolute angular position of the transported vector at any
given time, and ψ signifies the vector's orientation relative to the positive y axis. As
before, α denotes the angle of the vector relative to the local tangent "edge". We have the
relations
We also have the following identifications involving the parameters φ and α:
Substituting dφ + dα for dψ and re-arranging, we get
This can be integrated explicitly to give φ as a function of α. Since ψ equals φ + α, we can
also give ψ as a function of α, leading to the parametric equations for the transported
vector. One complete "branch" is given by allowing α to range from −π/2 to π/2, giving
the angle ψ from −π/2 to π/2, and the angle φ from (π/2)(1−…) to −(π/2)(1−…). This is
shown in the figure below.
Consequently, a full cycle of φ corresponds to 2π/… times the above range, and so the
average change in ψ per revolution (i.e., per 2π increase in φ) is
This function is plotted in the figure below, along with the "small v" approximation.
For all v less than 1 we can expand the general expression into a series
These expressions represent the average change per revolution, because the cycles of ψ
do not in general coincide with the cycles of φ. Resonance occurs when the ratio of the
change in ψ to the change in φ is rational. This is true if and only if there exist integers
M,N such that
Adding 1 to both sides, we can set 1 + (M/N) equal to m/n for integers m and n, and we
can then square both sides and re-arrange to find that the "resonant" values of v
are given by
where m,n are integers with |n| less than |m|.
We previously derived the low-speed approximation of the amount of Thomas precession
for a vector subjected to "parallel transport" around a circle with a constant
circumferential speed v in the form πv2 radians per revolution. Dividing this by 2π gives
the average precession rate of v2/2 in units of radians per radian (of travel around the
circle). We can also determine the average rate of Thomas precession, with units of
radians per second. Letting ωo denote the orbital angular velocity (i.e., the angular
velocity with which the vector is transported around the circle of radius r), we have v =
ωo r and a = v2/r where a is the centripetal acceleration. Hence we have ωo = v/r = a/v, so
multiplying v2/2 by ωo gives the average Thomas precession rate ωT = va/2 in units of
rad/sec, which represents a frequency of νT = (v2/2)νo = va/(4π) cycles/sec.
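The two equivalent forms of this rate are easy to verify numerically (a sketch with assumed illustrative values of v and r, in units with c = 1 and distances in seconds):

```python
def thomas_rate(v, r):
    """Average Thomas precession for circular motion at speed v and radius r:
    omega_T = v*a/2 with a = v^2/r, which equals (v^2/2) times the orbital
    angular velocity v/r."""
    a = v**2 / r                  # centripetal acceleration, 1/sec
    omega_o = v / r               # orbital angular velocity, rad/sec
    omega_T = v * a / 2           # Thomas precession rate, rad/sec
    return omega_T, (v**2 / 2) * omega_o   # the two forms agree

print(thomas_rate(0.1, 1.0))   # e.g. v = 0.1 c, r = 1 light-second
```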
Since the magnitude πv2 of the Thomas precession is of the second order in v, we might
be tempted to think it is insignificant for ordinary terrestrial phenomena, but the
expression νT = (v2/2)νo shows that the precession frequency can be quite large in
absolute terms, even if v is small, provided νo is sufficiently large. This occurs when the
orbital radius r is very small, giving a very large acceleration for any given orbital
velocity. Consider, for example, the orbit of an electron around the nucleus of an atom.
An electron has intrinsic quantum "spin" which tends to maintain its absolute orientation
much as does a spinning gyroscope, so it can be regarded as a vector undergoing parallel
transport. Now, according to the original (naive) Bohr model, the classical orbit of an
electron around the nucleus is given by equating the Coulomb and centripetal forces

N e2/(4πε0r2) = m v2/r

where e is the charge of an electron, m is the mass, ε0 is the permittivity of the vacuum,
and N is the atomic number of the nucleus, so the linear and angular speeds of the
electron are
Bohr hypothesized that the angular momentum L = mvr can only be an integer multiple
of h/(2π), so we have for some positive integer n
Therefore, the linear velocity and orbital frequency of an electron (in this simplistic
model) are
where α = e2/(2hε0) is the dimensionless "fine structure constant", whose value is
approximately 1/137. (Remember that we are using units such that c = 1, so all distances
are expressed in units of seconds.) For the lowest energy state of a hydrogen atom we
have n = N = 1, so the linear speed of the electron is about 1/137. Consequently the
precession frequency is −(v2/2) = −0.00002664 times the orbital frequency, which is a
very small fraction, but it is still a very large frequency in absolute terms (about
1.755×1011 cycles/sec in magnitude) because the orbital frequency is so large. (Note that
these are not the frequencies of photons emitted from the atom, because those correspond
to quanta of light given off due to transitions from one energy level to another, whereas
these are the theoretical orbital frequencies of the electron itself in Bohr's simple model.)
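Putting numbers to this (a sketch assuming v = Nα/n, and taking an approximate Bohr orbital frequency of 6.58×10^15 cycles/sec for hydrogen's ground state as an assumed reference value):

```python
alpha = 1 / 137.036      # fine structure constant
n, N = 1, 1              # ground state of hydrogen
v = N * alpha / n        # electron speed in units of c

orbital_freq = 6.58e15   # Bohr orbital frequency, n = 1 hydrogen, cycles/sec (assumed)
fraction = v**2 / 2      # Thomas precession per orbital cycle, as a fraction

print(f"precession/orbital frequency ratio = {fraction:.4e}")              # ~2.66e-5
print(f"precession frequency ~ {fraction * orbital_freq:.3e} cycles/sec")  # ~1.75e11
```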
Incidentally, there is a magnetic interaction between the electron and nucleus of some
atoms that is predicted to cause the electron's spin axis to precess by +v2 radians per
orbital radian, but the actual observed precession rate of the spin axes of electrons in such
atoms is only +(v2/2). For a while after its discovery, there was no known explanation for
this discrepancy. Only in 1927 did Thomas point out that special relativity implies the
purely kinematic relativistic effect that now bears his name, which (as we've seen) yields
a precession of −(v2/2) radians per orbital radian. The sum of this purely kinematic effect
due to special relativity with the predicted effect due to the magnetic interaction yields
the total observed +(v2/2) precession rate.
It's often said that the relativistic effect supplies a "factor of 2" (i.e., divides by 2) to the
electron's precession rate. For example, Uhlenbeck wrote that
...when I first heard about [the Thomas precession], it seemed unbelievable that a
relativistic effect could give a factor of 2 instead of something of order v/c...
Even the cognoscenti of relativity theory (Einstein included!) were quite surprised.
(Uhlenbeck also told Pais that he didn't understand a word of Thomas's work when it first
came out.) However, this description is somewhat misleading, because (as we've seen)
the relativistic effect is actually additive, not multiplicative. It just so happens that this particular magnetic interaction yields a precession with twice the frequency of, and the opposite sign to, the Thomas precession, so the sum of the two effects is half the size of
the magnetic effect alone. Both of the effects are second-order in the linear speed v/c. |
823f77282656b996 | Do all quantum trails inevitably lead to Everett?
I’ve been thinking lately about quantum physics, a topic that seems to attract all sorts of crazy speculation and intense controversy, which seems inevitable. Quantum mechanics challenges our deepest-held, most cherished beliefs about how reality works. If you study the quantum world and you don’t come away deeply unsettled, then you simply haven’t properly engaged with it. (I originally wrote “understood” in the previous sentence instead of “engaged”, but the ghost of Richard Feynman reminded me that if you think you understand quantum mechanics, you don’t understand quantum mechanics.)
At the heart of the issue are facts such as that quantum particles operate as waves until someone “looks” at them, or more precisely, “measures” them, at which point they instantly begin behaving like particles with definite positions. There are other quantum properties, such as spin, which show similar dualities. Quantum objects in their pre-measurement states are referred to as being in a superposition. That superposition appears to instantly disappear when the measurement happens, with the object “choosing” a particular path, position, or state.
How do we know that the quantum objects are in this superposition before we look at them? Because in their superposition states, the spread-out parts interfere with each other. This is evident in the famous double slit experiment, where single particles, shot through the slits one at a time, interfere with themselves to produce the interference pattern that waves normally produce. If you’re not familiar with this experiment and its crazy implications, check out this video:
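(As a toy numerical illustration of the point, and not anything from the video or the post itself, here is a short sketch in which the two slits are treated as point sources of complex amplitude; the wavelength, slit separation, and screen distance are arbitrary values I picked. Adding the amplitudes before squaring gives fringes; adding the probabilities does not.)

```python
# Toy two-slit model: each slit contributes a unit-magnitude complex amplitude
# with a phase set by the path length to each point on the screen.
import numpy as np

wavelength = 633e-9    # illustrative wavelength, m
d          = 50e-6     # slit separation, m
L          = 1.0       # slit-to-screen distance, m
x = np.linspace(-0.05, 0.05, 2001)   # positions on the screen, m

k  = 2 * np.pi / wavelength
r1 = np.hypot(L, x - d / 2)          # path length from slit 1
r2 = np.hypot(L, x + d / 2)          # path length from slit 2

amp1 = np.exp(1j * k * r1)
amp2 = np.exp(1j * k * r2)

with_interference    = np.abs(amp1 + amp2)**2               # amplitudes add first
without_interference = np.abs(amp1)**2 + np.abs(amp2)**2    # probabilities add

print("with interference:    max %.2f  min %.2f" %
      (with_interference.max(), with_interference.min()))      # roughly 4 and 0
print("without interference: max %.2f  min %.2f" %
      (without_interference.max(), without_interference.min()))  # both 2
```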
So, what’s going on here? What happens when the superposition disappears? The mathematics of quantum theory are reportedly rock solid. From a straight calculation standpoint, physicists know what to do. Which leads many of them to decry any attempt to further explain what’s happening. The phrase, “shut up and calculate,” is often exclaimed to pesky students who want to understand what is happening. This seems to be the oldest and most widely accepted attitude toward quantum mechanics in physics.
From what I understand, the original Copenhagen Interpretation was very much an instrumental view of quantum physics. It decried any attempt to explore beyond the observations and mathematics as hopeless speculation. (I say “original” because there are a plethora of views under the Copenhagen label, and many of them make ontological assertions that the original formulation seemed to avoid, such as insisting that there is no other reality than what is described.)
Under this view, the wave of the quantum object evolves under the wave function, a mathematical construct. When a measurement is attempted, the wave function “collapses”, which is just a fancy way of saying it disappears. The superposition becomes a definite state.
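(To make that last step concrete, here is a minimal sketch of the standard Born-rule recipe for a single two-state system; the amplitudes and the random seed are arbitrary choices of mine, not anything specific to the Copenhagen view.)

```python
# A two-state superposition a|0> + b|1>; measurement picks one outcome with
# probability |amplitude|^2 and leaves the system in that definite state.
import numpy as np

rng = np.random.default_rng(0)

a, b = 0.6, 0.8j                       # normalized: |a|^2 + |b|^2 = 1
state = np.array([a, b])

probs = np.abs(state)**2               # Born rule: [0.36, 0.64]
outcome = rng.choice([0, 1], p=probs)  # one outcome is realized

collapsed = np.zeros(2, dtype=complex)
collapsed[outcome] = 1.0               # the superposition is replaced by a definite state

print("pre-measurement state :", state)
print("outcome probabilities :", probs)
print("observed outcome      :", outcome)
print("post-measurement state:", collapsed)
```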
What exactly causes the collapse? What does “measurement” or “observation” mean in this context? It isn’t interaction with just another quantum object. Molecules have been held in quantum superposition, including, as a recent experiment demonstrates, ones with thousands of atoms. For a molecule to hold together, chemical bonds have to form, and for the individual atoms to hold together, the components have to exchange bosons (photons, gluons, etc) with each other. All this happens and apparently fails to cause a collapse in otherwise isolated systems.
One proposal thrown out decades ago, which has long been a favorite of New Age spiritualists and similarly minded people, is that maybe consciousness causes the collapse. In other words, maybe it doesn’t happen until we look at it. However, most physicists don’t give this notion much weight. And the difficulty of engineering a quantum computer, which requires that a superposition be maintained to get its processing benefits, seems to show (to the great annoyance of engineers) that systems with no interaction with consciousness still experience collapse.
What appears to cause the collapse is interaction with the environment. But what exactly is “the environment”? For an atom in a molecule, the environment would be the rest of the molecule, but an isolated molecule seems capable of maintaining its superposition. How complex or vast does the interacting system need to be to cause the collapse? The Copenhagen Interpretation merely says a macroscopic object, such as a measuring apparatus, but that’s an imprecise term. At what point do we leave the microscopic realm and enter the classical macroscopic realm? Experiments that succeed at isolating ever larger macromolecules seem able to preserve the quantum superposition.
If we move beyond the Copenhagen Interpretation, we encounter propositions that maybe the collapse doesn’t really happen. The oldest of these is the deBroglie-Bohm Interpretation. In it, there is always a particle that is guided by a pilot wave. The pilot wave appears to disappear on measurement, but what’s really happening is that the wave decoheres, loses its coherence into the environment, causing the particle to behave like a freestanding particle.
The problem is that this interpretation is explicitly non-local in that destroying any part of the wave causes the whole thing to cease any effect on the particle. Non-locality, essentially action at a distance, is considered anathema in physics. (Although it’s often asserted that quantum entanglement makes it unavoidable.)
The most controversial proposition is that maybe the collapse never happens and that the superposition continues, spreading to other systems. The elegance of this interpretation is that it essentially allows the system to continue evolving according to the Schrödinger equation, the central equation in the mathematics of quantum mechanics. From an Occam’s razor standpoint, this looks promising.
Well, except for a pesky detail. We don’t observe the surrounding environment going into a superposition. After a measurement, the measuring apparatus and lab setup seem just as singular as they always have. But this is sloppy thinking. Under this proposition, the measuring apparatus and lab have gone into superposition. We don’t observe it because we ourselves have gone into superposition.
In other words, there’s a version of the measuring apparatus that measures the particle going one way, and a version that measures it going the other way. There’s a version of the scientist that sees the measurement one way, and another version of the scientist that sees it the other way. When they call their colleague to tell them about the results, the colleague goes into superposition. When they publish their results, the journal goes into superposition. When we read the paper, we go into superposition. The superposition spreads ever farther out into spacetime.
We don’t see interference between the branches of superpositions because the waves have decohered, lost their phase with each other. Brian Greene in The Hidden Reality points out that it may be possible in principle to measure some remnant interference from the decohered waves, but it would be extremely difficult. Another physicist compared it to trying to measure the effects of Jupiter’s gravity on a satellite orbiting the Earth: possible in principle but beyond the precision of our current instruments.
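(A rough sketch of why that leftover interference is so hard to see: in the usual decoherence picture, the interference term between two branches is weighted by the overlap of the environment states that have become entangled with each branch, and for a large environment that overlap is tiny. The random-state model below is only an illustration, with dimensions I made up.)

```python
# Two branches each drag along their own environment state; the visibility of
# interference between the branches scales with |<e1|e2>|, which shrinks as the
# environment grows (roughly like 1/sqrt(dimension) for random states).
import numpy as np

rng = np.random.default_rng(1)

def random_env_state(dim):
    """Return a random normalized complex vector of the given dimension."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

for dim in (2, 8, 64, 1024, 16384):
    e1 = random_env_state(dim)          # environment entangled with branch 1
    e2 = random_env_state(dim)          # environment entangled with branch 2
    visibility = abs(np.vdot(e1, e2))   # sets the size of the interference term
    print(f"environment dimension {dim:6d}: residual interference ~ {visibility:.5f}")
```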
Until that becomes possible, we have to consider each path as its own separate causal framework. Each quantum event expands the overall wave function of the universe, making each one its own separate branch of causality, in essence, its own separate universe or world, which is why this proposition is generally known as the Many Worlds Interpretation.
Which interpretation is reality? Obviously there are a lot more of them than I mentioned here, so this post is unavoidably narrow in its consideration. To me, the (instrumental) Copenhagen Interpretation has the benefit of being epistemically humble. Years ago, I was attracted to the deBroglie-Bohm Interpretation, but it has a lot of problems and is not well regarded by most physicists.
The Many Worlds Interpretation seems absurd, but we need to remember that the interpretation itself isn’t so much absurd, but its implications. Criticizing the interpretation because of those implications, as this Quanta Magazine piece does, seems unproductive, akin to criticizing general relativity because we don’t like the relativity of simultaneity, or evolution because we don’t like what it says about humanity’s place in nature.
With every experiment that increases the maximum observed size of quantum objects, it seems more likely to me that the whole universe is essentially quantum, and this interpretation seems more inevitable.
Now, it may be possible that Hugh Everett III, the originator of this interpretation, was right that the wave function never collapses, but that some other factor prevents the unseen parts of the post-measurement wave from actually being real. Referred to as the unreal version of the interpretation, this seems to be the position of a lot of physicists. Since we have no present way of testing the proposition as Brian Greene suggested, we can’t know.
From a scientific perspective then, it seems like the most responsible position is agnosticism. But from an emotional perspective, I have to admit that the elegance of spreading superpositions is appealing to me, even if I’m very aware that there’s no way to test the implications.
What do you think? Am I missing anything? Are there actual physics problems with the Many Worlds Interpretation that should disqualify it? Or other interpretations that we should be considering?
55 thoughts on “Do all quantum trails inevitably lead to Everett?”
1. There is no such thing as objectivity. Not all physicists dismiss some interaction with consciousness. Some very prominent physicists at least think it’s a plausible possibility. I love how folks want to dismiss any quantum connections to consciousness as mysticism. One has to wonder what would ever be enough evidence to at least get those who don’t want to give credence to at least say it’s plausible. They so easily accept Hawking’s many worlds and parallel universe hypotheses even though thus far no real method to test them exists either. But suggest that perhaps consciousness is more than the body and you’re a mystic. I like Penrose’s Biocentrism ideas. But I like others too, including perhaps it’s an illusion or we live in a real matrix. But I certainly don’t consider myself a mystic.
1. Well, I’m not as learned as you, so I would say what is the evidence for it not causing the collapse? Testable, verifiable evidence? Just like with several hypotheses in quantum physics, testable, repeatable and verifiable evidence is often out of reach. I’m not saying anything is absolutely true, but to dismiss a hypothesis out of hand when your preferred solution is just as untestable and unverifiable is not objective and not fair. In the end all of it might be wrong. If consciousness is merely something that fades after the death of the organism, so be it; nothing we can do, it’s natural law in that case. But until we know, if we can know, we should at least acknowledge some very smart physicists do indeed think consciousness may play a role. Roger Penrose is no crazy person, nor is Stuart Hameroff an uneducated loon. Those are just two people who have a wide variety of hypotheses that say it’s plausible that consciousness interacts with exotic quantum particles. Many point out the double slit experiment among other things as an example of what might be. Nobody knows what is as of yet, but to call legitimate scientists mystics for saying maybe is just unfair. In the end you and scientists like you might indeed be right, but until it’s proven please give all legitimate scientists the same respect you gave Hawking when he proposed String Theory and all the craziness, parallel worlds, many copies of me on parallel worlds, and all the other things I watched on The Scifi Channel, that come with it. Now some scientists are saying consciousness is an illusion, and that’s funny really. When you can’t solve it, saying it doesn’t exist solves the problem, only it doesn’t, because it does exist; only its nature is a mystery.
2. To the very limited extent that I understand the mathematics of decoherence, it does seem to make Everett the most natural interpretation. Why should orthogonal states just vanish when their effect on us diminishes? “Us” meaning the states of observers whose device registered a particle going through the left slit, for example, and “orthogonal ” meaning approximately orthogonal, to within some rounding error.
The fact that decoherence is in principle a smooth process, albeit a fast one, takes a lot of the sting out of the Many Worlds label. It’s kind of a misnomer. It would be equally fair to say there’s one world in Everett, but many superposed states that have extremely weak interactions.
A good resource is the wiki article on decoherence. Another is David Wallace, The Emergent Multiverse .
1. Thanks for the references. I agree on the wiki article. I’ll check out the Wallace one.
Good point about the label. The main reason I described MWI the way I did was to downplay the new universes thing. Dewitt reportedly used it as a selling tool, but I think it makes too many people dismiss it as outlandish without understanding what’s actually being proposed.
3. Nobody knows the source or nature of consciousness. There is evidence you remain conscious after the heart stops and blood flow to the brain ceases. For how long is still being examined. Previously this was not thought possible. Now some adjust their position, saying activity continues till clinical brain death. No one as of yet can provide evidence consciousness is not affecting quantum particles or the double slit experiment, because nobody knows the nature, origin, components or make up of consciousness. Hell, some just give up altogether and say it’s not real anyway, it’s an illusion. So all human beings are, and what they have accomplished over millions of years of evolution, is an illusion. Anyone who matter-of-factly claims they can prove consciousness is not affecting the quantum realm or vice versa knows they’re wrong. Nobody even knows what consciousness is composed of, let alone its origins, so they can’t say for sure one way or another. They can dismiss it as woo or mysticism, they can belittle those who at least say maybe, but, just like those who subjectively hope consciousness doesn’t die, they can’t prove anything one way or the other. I wouldn’t be so harsh if people didn’t disparage brilliant scientists like Penrose and others by calling it mysticism. No better way to disparage a scientist than to call his or her hypothesis mysticism. Nobody called Hawking a mystic when he hypothesized String Theory, which is a parallel worlds theory with absolutely no direct evidence of it being true. Honestly, parallel universes with my double in them sound pretty darn mystical to me.
4. Don’t confuse the scientific method with the actual scientists. Scientists are people, human beings, and like all human beings they are almost incapable of objectivity on their own. If you can pick it up, put it in a beaker, and test it using the Scientific Method, that’s objective. Supposedly if the math works that is a good sign it could be true, but even if the math works it still can be wrong. If you can’t pick it up and test it, it could be wrong. Quantum physics reaches out into a largely untestable area of science. In fact many well known scientists ponder aloud that maybe we have reached, or soon will reach, all we are capable of knowing, leaving infinite amounts of questions unanswered and unknowable.
1. Hi Matthew,
“…I would say what is the evidence for it not causing the collapse? Testable verifiable evidence? ”
I alluded to some in the post: the difficulty in constructing a quantum computer. Quantum computing’s unique value is being able to process possible paths in parallel, which requires maintaining a superposition as long as possible. However, long before any conscious entity becomes aware of what’s happening, the superposition decoheres. This is a serious challenge for QC. If it could be overcome simply by keeping conscious systems from seeing it, it likely would have been solved decades ago. As it is, many QC processors have to operate at near 0 Kelvin to minimize interaction with the environment and even that only keeps the qubit circuits in superposition for a very brief time.
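(To give a feel for how punishing this is in practice, here is a toy back-of-the-envelope sketch; the coherence time, the duration, and the assumption that qubits decohere independently are all made-up simplifications, not the specs of any real machine.)

```python
# Toy model: if each qubit independently keeps its coherence with probability
# exp(-t/T2), the chance that an n-qubit register is still fully coherent after
# time t falls off exponentially in n. Numbers below are purely illustrative.
import math

T2 = 100e-6    # assumed single-qubit coherence time: 100 microseconds
t  = 10e-6     # assumed duration of the computation: 10 microseconds

for n in (1, 10, 100, 1000):
    p_register = math.exp(-t / T2) ** n
    print(f"{n:5d} qubits: P(register still coherent) ~ {p_register:.3e}")
```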
“Nobody knows the source or nature of consciousness.”
I think neuroscience is making steady progress in understanding it. (See the posts in my Mind and AI category for why.) Of course, many people don’t like what’s being found, so the assertion that science is utterly helpless in this area remains a popular one.
“Don’t confuse the scientific method with the actual scientists.”
A crucial part of scientific methods (there isn’t just one) is guarding against human bias. It’s why results must be repeatable, transparent, and subject to peer review. In my experience, the ones that pass this test don’t affirm expansive conceptions of consciousness.
But as you note, there is no unique evidence for any one interpretation of quantum physics. It’s why I said that the responsible position is agnosticism on them. For now.
5. Maybe a little beside the point … Please forgive me.
As someone who could not even bother with elementary school and for several years has not been able to master English … he claims that scientists do not understand the basic processes of the universe.
Well, it can be said, it’s just a stupid Pole.
But I will not be giving hundreds of examples of scientific indolence. Only one.
Just what to think of the state of the scientific mind, when one of the most prominent minds, carries out such a thought experiment … whether it was just a joke or just a word of despair
Throw a book into the black hole. The book carries information. Perhaps that information is about physics, perhaps that information is the plot of a romance novel – it could be any kind of information. But as far as anyone knows, the outgoing Hawking radiation is the same no matter what went into the black hole. The information is apparently lost – where did it go?
Do we see one of the greatest idiocies of quantum physics?
Do we see how beautiful minds are stupidity?
Maybe just a stupid pole is dumber than it would seem?
1. Stan,
From what I understand, information lost to a black hole remains a problem that hasn’t been solved. I’ve read some speculation that maybe it’s smeared across the event horizon as a sort of hologram, which sounds like it could conceivably affect Hawking radiation, but it all sounds highly speculative.
One of the problems with physics today is that too much of the theoretical work happens far outside of testable conditions. On the one hand, this should be fine since we never know when such exploration might turn up something testable. But until it does, we have to be stringent in remembering that it’s informed speculation.
6. Mike.
Only this is not a problem with the information that carries the object that falls into a black hole.
This applies to the information that the object carries about itself.
It is known that information is the basis of the quantum universe.
1. Throw two stones into a black hole. On one we paint the US flag and on the second the flag of Poland.
Does such information mean something?
2. Now we will fire two cannonballs towards the black hole.
A stone ball from Poland and a ball of uranium from the US.
Is this the sense of information for quantum physics?
7. Mike.
If I didn’t believe in your wonderful reasoning … after all, I read your wise statements.
If something is to blame, it is my tragic English.
Besides, the scientists themselves, although they are so wonderful in quantum physics, admit that they absolutely have no idea why this works.
so I disappear… but not on twitter.
8. My problem with MWI is the same one many have: where do all those new realities come from? What does it suggest about matter and energy?
Tegmarkians can talk about how the square root of 4 is both +2 and -2, and no one worries about where the extra answer came from. But I don’t believe we live in a Tegmarkian universe.
There is also, to me, an issue of reality explosion: Wear a pair of polarizing sunglasses, and each photon that hits them has a chance of passing through or not. So each photon seems to be creating new realities. Billions and billions of new realities. Every instant.
MWI fans have said this doesn’t happen, but I’m not clear on why not.
I have played with the idea that what happens is that the standing wave of the universe becomes more complex with each possible branch such that all possible paths that could have been taken are part of that wave. But there’s only one actual reality that emerges from that wave.
I’ve never found the waveform collapse all that mysterious. A particle in flight is a vibration in the relevant particle field, the energy of that quanta is spread out in the wave. But for that energy to interact with, say, an electron in the wall it hits, that single spread out quanta “drains” into the contact point.
The mystery, if I understand it, has to do with what “selects” that contact point, and how does the energy of the wave “drain” into that point? We have no maths for that.
I suspect the contact point gets selected per the same mechanism that “selects” which atom of a radioactive sample decays next. Or as how the first bird of a flock decides to take to the air. Maybe it is literally random (which it seems to be).
I sure wish someone would discover something new. QFT and GR have been at loggerheads far too long.
1. I have to admit that I wonder about the energy aspect of this as well. If every part of the wave becomes a full particle in its own branch of the superposition, then how is the energy of that wave, and every other wave, not effectively magnified? My understanding is that we still don’t understand at a fundamental level how mass is generated. (The Higgs supposedly only explains a subset of it.) If the non-visible parts of the post-measurement wave aren’t real, then maybe that has something to do with it.
What’s interesting about the explosion of superpositions is that virtually all quantum events average out until the macroscopic deterministic world emerges. To me, that implies that most of the “universes” being generated are virtually identical. (There would have been far more divergence in the early instances of the big bang, when quantum events generated patterns that later grew into voids and galactic superclusters.) Today, it seems like it would only be the rare case of quantum indeterminacy “bleeding” through that would lead to divergences. It might be that most of the exploding superpositions end up converging back to one reality, or only a few of them. (I have no idea if the mathematics lend any credence whatsoever to this conjecture.) And I’ve read some variants of the interpretation that, instead of proliferating universes, it’s really just interacting ones.
That actually isn’t my understanding of what happens. As I understand it, the entire wave instantly disappears, replaced by the particle, even if the wave has been spread around and fragmented over vast distances, that there’s no timeline for it to drain. (Which admittedly also makes “collapse” a questionable word for the phenomenon.) That said, decoherence isn’t supposed to be instantaneous either, just very fast, so who knows.
Totally agreed that it would be good to see progress somewhere. I remember many physicists hoping the LHC would provide something, anything, unexpected so they’d have something to work with, but other than failing to confirm supersymmetry, most of what they’ve gotten just seemed to reaffirm the Standard Model.
Yeah, the mass of protons and neutrons, for example, comes mainly from the energy of the quark and gluon interactions, which means most of the mass from matter isn’t due to the Higgs.
Which is why I find it easier to think about in terms of energy, although I usually see mass and energy as two faces of the same thing.
“To me, that implies that most of the “universes” being generated are virtually identical.”
Which I think is how MWI fans respond to the question about sunglasses and photons. My question in return is how identical is “virtually” identical?
Remember Bradbury’s famous short story, A Sound of Thunder? Do worldlines converge and merge, or do even quantum differences ultimately diverge and result in separate realities?
A lot of MWI fans think Occam and parsimony support their position, but I (so far) see it the opposite. MWI doesn’t sound like the simple explanation, and the explosion problem defies parsimony.
But then I’m not sure I truly understand MWI, and I’ve gotten the impression a lot of its fans don’t really understand it, either. Plus, there seem to be multiple versions of the theory since Everett.
Greg Egan has a short story, The Infinite Assassin, in his collection, Axiomatic. It’s about an illegal drug that allows users to interact with parallel universes, which turns out to be a Very Bad Thing. What I really liked about the story was the sense of continuum Egan gives to parallel worlds.
One can’t help but wonder what makes them distinct.
Sean Carroll gave a talk about MWI (which I found unconvincing), and he had an experiment set up remotely that did a photon/half-silvered-mirror thing with two detectors. Through a phone app he was able to trigger the experiment and get a (random) result which he used to determine if he should jump to the left or to the right. (The right, in this case, IIRC.)
The claim was that this generated two realities accommodating his jumping both ways. Which generated two different audiences (and sets of video viewers) who remember him jumping both ways. Which led to this comment where I recall him jumping right. Presumably the alternate me remembers it differently.
But I keep wondering about those sunglasses and all the quantum interactions happening all the time. I’ve just never heard anything from MWI that gets me past this key objection.
Yes, agreed. (That’s why I quoted “drains” — best word I could think of but hardly adequate.) I think we’re on the same page here, I’m just trying to imagine an ontology that makes sense of “waveform collapse.”
I’ve been thinking about this a bit as I try to wrap my head around some of the strange variations of the two-slit thing. (Have you see the three-slit experiment? Mind-blowing!)
In a single photon event, the laser emits a “photon” with no location but a wave (with momentum) that expands from the laser into the surrounding environment. It’s a single quanta of energy causing a vibration in the EM field.
Now that energy has to go somewhere, and what we see happening is that waveform somehow interacting with some electron in some atom such that the electron is raised to a new energy level. At that point, the photon does have a location (and presumably we can no longer talk about its momentum).
That interaction requires the full energy of the quanta, so the energy in the field “goes” (or “drains” or some better word) into that interaction.
But this is just me pondering the “waveform collapse” issue and WAG-ing at an ontology.
“I remember many physicists hoping the LHC would provide something, anything,”
Yeah, and now it’s shut down for two years for an upgrade. You’d think not finding SUSY at all would take the wind out of certain sails, but they just keep redefining the target. Part of the problem is that String Theory seems to need it, so no SUSY threatens ST.
There’s also that chart you’ve probably seen showing how the three forces unify at very high energies? Those curves intersect at the same point only if SUSY is true. Without SUSY, they don’t.
So it’s a dream that’s hard to kill.
There was some hope of seeing something new in very esoteric sectors involving (IIRC) weak decay. I can’t recall what it was exactly, and no one is jumping up and down, so whatever they saw may have not survived more analysis. They were seeing bumps in both CMS and ATLAS, I think, and combining the two bumps gave them a nice sigma, but the data weren’t compatible so combining them didn’t really say anything.
Or something like that.
Merry Christmas!
1. “My question in return is how identical is “virtually” identical?”
My conception is that normal events, such as all the deterministic events we see in nature where the quantum events average out, don’t create deviations. It’s only when we tie a macroscopic event to a specific quantum outcome, that a notable divergence happens. As you note, even a minor “meaningless” macroscopic event (such as which way Carroll jumped) might eventually butterfly into major changes.
Of course, we can’t rule out the possibility that quantum indeterminacy doesn’t “bleed” into the macroscopic world outside the precision of our instruments and butterfly all on its own, so the idea of similar universes may not be tenable.
There are definitely lots of versions in the Everettian family of interpretations. One I recently heard about on the Rationally Speaking podcast was relational quantum mechanics, which posits that whether a wave has decohered is relative to an observer. In other words, like the relativity of simultaneity in Einstein’s theories, this holds that where you are in the sequence of events determines when you see the collapse. Schrödinger’s cat sees the collapse as soon as the detection device is triggered, but Schrödinger himself doesn’t see it until he opens the box. However, the relational interpretation is reportedly agnostic about the reality of the other outcomes. (It doesn’t seem agnostic to me, but I probably don’t grasp the full idea.)
I need to look up that Egan story. It sounds interesting.
Ah, ok, I missed the quotes on “drain.” Thanks for the description of the photon. Part of what I find interesting about this is that the electrons are presumably constantly exchanging photons with each other and the nucleus, but despite that exhibit quantum waveness to those of us outside the relationship, which makes me think of the relational interpretation again.
I don’t think I knew that uniting all three forces required SUSY. Interesting. I know the weak and electromagnetic ones were already shown to be the same. (Which strikes me as an odd pair.)
All in all, I think I’m happy I’m not a physicist right now.
Merry Christmas!
1. “It’s only when we tie a macroscopic event to a specific quantum outcome, that a notable divergence happens.”
That matches what I’ve heard from MWI fans, but it seems to suffer the same micro/macro issues as many quantum things do. What is a “notable divergence” and what happens? Reality doesn’t diverge at all (why not?), or the diverged lines merge into one (again, why?).
That Egan story is good at pointing out how, if we take MWI at face value, our own reality is a fuzzy continuum of indistinguishable nearby realities. At what point am “I” no longer really me?
Chaos theory suggests (to me) that even minute differences may result in large changes down the road. What if, butterfly fashion, a photon that did pass through my sunglasses accounts for some minute change that ultimately destroys Saturn?
I’ve long wanted to sit down with a working theoretical physicist who’s really into, has really studied, MWI, because I’d like to understand how people like Sean Carroll identify MWI as their preferred interpretation. Some even say it’s the most glaringly obvious interpretation!
Doesn’t part of that thinking also come up in Copenhagen? The idea that the cat isn’t superposed to itself, but is to the scientist who hasn’t opened the box. Likewise, the science writer standing outside the lab is superposed until the scientist informs them of the result. And millions of readers are superposed until they read the writer’s article. (And everyone in Andromeda remains superposed probably forever.)
I’m not sure I believe in the idea of macro objects being superposed. What does it mean to suggest I’m superposed? Can experiments demonstrate it? Or is it just that I lack knowledge?
Ugh. We really need some advances in HE physics. We’re just grasping in the dark here.
I think at least some of that is accounted for in the difference between virtual photons and actual photons. I’ve seen some physics videos recently emphasizing the difference between them and how you can’t treat virtual photons as real — they’re almost an accounting device, although obviously something physical is going on. Lamb shift and so forth.
Same here! Electro-weak theory. (And the weak force is the one many books hand-wave on that “has something to do with radioactive decay” … yeah, and making the sun work, too!)
It sure made it seem like unification was a thing though, didn’t it. If two things as seemingly different as EM and weak force are unified, why not the strong force?
Again, we need more information! We don’t even really know if gravity is a force!
2. “At what point am “I” no longer really me?”
Michael and I discussed this as well somewhere else on this thread. It seems like reality likes ruining our clean little categories, such as what is life or non-life (see prions or viriods), what is the border between species (some members of species A can mate with species B, but others can’t), what is computation, or what is a planet. It won’t surprise me too much if it scrambles our ideas of the self.
I told you to stop playing with those glasses Wyrd! Now look at what you’ve done. Who’s going to clean up this mess? We’ve got Saturn all over everything! 🙂
I recently went back and read Sean Carroll’s blog post on the MWI. I’m not sure his instincts on explaining it are the best. He tends to emphasize the multiple universes thing, which I think is a mistake.
Paul Torek above recommended David Wallace’s ‘The Emergent Multiverse’, which I’m thinking about picking up. It looks pretty good in the preview. My only pause is it’s pricey. Of course I’ve often spent more on neuroscience books. I just have to decide if I’m interested enough and willing to invest the work it would require.
I can see why people say the MWI is the most straightforward interpretation though. It does explain a lot. I see it as a candidate for reality. The only question is whether the implications of it in any way falsify it. But as I commented on Carroll’s post, that’s the problem with these interpretations. None of them are uniquely testable.
“— they’re almost an accounting device, although obviously something physical is going on. ”
Didn’t quantum physics start with Max Planck introducing the quantum purely as an accounting device? There was a similar disclaimer on Copernicus’ book. It seems like a lot of physics starts with someone saying, “Don’t worry, this is only for calculating convenience. It’s not like it’s real or anything.”
“Again, we need more information! We don’t even really know if gravity is a force!”
Totally agreed on needing more information. Although wouldn’t you say we know gravity is a force? Or did you mean if it’s a force like the others in the Standard Model, with bosons (gravitons) and the like?
3. “It won’t surprise me too much if it scrambles our ideas of the self.”
Yeah. The more I learn and think about “the self” the more complex and puzzling it seems.
“[MWI] does explain a lot.”
That I do realize. I’m confounded by the whole multiple universes thing; that’s pretty much the entire stick in my craw.
I vaguely remember reading that Sean Carroll post. Think I’ll go back and re-read it this evening.
The Wallace book sounds kinda interesting… once I read about it. The title put me off, because while I’m open-minded-but-skeptical on MWI, I’m disbelieving (and disinterested) in multiverse theories. I found an online review of the Wallace book that sounds like another read for this evening.
“Didn’t quantum physics start with Max Planck introducing a quanta purely as an accounting device?”
Ha, yes, good point!
“Or did you mean if [gravity is] a force like the others in the Standard Model, with bosons (gravitons) and the like?”
Exactly. I want GR to be essentially correct with some minor correction to accommodate quantum, and I want QFT to turn out to be essentially epicycles — a theory that matches our instruments but is seriously wrong in some key regard.
We know matter/energy is quantized, but the jury is out on time/space. I want them to be smooth (providing yet another duality to reality). And that gravity is due to warped spacetime and there is no such thing as a graviton.
My spacetime wishlist. 😀
4. Wow, that review is 19 pages long. I thought I might sneak a quick read before responding, but I think I’ll just add it to my queue too. Thanks for linking to it!
On GR and QM, I don’t really have preferences on which one wins (assuming they both don’t eventually have to be heavily modified). If spacetime does appear to be smooth, I wonder if we could ever be sure it wasn’t quantized at a size below the level of precision of whatever we were using to measure it.
And an infinitely divisible spacetime seems like it would come with its own potential multiverses. If the space between elementary particles is infinitely divisible, it allows patterns to exist there below our notice, such as entire micro-universes. And entire other universes could have been born, existed, and died in the Planck time at the beginning of the big bang. For that matter, an infinity of universes might have existed during the time you read this reply. (Don’t hit me.)
5. I gave up (for now) on that review once I got to the discussion section. They were a little too glowing in their assessment for me to trust, and there was already a bit of a “yelling at the screen” thing going on here on the material they mentioned to that point.
The book does sound interesting, though. I found myself wondering if Wallace explains some of the stuff that was making me yell.
Continuous spacetime does seem to have the same weird issues the real numbers have. Maybe matter/energy being quantized saves the day?
While space might be infinitely small, matter isn’t, so no micro-galaxies hiding in the dust motes. Quantum limits on energy might also affect the minimum time it takes anything to happen (like c limits causality).
The question might be whether we can trust scale. Atoms have sizes due to their properties, so maybe certain things can only happen on certain scales. (And we use atomic vibrations to define the second.)
Or maybe they’ll find a graviton (or a chronon), and that will end the matter. But until then… well, just say that I look at GR and think, yes, that makes sense, but look at QFT and think, wait, what?!
Obviously the universe is under no obligation to fulfill my sense of how it ought to behave (oh, if only). 🙂
6. “While space might be infinitely small, matter isn’t, so no micro-galaxies hiding in the dust motes.”
I actually wasn’t thinking the micro-universe patterns would be made of any matter/energy as we understand it, but something else, something we never see because it exists too far below the scales we can detect. Call it Mini-Me matter which could have it own smaller Mini-me quanta sizes. Of course, between Mini-Me matter might be Mini-mini-Me matter, and so forth and so on. Turtles all the way down.
Or if in fact there is only the matter/energy we’re familiar with, that means an infinite emptiness between every occurrence of it, which would itself be profound.
7. Yes, as profound as the next real number after zero!
Talk about macro objects in superposition… I’m totally superposed on the real numbers being, in fact, real or, as sure seems sometimes, a fabrication of our imagination.
The thing is: how real is a circle, its diameter, and their ratio? If they are real, so is pi.
9. I don’t get the whole ‘measuring changes a quantum particle’s behavior’ thing. And by ‘not get’ I mean it seems like it doesn’t work, or is a simplification that lost important details on the way. For example, if ‘measuring’ changes the quantum particles, then at what distance can you measure them? Any distance? If so wow, you’ve invented an instantaneous communication device that’s…faster than light. Nice. Or if the distance actually matters, then ‘measure’ is a term that is a heuristic and lacks the actual details, like what distances are involved and where does the effect run out?
1. You’re totally right not to get it. “Measurement” or “observation” is a maddeningly vague aspect of this. It reflects the lived experiences of scientists running experiments on quantum phenomena. Niels Bohr reportedly insisted that the description of this be limited to “ordinary” language, presumably because any attempt at a more precise description would imply knowledge we don’t really have.
It’s called “the measurement problem,” and it’s at the heart of the absurd nature of quantum mechanics. Attempts to solve it have led people down all kinds of bizarre paths.
I sometimes think QM represents the limits of our reality, where that reality emerges from some other underlying meta-reality. It might be that any “interpretation” is simply a vain attempt to map that meta-reality back into our little parochial reality. As patterns in and of the parochial reality, we simply may not be equipped to understand the wider meta-reality.
1. FWIW, I see “measurement” as anything that resolves superposition. For me, the cat was always (obviously) either alive or dead, because the detector monitoring the radioactive sample is the measurement. There is no superposition; there is only a lack of knowledge about the cat.
10. Excellent post, Mike. I enjoy mulling these quantum conundrums around. I am left feeling like an extremely poor sommelier of ideas–I get hints of different flavors but… really I have no idea what I’m tasting. It’s just really, really complex and intriguing. My own opinion is that we just don’t really know what we’re studying, and that at some point there will be a breakthrough in our conception of what reality actually is that will assist us in fitting the pieces of the puzzle we’ve found so far into a more insightful framework. As an example, I think our notions of physical and non-physical have pretty much broken down, and we have only vague ideas as to what consciousness might be, most of them extremely myopic, so that we’re in the position of using pretty poor tools for the job.
Just as one example, in that Quanta article to which you linked, Brian Greene suggests that each copy of you in the MWI is really you, and that the true you is the sum total of these you’s. Something like that. When a scientist says that a “self” might be a superposition of conscious selves occupying subtly related windows of reality, it’s an interesting idea to some folks and frowned upon by others–while when the classic New Age book Seth Speaks posits the same notion it is deemed woo woo foo foo to that crowd, but accepted by the other. This is, in a sense, what I mean about once clear concepts and divisions breaking down. So my own feeling is everyone’s a little bit right, and the answer is somehow a superposition of a great many ideas out there… 🙂
I don’t suspect a ton of physicists are lining up to endorse Brian Greene’s idea of the self. I have no idea, actually. But it’s always interesting to me when these parallels emerge. I think it’s safe to say whatever “models” or “conceptual frameworks” we use to try and organize our phenomenal observations are all wanting right now. What I dislike about the Copenhagen Interpretation is that it seems like a consequential moment in defining the purpose of science, one which sets aside questions about what the universe really is and accepts as complete the descriptions of what it does. For me, science is much less interesting when only one of the two questions remains in play…
Happy Holidays, Mike!
1. Thanks Michael, and great hearing from you! Your comments are always thought provoking.
On Brian Greene’s notion of the self spanning multiple copies, I think, much like the notion of additional selves that originate from the idea of mind uploading, it’s a matter of philosophy, in other words, not a fact of the matter, but a personal choice. In both cases, the issue gets blurred as the copies get farther and farther away from the original.
For example, is someone born with my exact genetics, but due to an early quantum branching, lived a radically different life, still me? What about someone who branched away from me before I became a skeptic? Or even before I became interested in science? Or someone who branched away before I broke up with one of my old girlfriends, but instead married her and proceeded to have a large family?
My attitude is that these would all be a sort of sibling, albeit in the case of recent copies, far closer to me than any brother or sister. The only way I might be tempted to ever consider them to be me is if we could somehow share memories, but even then I’d expect differences to arise based on the order in which the various copies received the different memories.
On the Copenhagen Interpretation, I can understand not liking its inherent instrumentalism. I totally agree it’s a lot more inspirational to think of science as the pursuit of truth. The pursuit of models that accurately predict future observations…just doesn’t have the same inspirational resonance.
On the other hand, maybe the idea that the pursuit of truth is anything other than the pursuit of predictive models is an illusion. The real dividing line is whether we want to get into models that make predictions we can’t test. The Copenhagen Interpretation (apparently heavily influenced by the logical positivism in vogue during its formulation), labels that as undesirable.
I think by calling these models that go beyond the mathematics of quantum mechanics “interpretations”, physics has found a way to have its cake and eat it too. It allows us to label the predictive aspects of QM as settled science, but keep trying to figure out what it means.
Although as I’ve noted to you before, and as I did to Callan above, I sometimes wonder if quantum phenomena isn’t right at the edge of the reality we, as a subset of that reality, have any ability to make sense of. It might be a hole we can navigate around mathematically, but can never enter. (Although I hope we never stop trying.)
Happy Holidays to you too Michael!
11. I have some strong opinions about this issue, and have been meaning to bring this up with Sabine Hossenfelder over at So far I’ve been too shy however. This is a woman who I absolutely love! She’d like to help “fix” a physics community that seems to have gotten “lost in the math”. Similarly I’d like to help a science community that attempts to function without generally accepted principles of metaphysics, epistemology, and axiology (or the three elements of “philosophy”). Perhaps if I feel that I’m able to develop my QM ideas here well enough, then I’ll become confident enough to speak with her about this over there some time? Well maybe.
Rather than get caught up in all sorts of higher speculation initially, I like to begin with QM basics. We humans perceive matter in terms of “particles” and in terms of “waves”. Are such perceptions good enough? Apparently they are not. When we try to pin down the exact state of a particle we’re confounded with wave like characteristics. Then when we try to pin down the exact state of a wave we’re confounded by particle like characteristics. So it should instead be better to consider matter to function as both. But apparently we can’t measure matter as some kind of hybrid of the two. Therefore it makes sense to me that we’d witness fundamental uncertainty as expressed by Heisenberg’s uncertainty principle, or an inequality that references Planck’s constant.
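(For what it’s worth, that inequality can be checked numerically. The sketch below, with a grid and packet width I picked arbitrarily, builds a Gaussian wave packet and confirms that the product of its position and momentum spreads sits essentially at the lower bound ħ/2.)

```python
# Numerical check of the Heisenberg bound dx*dp >= hbar/2 for a Gaussian packet.
import numpy as np

hbar  = 1.054571817e-34
sigma = 1e-10                                     # chosen packet width, m
x  = np.linspace(-20 * sigma, 20 * sigma, 2**14)
dx = x[1] - x[0]

psi = np.exp(-x**2 / (4 * sigma**2))              # Gaussian wave function
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize

prob_x  = np.abs(psi)**2
delta_x = np.sqrt(np.sum(x**2 * prob_x) * dx)     # position spread (mean is 0)

p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)   # momentum grid
prob_p = np.abs(np.fft.fft(psi))**2
prob_p = prob_p / np.sum(prob_p)
delta_p = np.sqrt(np.sum(p**2 * prob_p))          # momentum spread (mean is 0)

print(f"dx * dp = {delta_x * delta_p:.4e}")
print(f"hbar/2  = {hbar / 2:.4e}")                # the two nearly coincide
```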
So to me there isn’t too much to worry about here. If we must measure particles in one way and waves in another way, though matter ultimately functions as neither but both, then we should expect to be confounded by more exacting measurements in either regard. Given the circumstances, is this not logical?
For example, let’s say that we find a material that’s similar to both rock and wood. So if we assess it as a kind of rock then the harder we look at it from this perspective, the more confounding this stuff should seem to us. Or the same could be said if we assess it as a kind of wood. So that’s essentially what I’m saying is happening with our assessments of matter. If it’s effectively “particle-wave”, though we can only provide measurements in one way or the other, then we should naturally fail as our measurements become more precise. Thus I’m good with quantum mechanics as I understand it. Apparently we’re too stupid or whatever to understand what’s going on.
The controversy however seems to be that most physicists (unlike Einstein) haven’t been content settling for such human epistemic failure. So apparently they’ve decided that no, it’s not that we’re trying to measure something as particle or wave that’s neither. Instead it must be that the uncertainty associated with either variety of measurement reflects an ontological uncertainty which exists in nature itself! So the argument is not that we’re stupid, but rather that nature itself functions outside the bounds of causality, or thus nature functions “stupidly”.
It could be that this view is entirely correct, but what irks me here is that these physicists also refuse to admit that they thus forfeit their naturalism. Apparently they want to call themselves naturalists, but interpreting QM such that nature functions without causality — well that ain’t natural!
It’s the borderlands of science, such as here, brain study, and so on, that seem most in need of effective principle of philosophy. For this issue I offer my single principle of metaphysics. It reads:
To the extent that causality fails, there’s nothing to figure out anyway.
Unless I’m missing something this “Many worlds” interpretation appears in violation. I interpret it as physicists deciding that reality functions without causality (or “magically”), and then attempt to make sense of this anyway by theorizing “many worlds”. The more that we leave the bounds of causality behind, or thus introduce magical function, explanations should grow obsolete. From here reality should just be what it is. So I consider these sorts of interpretations of quantum mechanics to illustrate category error.
1. A lot of your criticism seems aimed at the more ontological versions of the Copenhagen Interpretation, the ones that say that not only are we faced with an epistemic limit, but that there’s nothing else there, that reality isn’t set until the measurement. That’s usually the version of the CI that critics inveigh against, and I agree with that criticism. The ontological versions of the CI seem excessively pessimistic.
I think Niels Bohr’s version of the CI was closer to your sentiment. Here are the observations, and here is the mathematics that can make predictions about those observations, with limitations, but within those limitations the predictions are accurate enough to build technologies on top of them, so, “shut up and calculate!” I’ve grown to respect this view more as I’ve continued to learn about quantum physics. It’s not satisfying, but it’s at least epistemically humble.
But I think an MWI enthusiast would respond to you that their interpretation does restore determinism. Unfortunately, it’s determinism for reality overall, not a determinism we can observe. Which of course raises the question, if something is deterministic but not deterministic from any observer’s perspective, is that really deterministic? Who is it deterministic for?
One question I’d have for you is, how do you define naturalism? Is that definition mutable on new evidence? Myself, if I encounter phenomena that doesn’t meet my understanding of naturalism, I would still want to understand the phenomena as much as I could. But naturalism for me is just a set of working assumptions, ones subject to being adjusted as I learn more.
2. If I may interrupt, two quick thoughts:
Firstly, I’m also a big fan of Sabine’s blog, been reading it for years. I highly recommend it. (Peter Woit also has a good blog.)
Secondly, just as (and I very much agree) physicists benefit from philosophy, philosophers can benefit from looking into some of the math involved. Quantum physics is highly mathematical, and the wave-particle duality confusion is, at least in part, a failure of language. At the math level, the confusion essentially goes away.
The way it’s usually put is that matter (as in particles) is something outside our direct experience that has wave-like properties and particle-like properties depending on what aspect of the particle one tests.
3. Wyrd,
I was hoping to hear from you most of all! Perhaps on some level I mentioned Sabine because I recall you mentioning her another time? Anyway it was late 2015 that I became interested in her. Massimo Pigliucci had blogged about her position from a Munich physics conference that he attended.
On philosophers benefiting from math and physics, I certainly agree. I was initially most interested in philosophy as a university student, but didn’t want to become acclimated to accept no generally accepted agreements in the field. And beyond questions what could they teach me without generally accepted positions? Mental and behavioral sciences were next, though I found them far too speculative for comfort. So I looked for a field that could teach me how to learn. Yes physics! But alas, my own mind would not get me through upper division courses. I eventually earned a degree in economics, which I chose somewhat because it corresponded with my own amoral theory of value.
I didn’t mean to imply that modern physicists would improve if they were to become versed in modern philosophy. I actually believe that the field has tremendous problems, though needs improvement in order to better found science.
Regarding language, that’s one of my own main themes. So QM interpretations work pretty well mathematically? But I suppose that natural language explanations are needed most. Mathematics is many orders less descriptive than English. Notice that there’s nothing in mathematics which can’t be described in English, and yet much in English can’t be described in mathematics. Still the English interpretation of the mathematical QM interpretation that you’ve provided seems pretty close to mine.
It’s good to hear that you oppose the ontological version of the Copenhagen Interpretation. Actually I was under the impression that Bohr’s interpretation was more ontological, though perhaps not. Did he ever support Einstein’s “I, at any rate, am convinced that He [God] does not throw dice.”? (Though in practice I support Einstein about that, my own metaphysics is a bit more pragmatic. It’s more like “To the extent that God throws dice, nothing exists to figure out anyway!”)
If Many Worlds enthusiasts are truly causal determinists, then tell me this. Do you think their position holds that all of these worlds actually exist? As in ontologically exist? As a solipsist I can stomach all sorts of crazy notions from a supernatural premise. But in a causal sense that position seems utterly ridiculous. Conversely, if these many worlders are simply going epistemological with their position, as in “It can be helpful for us to think about QM this way…”, then I could give their position some reasonable consideration.
Yep Mike, it’s deterministic. Who for? All that exists. Once again, I’m a solipsist. Reality is reality regardless of the human’s various idiotic notions.
I define naturalism as a belief that reality functions causally in the end. This definition is a definition, and therefore isn’t mutable to new evidence. Even if I ultimately decide that reality does not function causally, I should still consider this to be a useful definition. Here I’d either be a supernaturalist, or a hypocrite that changes my definition in order to call myself a naturalist.
I understand the desire to understand. This seems quite human and adaptive. Even the most faithful god fearing person should need to use reason in his or her life in order to get along. But to the extent that causality fails, as in ontological interpretations of the uncertainty associated with Heisenberg’s principle, things should not exist to figure out anyway.
1. Eric,
Bohr very much did not support Einstein in his statement about God not playing dice. His response was along the lines of, “Einstein, don’t tell God what to do.” Honestly, while I think his and Heisenberg’s initial strategy was more epistemic, more instrumental, I do get the impression that they crossed the line in later debates. But it’s the instrumental version that I think remains useful.
“Do you think their position holds that all of these worlds actually exist? As in ontologically exist?”
It depends on which ones you talk to. Some are agnostic about whether the other wave function branches continue to exist. Others feel they don’t. But the most vocal proponents tend to think they do exist.
As I mentioned to Wyrd, it’s an old trick in physics to introduce something but then say, “Don’t panic, this is just a useful accounting gimmick. It’s not like this crazy thing is real or anything.” This has been particularly true for quantum mechanics. Max Planck originally introduced quanta purely to make his calculations work. I suspect some Everettians take this tack to side step the ontological debates. The thing is, many things that are mathematically convenient go on to become ontological necessity.
“Reality is reality regardless of the human’s various idiotic notions.”
That may be true, but how do we know whether we know reality? I think the only answer is whether our predictions are accurate. Of course, QM can’t predict a single quantum event, only the probabilities of certain outcomes. But as the numbers of events climb, those probabilities average out to solid predictions.
Given the above, whatever QM is, it has to be isomorphic with reality in some way, otherwise those predictions would fail. As Wyrd mentioned, this may only be in the sense that epicycles were useful in Ptolemaic cosmology. (Interestingly, epicycles today remain useful as an observational perspective, despite the fact that we know they’re an illusion.)
4. Mike,
If it’s the case that Bohr and Heisenberg began with a responsible epistemological position for their Copenhagen Interpretation, then why would they escalate it to ontology? Might I suggest a bit of jealousy? Even then Einstein was “the great one”. How wonderful it would feel to one-up him! But perhaps Einstein should mainly be blamed for selfishly not realizing that a responsible epistemological position had actually been presented, and so he chose to interpret their interpretation ontologically? Notice that “God doesn’t play dice” is an ontological claim. If he used this to counter the CI then he effectively goaded them into an irresponsible ontological position. And apparently they not only accepted, but used it to kick his ass! Today in popular media, and even among physicists, it’s thought that Einstein really blew it regarding QM.
I account for this incident through a far larger structural problem. Notice that we’re asking physicists to do physics, though without providing them with any effective rules of metaphysics or epistemology to work from. Thus we should need a community of professionals armed with generally accepted rules from which to guide the function of science. Notice that the field of philosophy today has the flavor of “art and culture” rather than “science” to it. I’m not saying that this needs to change however. I’m saying that a new community of professionals must emerge that has a single mission — to straighten out science by means of its own accepted principles of metaphysics, epistemology, and axiology.
And what specifically do I propose to fix this particular mess? I’d mandate that the authors of any given position clearly state whether their proposal is theorized to just be “useful” (epistemology), or to also be “real” (ontology). Then as for those ambitious theorists that insist upon proposing an ontology regarding QM, there would be my single principle of metaphysics to contend with. Theorizing that any given bit of reality is not causally determined to occur exactly as it does occur, takes the theorist beyond the bounds of naturalism. Here there can be nothing to explain because without causal dynamics, no explanation will thus exist. This is the realm of magic. And I’m not saying that this doesn’t effectively occur. I’m saying that the position of Einstein and I, conversely, happens to be “natural”.
Well yes today, though once we have a community of professionals that’s able to effectively regulate the function of science through proven principles, there should only be “epistemic necessity”.
The only reality that I “know” exists, is that I exist in some form or other. If you’re conscious then you could say the same about yourself. And I consider it quite special to be able to truly know even that. Conversely my computer shouldn’t know that it exists (if it does exist), let alone anything else.
I consider quantum mechanics to mark an incredible human achievement, though epistemologically rather than ontologically. And I do believe that it’s isomorphic with reality. But if any associated dynamic is not causally determined to occur exactly as it does occur, or “ontological uncertainty”, then the theory should effectively describe the function of magic.
But wait a minute, as I define it no explanation can exist to describe non-causal function, or magic. Right… So the effectiveness of QM theory suggests that all associated dynamics must be causally determined to occur exactly as they do occur. You’re not going to like that bit of circularity! I’ll remind you however that we’re measuring particles and waves here, though apparently matter functions as something associated but different.
1. Eric,
I don’t know if you remember, but I actually think the distinction between instrumentalism and scientific-realism is a false dichotomy. We never have access to reality. We only ever have theories, predictive models about that reality. The “real” is only another more primally felt model. In the end, all we have are the models.
(This actually includes our model of self, as counter-intuitive as that sounds. Psychology has shown that access to our own mind is subject to just as many limitations as the information we get from the outside world.)
The only real distinction is between predictions that are testable and those that aren’t. The ones that are testable, and which have been demonstrated to have some level of accuracy, are “right” to whatever level they meet. But predictions that haven’t or can’t be tested should be regarded as speculative to varying degrees.
An untested or untestable prediction which is tightly bound to a tested prediction has a higher chance of eventually being shown to be accurate. But the more steps beyond observation to get to the prediction, the shakier the ground it rests on.
Under this guideline, the successfully tested predictions we have are the evolution of the wave function according to the Schrodinger equation, until information about it leaks into the environment, then we have the more definite state (position of the particle), etc. This is the instrumental Copenhagen Interpretation.
Everything else: assertions that the Copenhagen Interpretation is the only reality, pilot waves, spreading superpositions continuing under the Schrodinger equation, etc, have to be viewed as speculation, at least until someone can figure out some way to test them.
Still, speculation is fun, and should be fine as long as we acknowledge what we’re doing.
5. Mike,
Well it sounds like we’re generally on the same page with that, though I wouldn’t refer to the distinction between instrumentalism and scientific–realism as a false dichotomy. Even if science only ever has models, we of course need words such as “real” which reference what actually exists beyond our models. And if some of these MWI’ers have decided that the lack of certainty in our measurements mandates “many worlds” in truth rather than simply as an accounting heuristic, then this would seem to be a wonderful example of “scientific realism”. This also strikes me as “the tail wagging the dog”.
Furthermore I don’t mind going ontological myself in some ways. I happen to believe that “God doesn’t throw dice”, which is to say I believe in absolute causality regardless of what we humans are able to figure out. Perhaps a reasonable name for this position would be “extreme naturalist”? So then what shall a person be called who makes the ontological claim that some things under a QM framework aren’t causally determined to occur exactly as they do occur? “Super-naturalist” seems over the top, and so does “quasi-naturalist”. So I’ll just go with straight “naturalist”, but in addition note that from this distinction “spooky stuff” does ontologically occur in some capacity.
Then there is my logical proposition from last time. My metaphysics holds that if something functions without causality, then nothing exists here to even theoretically figure out. Why? Because it’s the causality that would found any ontological explanation for any given event. The causality would be the vital element regardless of any potential understanding — nothing would otherwise exist to even look for.
I’m fine with how the QM probability distribution produces a macroscopic world which seems to function causally. But how can it be possible for something that is not perfectly caused to do whatever it does, to in the end become a causal constituent for a causal realm? I see that as a contradiction. Non-causal function, where by definition nothing exists to potentially figure out, should have no potential to produce causal function. (I suspect that there’s a simple way for this to be illustrated mathematically.) Thus if we notice that quantum function does produce causal function, then from here it must only be possible that all elements of quantum function occur causally in the end, and even if things continue to seem random to us humans.
Yes speculation is fun! Furthermore once science has better rules from which to work, it should also become more productive than today. (I see you’ve now put up a post on Sean Carroll. Sweet!)
1. Interesting observation Steve! I’ve noticed a couple of interpretations for the Law of Large Numbers. One is that with enough trials, all sorts of implausible things eventually occur. The other seems more relevant however. It’s that the more times that you run a given experiment, the more statistically verified a given result will be. It’s essentially that all of these “random” results end up building a stronger and stronger case for a given figure. Is that what you meant?
I can see how it seems appropriate to apply this principle to quantum mechanics given that we’re discussing probability distributions for matter rather than exact states of being. But then again, my sense is that the LLN was set up to address everyday causal events rather than quantum events that are theorized to not function causally. Does it address quantum strangeness as well? Have you found an infinitely better challenge to Einstein than the utterly pathetic “Don’t tell God what to do”? Is this a true answer, as in “God’s dice create order”? This deserves some academic consideration!
I’d be surprised if something fully beyond causality in an ontological sense is able to then go on to construct the causal function observed in nature. Causality is kind of my thing. But I’d love for this theory to get out there as a challenge to us causalists.
1. While it’s true that a large number of random events will yield some rare outliers as part of the ensemble, when taken as a whole, it leads to highly predictable results. It’s the basis of statistical mechanics. Even in classical statistical mechanics, individual particles are assumed to behave randomly, but when the ensemble contains 10^23 particles, the values of pressure, temperature, etc are entirely deterministic. My statistical mechanics lecturer at university joked that when very large numbers are involved, “it is better to gamble than to count.”
Causality may be an illusion, as well as ontological fact.
2. I agree entirely with your former professor’s observations Steve, and indeed, the Law of Large Numbers as I believe it’s traditionally been used. This is to say that if you do a single experiment a large number of times, it will continue to validate the same point in the end. And I also agree with that other interpretation. Even though a psychic may get a given prediction right, the LLN shall demonstrate the truth or falsity of this person’s powers over time.
And why does the LLN remain solid? Because of causality itself. Without an ordered world where cause leads to associated effect and the converse, it might be that the exact same experiment would not generally continue to provide the same sort of result. Or it might be that a human could indeed gain psychic powers and all sorts of “spooky” stuff. Causal order is required in order for the LLN to remain valid. Otherwise we’d need to count rather than to gamble.
I suppose that this is why advocates of ontological voids in causality haven’t yet tried to use the LLN to argue their case. Thus we instead get pedigreed snake oil carnival hawkers like Sean Carroll. Apparently people love hearing this sort of thing.
(I haven’t yet found a mathematical proof that causality can’t emerge from non-causality, but perhaps I will.)
I prefer “emergent” to “illusion”, but it’s the same concept.
If it’s true that there is a fundamental uncertainty to QM function, then yes, the causality that we observe must emerge from non-causality. Or it could be that there is a causality which we don’t grasp here given that we erroneously perceive existence in terms of particles and waves.
Causality may not be the fundamental thing we take it to be.
Right. But a better way to say this might be that causality may or may not be absolute. Somehow to me your statement implies that we’d still call something “causal” even if it isn’t. Or perhaps I’m being pedantic? You wouldn’t term something “causal” if it weren’t causally mandated to occur in the exact manner that it does would you?
1. Eric,
If causality is emergent, that is, real but a composite process made up of lower level processes which are not themselves causal, then I would use it in the same manner I use “temperature”, “weather”, or “molecule”. Each of these things objectively exists, but is composed of things which are not that thing; in other words, they are composite phenomena.
The idea that causality is a composite phenomenon is very counter-intuitive, but then so are many things in science.
3. All true Mike, so apparently I was being pedantic there. If causality emerges from non-causality then it isn’t the fundamental thing that we take it for, similar to “molecule” and all the rest. But given our flawed perspectives I do still suspect that it’s fundamental in the end.
12. Great post, and a clear summary of the position. I (like most people) have problems with all the proposed solutions, and that is as it should be, since none of them are entirely persuasive. The most unconvincing commentators are those who argue passionately for one particular interpretation.
My gut feeling is that we are still missing a fundamental insight, and I hope this will emerge either through some new observation, or else a new theory. My instinct is that entanglement holds the key to unlocking the answer. Disclaimer – it may be that this is wrong, and that it is just me who is lacking the fundamental insight 🙂
1. Thanks Steve!
In recent decades, decoherence has become the preferred description of what happens when the wave appears to become a particle. Under that description, what actually happens is the wave becomes “entangled” with the environment. So your gut may be on to something!
It feels like all physicists can keep doing is testing the boundaries of this stuff until something unexpected comes up. After all, it was the necessity of dealing with bizarre observations that initially forced them to their current understanding of QM, such as it is. The answer probably lies in continuing to pile up those observations until something new emerges from the data, but that might take decades or centuries.
|
e903b92ebbc45bb9 | Hydrogen atom
Hydrogen atom
Name, symbol: protium, 1H
Neutrons: 0
Protons: 1
Nuclide data
Natural abundance: 99.985%
Isotope mass: 1.007825 u
Spin: 1/2+
Excess energy: 7288.969 ± 0.001 keV
Binding energy: 0.000 ± 0.0000 keV
A hydrogen atom is an atom of the chemical element hydrogen. The electrically neutral atom contains a single positively charged proton and a single negatively charged electron bound to the nucleus by the Coulomb force. Atomic hydrogen constitutes about 75% of the elemental (baryonic) mass of the universe.[1]
In everyday life on Earth, isolated hydrogen atoms (usually called "atomic hydrogen" or, more precisely, "monatomic hydrogen") are extremely rare. Instead, hydrogen tends to combine with other atoms in compounds, or with itself to form ordinary (diatomic) hydrogen gas, H2. "Atomic hydrogen" and "hydrogen atom" in ordinary English use have overlapping, yet distinct, meanings. For example, a water molecule contains two hydrogen atoms, but does not contain atomic hydrogen (which would refer to isolated hydrogen atoms).
Attempts to develop a theoretical understanding of the hydrogen atom have been important to the history of quantum mechanics.
The most abundant isotope, hydrogen-1, protium, or light hydrogen, contains no neutrons and is just a proton and an electron. Protium is stable and makes up 99.9885% of naturally occurring hydrogen by absolute number (not mass).
Deuterium contains one neutron and one proton. Deuterium is stable and makes up 0.0115% of naturally occurring hydrogen and is used in industrial processes such as nuclear reactors and in nuclear magnetic resonance.
Tritium contains two neutrons and one proton and is not stable, decaying with a half-life of 12.32 years. Because of the short half-life, tritium does not exist in nature except in trace amounts.
Hydrogen ion
Hydrogen is not found without its electron in ordinary chemistry (room temperatures and pressures), as ionized hydrogen is highly chemically reactive. When ionized hydrogen is written as "H+" as in the solvation of classical acids such as hydrochloric acid, the hydronium ion, H3O+, is meant, not a literal ionized single hydrogen atom. In that case, the acid transfers the proton to H2O to form H3O+.
Ionized hydrogen without its electron, that is, free protons, is common in the interstellar medium and the solar wind.
Theoretical analysis
Failed classical description
Experiments by Rutherford in 1909 showed the structure of the atom to be a dense, positive nucleus with a light, negative charge orbiting around it. This immediately raised the problem of how such a system could be stable. Classical electromagnetism had shown that any accelerating charge radiates energy, as described by the Larmor formula. If the electron is assumed to orbit in a perfect circle and radiate energy adiabatically, the electron would spiral into the nucleus with a fall time of:[2]
t_\text{fall} \approx \frac{a_0^3}{4 r_0^2 c} \approx 1.6 \times 10^{-11}\ \text{s}
where a_0 is the Bohr radius and r_0 is the classical electron radius. If this were true, all atoms would instantly collapse; however, atoms are observed to be stable. Furthermore, the spiral inward would release a smear of electromagnetic frequencies as the orbit got smaller. Instead, atoms were observed to emit only discrete frequencies of light. The resolution would lie in the development of quantum mechanics.
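The fall-time estimate itself is easy to verify numerically; the following is a minimal sketch (the constant values are standard reference numbers, not taken from this article):

```python
# Order-of-magnitude check of the classical fall time t_fall ≈ a_0^3 / (4 r_0^2 c).
a0 = 5.29177e-11    # Bohr radius, m
r0 = 2.81794e-15    # classical electron radius, m
c  = 2.99792458e8   # speed of light, m/s

print(a0**3 / (4 * r0**2 * c))   # ≈ 1.6e-11 s
```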
Bohr Model
In 1913, Niels Bohr obtained the energy levels and spectral frequencies of the hydrogen atom after making a number of simplifying assumptions in order to account for the failed Classical model. The assumptions included:
1. Electrons can only be in certain, discrete orbitals, thereby having a discrete radius and energy.
The assumption that angular momentum was quantized can be expressed as:
L = n \hbar where n = 1,2,3,...
and \hbar is the Planck constant divided by 2\pi. Using this, the balance between the centripetal force and the Coulomb force, and energy conservation, Bohr derived the energy of each orbital of the hydrogen atom to be:[3]
E_n = - \frac{ m_e e^4}{2 ( 4 \pi \epsilon_0)^2 \hbar^2 } \frac{1}{n^2} ,
where m_e is the electron mass, e is the electron charge, \epsilon_0 is the vacuum permittivity, and n is the quantum number (now known as the principal quantum number). Bohr's predictions matched experiments measuring the hydrogen spectral series to first order, giving more confidence to a theory that used quantized values.
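A short numerical check of this formula (a sketch using standard constant values, not part of the original article):

```python
# Evaluate Bohr's E_n = -m_e e^4 / (2 (4 pi eps0)^2 hbar^2 n^2) and convert to eV.
import math

m_e  = 9.1093837e-31    # electron mass, kg
e    = 1.60217663e-19   # elementary charge, C
eps0 = 8.8541878e-12    # vacuum permittivity, F/m
hbar = 1.05457182e-34   # reduced Planck constant, J s

for n in (1, 2, 3):
    E_joule = -m_e * e**4 / (2 * (4 * math.pi * eps0)**2 * hbar**2 * n**2)
    print(f"n={n}: E = {E_joule / e:.3f} eV")   # -13.606, -3.401, -1.512
```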
There were still problems with Bohr's model: it failed to predict other spectral features such as fine structure and hyperfine structure; it could only predict energy levels with any accuracy for hydrogen-like (single electron) atoms; and the predicted values were only correct to \alpha^2 \approx 10^{-5}, where \alpha is the fine-structure constant. The Bohr model assumed circular orbits; as developed by Sommerfeld, elliptical orbits add other quantum numbers besides n and change the energy values. Furthermore, it did not explain many other observed phenomena such as the Zeeman effect and the Stark effect, and it violates the uncertainty principle.
These issues were resolved with the full development of quantum mechanics and the Schrödinger equation in 1925–1926. The solutions to the Schrödinger equation for hydrogen are analytical, giving a simple expression for the hydrogen energy levels and thus the frequencies of the hydrogen spectral lines; they fully reproduce the Bohr model and go beyond it. The solution also yields two other quantum numbers and the shape of the electron's wave function ("orbital") for the various possible quantum-mechanical states, thus explaining the anisotropic character of atomic bonds.
Solution of Schrödinger equation
Alternatives to the Schrödinger theory
In the language of Heisenberg's matrix mechanics, the hydrogen atom was first solved by Wolfgang Pauli[4] using a rotational symmetry in four dimensions [O(4) symmetry] generated by the angular momentum and the Laplace–Runge–Lenz vector. By extending the symmetry group O(4) to the dynamical group O(4,2), the entire spectrum and all transitions were embedded in a single irreducible group representation.[5]
In 1979 the (non-relativistic) hydrogen atom was solved for the first time within Feynman's path integral formulation of quantum mechanics.[6][7] This work greatly extended the range of applicability of Feynman's method.
Mathematical summary of eigenstates of hydrogen atom
In 1928, Paul Dirac found an equation that was fully compatible with Special Relativity, and (as a consequence) made the wave function a 4-component "Dirac spinor" including "up" and "down" spin components, with both positive and "negative" energy (or matter and antimatter). The solution to this equation gave the following results, more accurate than the Schrödinger solution.
Energy levels
\begin{array}{rl} E_{j\,n} & = -m_\text{e}c^2\left[1-\left(1+\left[\dfrac{\alpha}{n-j-\frac{1}{2}+\sqrt{\left(j+\frac{1}{2}\right)^2-\alpha^2}}\right]^2\right)^{-1/2}\right] \\ & \approx -\dfrac{m_\text{e}c^2\alpha^2}{2n^2} \left[1 + \dfrac{\alpha^2}{n^2}\left(\dfrac{n}{j+\frac{1}{2}} - \dfrac{3}{4} \right) \right] , \end{array}
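To see that this expression reduces to the familiar Bohr values for small \alpha, it can be evaluated directly; the sketch below (not from the article) uses standard constant values:

```python
# Evaluate the Dirac energy levels E_{j,n} and confirm they are close to -13.6 eV / n^2.
import math

alpha = 7.2973525693e-3   # fine-structure constant
mec2  = 510998.95         # electron rest energy m_e c^2, eV

def E_dirac(n, j):
    inner = alpha / (n - j - 0.5 + math.sqrt((j + 0.5)**2 - alpha**2))
    return -mec2 * (1.0 - (1.0 + inner**2)**-0.5)

print(E_dirac(1, 0.5))   # ≈ -13.606 eV (1s_1/2)
print(E_dirac(2, 0.5))   # ≈ -3.4015 eV (2s_1/2 and 2p_1/2)
print(E_dirac(2, 1.5))   # ≈ -3.4014 eV (2p_3/2, split from j = 1/2 by fine structure)
```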
The value
\frac{m_{\text{e}} c^2\alpha^2}{2} = \frac{0.51\,\text{MeV}}{2 \cdot 137^2} = 13.6 \,\text{eV}
is called the Rydberg constant and was first found from the Bohr model as given by
-13.6 \,\text{eV} = -\frac{m_{\text{e}} e^4}{8 h^2 \varepsilon_0^2},
where me is the electron mass, e is the elementary charge, h is the Planck constant, and ε0 is the vacuum permittivity.
This constant is often used in atomic physics in the form of the Rydberg unit of energy:
1 \,\text{Ry} \equiv h c R_\infty = 13.605\;692\;53(30) \,\text{eV}.[9]
The exact value of the Rydberg constant above assumes that the nucleus is infinitely massive with respect to the electron. For hydrogen-1, hydrogen-2 (deuterium), and hydrogen-3 (tritium) the constant must be slightly modified to use the reduced mass of the system, rather than simply the mass of the electron. However, since the nucleus is much heavier than the electron, the values are nearly the same. The Rydberg constant R_M for a hydrogen atom (one electron), whose nucleus has mass M, is given by: R_M = \frac{R_\infty}{1+m_{\text{e}}/M}, where M is the mass of the atomic nucleus. For hydrogen-1, the quantity m_{\text{e}}/M is about 1/1836 (i.e. the electron-to-proton mass ratio). For deuterium and tritium, the ratios are about 1/3670 and 1/5497 respectively. These figures, when added to 1 in the denominator, represent very small corrections in the value of R, and thus only small corrections to all energy levels in corresponding hydrogen isotopes.
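The size of these corrections is easy to see numerically; a minimal sketch (using the approximate mass ratios quoted above, not code from the article):

```python
# Reduced-mass correction R_M = R_inf / (1 + m_e / M), expressed as an energy in eV.
R_inf_eV = 13.605693   # Rydberg unit of energy, eV

for name, me_over_M in [("hydrogen-1", 1 / 1836.0),
                        ("deuterium",  1 / 3670.0),
                        ("tritium",    1 / 5497.0)]:
    print(f"{name}: {R_inf_eV / (1.0 + me_over_M):.4f} eV")
# hydrogen-1: 13.5983 eV, deuterium: 13.6020 eV, tritium: 13.6032 eV
```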
The normalized position wavefunctions, given in spherical coordinates are:
\psi_{n\ell m}(r,\vartheta,\varphi) = \sqrt {{\left ( \frac{2}{n a_0} \right )}^3 \frac{(n-\ell-1)!}{2n[(n+\ell)!]^3}} e^{- \rho / 2} \rho^{\ell} L_{n-\ell-1}^{2\ell+1}(\rho) Y_{\ell}^{m}(\vartheta, \varphi )
Figure: 3D illustration of the eigenstate \psi_{4,3,1}. Electrons in this state are 45% likely to be found within the solid body shown.
where
\rho = {2r \over {na_0}} ,
a_0 is the Bohr radius,
L_{n-\ell-1}^{2\ell+1}(\rho) is a generalized Laguerre polynomial of degree n − \ell − 1, and
Y_{\ell}^{m}(\vartheta, \varphi ) \, is a spherical harmonic function of degree \ell and order m. Note that the generalized Laguerre polynomials are defined differently by different authors. The usage here is consistent with the definitions used by Messiah,[10] and Mathematica.[11] In other places, the Laguerre polynomial includes a factor of (n+\ell)!,[12] or the generalized Laguerre polynomial appearing in the hydrogen wave function is L_{n+\ell}^{2\ell+1}(\rho) instead.[13]
The quantum numbers can take the following values:
n = 1, 2, 3, …
\ell = 0, 1, 2, …, n − 1
m = −\ell, …, \ell.
Additionally, these wavefunctions are normalized and orthogonal:
\int_0^{\infty} r^2 dr\int_0^{\pi} \sin \vartheta d\vartheta \int_0^{2 \pi} d\varphi\; \psi^*_{n\ell m}(r,\vartheta,\varphi)\psi_{n'\ell'm'}(r,\vartheta,\varphi)=\langle n,\ell, m | n', \ell', m' \rangle = \delta_{nn'} \delta_{\ell\ell'} \delta_{mm'},
where | n, \ell, m \rangle is the state represented by the wavefunction \psi_{n\ell m} in Dirac notation, and \delta is the Kronecker delta function.[14]
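These relations can be checked numerically. The sketch below (not from the article) verifies the radial normalization with scipy; note that scipy.special.genlaguerre follows the Messiah/Mathematica convention mentioned above, so the prefactor used here contains a single (n+\ell)! rather than the [(n+\ell)!]^3 appearing in the formula.

```python
# Numerical check that the hydrogen radial wavefunctions are normalized:
# integral of |R_{nl}(r)|^2 r^2 dr = 1 (atomic units, a0 = 1).
import numpy as np
from math import factorial
from scipy.special import genlaguerre

a0 = 1.0

def R_nl(n, l, r):
    rho = 2.0 * r / (n * a0)
    norm = np.sqrt((2.0 / (n * a0))**3 * factorial(n - l - 1)
                   / (2.0 * n * factorial(n + l)))
    return norm * np.exp(-rho / 2.0) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

r = np.linspace(1e-6, 60.0, 20000)
for n, l in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    print(n, l, np.trapz(R_nl(n, l, r)**2 * r**2, r))   # each ≈ 1.0
```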
The wavefunctions in momentum space are related to the wavefunctions in position space through a Fourier transform,
\phi(p, \vartheta_p, \varphi_p) = (2\pi\hbar)^{-3/2} \int e^{-i \vec{p} \cdot \vec{r} / \hbar} \psi(r,\vartheta,\varphi) dV,
which, for the bound states, results in [15]
\phi(p, \vartheta_p, \varphi_p) = \sqrt{\frac{2}{\pi} \frac{(n-l-1)!}{(n+l)!}} n^2 2^{2l+2} l! \frac{n^l p^l}{(n^2 p^2 + 1)^{l+2}} C_{n-l-1}^{l+1}\left(\frac{n^2 p^2 - 1}{n^2 p^2 + 1}\right) Y_l^m({\vartheta_p, \varphi_p}),
where C_N^\alpha(x) denotes a Gegenbauer polynomial and p is in units of \hbar/a_0 .
Angular momentum
The eigenvalues of the angular momentum operators L^2 and L_z are:
L^2\, | n, \ell, m\rangle = {\hbar}^2 \ell(\ell+1)\, | n, \ell, m \rangle
L_z\, | n, \ell, m \rangle = \hbar m \,| n, \ell, m \rangle.
Visualizing the hydrogen electron orbitals
An image with more orbitals is also available (up to higher numbers n and \ell).
The quantum numbers determine the layout of these nodes.[16] There are:
• n-1 total nodes,
• l of which are angular nodes:
• m angular nodes go around the \phi axis (in the xy plane). (The figure above does not show these nodes since it plots cross-sections through the xz-plane.)
• l-m (the remaining angular nodes) occur on the \theta (vertical) axis.
• n - l - 1 (the remaining non-angular nodes) are radial nodes.
Features going beyond the Schrödinger solution
Due to the high precision of the theory, very high precision is also needed for the experiments, which utilize a frequency comb.
See also
1. Palmer, D. (13 September 1997). "Hydrogen in the Universe". NASA. Retrieved 5 February 2008.
2. Olsen, James; McDonald, Kirk (March 7, 2005). "Classical Lifetime of a Bohr Atom" (PDF). Joseph Henry Laboratories, Princeton University. Retrieved 12/10/2015.
3. "Derivation of Bohr's Equations for the One-electron Atom" (PDF). University of Massachusetts Boston. Retrieved 12/10/2015.
5. Kleinert H. (1968). "Group Dynamics of the Hydrogen Atom" (PDF). Lectures in Theoretical Physics, edited by W.E. Brittin and A.O. Barut, Gordon and Breach, N.Y. 1968: 427–482.
8. Sommerfeld, Arnold (1919). Atombau und Spektrallinien. Braunschweig: Friedrich Vieweg und Sohn. ISBN 3-87144-484-7.
9. P.J. Mohr, B.N. Taylor, and D.B. Newell (2011), "The 2010 CODATA Recommended Values of the Fundamental Physical Constants" (Web Version 6.0). This database was developed by J. Baker, M. Douma, and S. Kotochigova. Available: http://physics.nist.gov/constants. National Institute of Standards and Technology, Gaithersburg, MD 20899. Link to R, Link to hcR
10. Messiah, Albert (1999). Quantum Mechanics. New York: Dover. p. 1136. ISBN 0-486-40924-4.
11. LaguerreL. Wolfram Mathematica page
12. Griffiths, David (1995). Introduction to Quantum Mechanics. New Jersey: Pearson Education, Inc. p. 152. ISBN 0-13-111892-7.
13. Condon and Shortley (1963). The Theory of Atomic Spectra. London: Cambridge. p. 441.
14. Introduction to Quantum Mechanics, Griffiths 4.89
15. Physics of atoms and molecules, B. H. Bransden and C. H. Joachain. Appendix 5
16. Summary of atomic quantum numbers. Lecture notes. 28 July 2006
• Griffiths, David J. (1995). Introduction to Quantum Mechanics. Prentice Hall. ISBN 0-13-111892-7. Section 4.2 deals with the hydrogen atom specifically, but all of Chapter 4 is relevant.
• Bransden, B.H.; C.J. Joachain (1983). Physics of Atoms and Molecules. Longman. ISBN 0-582-44401-2.
• Kleinert, H. (2009). Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 4th edition, Worldscibooks.com, World Scientific, Singapore (also available online physik.fu-berlin.de)
|
751e9488c66a6dfa | My professor told us that in quantum mechanics a transformation is a symmetry transformation if $$ UH(\psi) = HU(\psi) $$
Can you give me an easy explanation for this definition?
In a context like this, a symmetry is a transformation that converts solutions of the equation(s) of motion to other solutions of the equation(s) of motion.
In this case, the equation of motion is the Schrödinger equation $$ i\hbar\frac{d}{dt}\psi=H\psi. \tag{1} $$ We can multiply both sides of equation (1) by $U$ to get $$ Ui\hbar\frac{d}{dt}\psi=UH\psi. \tag{2} $$ If $UH=HU$ and $U$ is independent of time, then equation (2) may be rewritten as $$ i\hbar\frac{d}{dt}U\psi=HU\psi. \tag{3} $$ which says that if $\psi$ solves equation (1), then so does $U\psi$, so $U$ is a symmetry.
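A small numerical illustration of this argument (a sketch, not part of the answer; it assumes a two-level Hamiltonian with a swap symmetry):

```python
# If [U, H] = 0, then U maps solutions of the Schrödinger equation to solutions:
# evolving U @ psi0 in time gives the same state as applying U to the evolved psi0.
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5],
              [0.5, 1.0]])           # Hamiltonian, symmetric under swapping the two states
U = np.array([[0.0, 1.0],
              [1.0, 0.0]])           # swap operator; U H = H U

psi0 = np.array([1.0, 0.3]) / np.linalg.norm([1.0, 0.3])
evolve = expm(-1j * H * 2.7)         # time-evolution operator for t = 2.7 (hbar = 1)

print(np.allclose(U @ H, H @ U))                              # True: U is a symmetry
print(np.allclose(evolve @ (U @ psi0), U @ (evolve @ psi0)))  # True: U psi is also a solution
```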
For a more general definition of symmetry in QM, see
Symmetry transformations on a quantum system; Definitions
• This is a good answer, but it brings up another question: why do we call this condition a symmetry? – SimoBartz Apr 8 at 13:39
• @SimoBartz That's a good question. In a more completely specified model, say with lots of local observables as in quantum field theory, we would require that a symmetry preserve things like the relationships between those observables in space and time. But in the present question, only the Hamiltonian is specified, so there is nothing else to preserve. – Chiral Anomaly Apr 8 at 16:58
• @SimoBartz, what does the word "symmetry" mean to you? Have you encountered it in other contexts, such as classical mechanics or geometry? – Vectornaut Apr 8 at 21:48
• @Vectornaut What if they answered yes to any of those? What would you say? – opa Apr 9 at 17:01
• Actually I've never seen this concept before; my professor told us that when you have a symmetry transformation, the system is invariant with respect to that transformation. I imagine it means that nothing changes except the point of view. But if I transform a solution into another one, maybe the new solution is completely different. – SimoBartz Apr 10 at 12:54
What you have written there is nothing but the commutator. Consider for example the time evolution operator \begin{align*} U\left(t-t_{0}\right)=e^{-i\left(t-t_{0}\right) H} \end{align*} If $\psi\left(\xi_{1}, \dots, \xi_{N} ; t_{0}\right)$ is the wave function at time $t_0$ and $U(t-t_0)$ is the time evolution operator that for all permutations $P$ satisfies $\left[U\left(t-t_{0}\right), P\right]=0$, then also $$\left(P U\left(t-t_{0}\right) \psi\right)\left(\xi_{1}, \ldots, \xi_{N} ; t_{0}\right)=\left(U\left(t-t_{0}\right) P \psi\right)\left(\xi_{1}, \ldots, \xi_{N} ; t_{0}\right)$$ This means that the permuted, time-evolved wave function is the same as the time-evolved, permuted wave function.
Another example would be if you consider identical particles. An arbitrary observable $A$ should be the same under the permutation operator $P$ if one has identical particles. This is to say: \begin{align*} [A, P]=0 \end{align*} for all $P\in S_N$ (in permutation group of $N$ particles).
|
179412214f51c861 | Chemistry LibreTexts
10.5: The \(\pi\)-Electron Approximation of Conjugation
Molecular orbital theory has been very successfully applied to large conjugated systems, especially those containing chains of carbon atoms with alternating single and double bonds. An approximation introduced by Hückel in 1931 considers only the delocalized p electrons moving in a framework of \(\pi\)-bonds. This is, in fact, a more sophisticated version of a free-electron model.
The simplest hydrocarbon to consider that exhibits \(\pi\) bonding is ethylene (ethene), which is made up of four hydrogen atoms and two carbon atoms. Experimentally, we know that the H–C–H and H–C–C angles in ethylene are approximately 120°. This angle suggests that the carbon atoms are sp2 hybridized, which means that a singly occupied sp2 orbital on one carbon overlaps with a singly occupied s orbital on each H and a singly occupied sp2 lobe on the other C. Thus each carbon forms a set of three \(\sigma\) bonds: two C–H (sp2 + s) and one C–C (sp2 + sp2) (part (a) of Figure \(\PageIndex{1}\)).
Figure \(\PageIndex{1}\): (a) The σ-bonded framework is formed by the overlap of two sets of singly occupied carbon sp2 hybrid orbitals and four singly occupied hydrogen 1s orbitals to form electron-pair bonds. This uses 10 of the 12 valence electrons to form a total of five σ bonds (four C–H bonds and one C–C bond). (b) One singly occupied unhybridized 2pz orbital remains on each carbon atom to form a carbon–carbon π bond. (Note: by convention, in planar molecules the axis perpendicular to the molecular plane is the z-axis.)
The Hückel approximation is used to determine the energies and shapes of the \(\pi\) molecular orbitals in conjugated systems. Within the Hückel approximation, the covalent bonding in these hydrocarbons can be separated into two independent "frameworks": the \(\sigma\)-bonding framework and the \(\pi\)-bonding framework. The wavefunctions used to describe the bonding orbitals in each framework result from different combinations of atomic orbitals. The method limits itself to addressing conjugated hydrocarbons and specifically only \(\pi\) electron molecular orbitals are included because these determine the general properties of these molecules; the sigma electrons are ignored. This is referred to as sigma-pi separability and is justified by the orthogonality of \(\sigma\) and \(\pi\) orbitals in planar molecules. For this reason, the Hückel method is limited to planar systems. The Hückel approximation assumes that the electrons in the \(\pi\) bonds “feel” an electrostatic potential due to the entire \(\sigma\)-bonding framework in the molecule (i.e. it focuses only on the formation of \(\pi\) bonds, given that the \(\sigma\) bonding framework has already been formed).
Conjugated Systems
A conjugated system has a region of overlapping p-orbitals, bridging the interjacent single bonds, that allow a delocalization of \(\pi\) electrons across all the adjacent aligned p-orbitals. These \(\pi\) electrons do not belong to a single bond or atom, but rather to a group of atoms.
Before considering the Hückel treatment for ethylene, it is beneficial to review the general bonding picture of the molecule. Bonding in ethylene involves the \(sp^2\) hybridization of the \(2s\), \(2p_x\), and \(2p_y\) atomic orbitals on each carbon atom; leaving the \(2p_z\) orbitals untouched (Figure \(\PageIndex{2}\)).
Figure \(\PageIndex{2}\): Hybridizing of the carbon atomic orbitals to give \(sp^2\) hybrid orbitals for bonding to hydrogen atoms in ethylene. from ChemTube (CC-SA-BY-NC; Nick Greeves).
The use of hybrid orbitals in the molecular orbital approach described here is merely a convenience and does not invoke valence bond theory (directly). An identical description can be extracted using exclusively atomic orbitals on carbon, but the interpretation of the resulting wavefunctions is less intuitive. For example, the ith molecular orbital can be described via hybrid orbitals
\[ | \psi_1\rangle = c_1 | sp^2_1 \rangle + c_2 | 1s_a \rangle \nonumber\]
or via atomic orbitals.
\[ | \psi_1\rangle = a_1 | 2s \rangle + a_2 | 2p_x \rangle + a_3 | 2p_y \rangle + a_4| 1s_a \rangle \nonumber\]
where \(\{a_i\}\) and \(\{c_i\}\) are coefficients of the expansion. Either description will work, and both are identical approaches since
\[| sp^2_1 \rangle = b_1 | 2s \rangle + b_2 | 2p_x \rangle + b_3 | 2p_y \rangle \nonumber\]
where \(\{b_i\}\) are coefficients describing the hybridized orbital.
The bonding occurs via the mixing of the electrons in the \(sp^2\) hybrid orbitals on carbon and the electrons in the \(1s\) atomic orbitals of the four hydrogen atoms (Figure \(\PageIndex{1}\); left) resulting in the \(\sigma\)-bonding framework. The \(\pi\)-bonding framework results from the unhybridized \(2p_z\) orbitals (Figure \(\PageIndex{2}\); right). The independence of these two frameworks is demonstrated in the resulting molecular orbital diagram in Figure \(\PageIndex{3}\); Hückel theory is concerned only with describing the molecular orbitals and energies of the \(\pi\) bonding framework.
Figure \(\PageIndex{3}\): Molecular orbitals demonstrating the sigma-pi separability of the \(\pi\)-bonding framework (blue) and the \(\sigma\)-bonding frameworks (red) of ethylene.
Since Hückel theory is a special consideration of molecular orbital theory, the molecular orbitals \(| \psi_i \rangle\) can be described as a linear combination of the \(2p_z\) atomic orbitals \(\phi\) at carbon with their corresponding \(\{c_i\}\) coefficients:
\[ | \psi_i \rangle =c_1 | \phi_{1} \rangle +c_2 | \phi_2 \rangle \label{LCAO} \]
This equation is substituted in the Schrödinger equation:
\[ \hat{H} | \psi_i \rangle =E_i | \psi_i \rangle \]
with \(\hat{H}\) the Hamiltonian and \(E_i\) the energy corresponding to the molecular orbital to give:
\[ \hat{H} c_{1} | \phi _{1} \rangle +\hat{H} c_{2} | \phi _{2} \rangle =E c_{1} | \phi _{1} \rangle +E c_{2} | \phi _{2} \rangle \label{SEq}\]
If Equation \(\ref{SEq}\) is multiplied by \(\langle \phi _{1}| \) (and integrated), then
\[c_1(H_{11} - ES_{11}) + c_2(H_{12} - ES_{12}) = 0 \label{Eq1}\]
where \( H_{ij}\) are the Hamiltonian matrix elements (see note below)
\[ H_{ij} = \langle \phi_i | \hat{H} | \phi_j \rangle = \int \phi _{i}H\phi _{j}\mathrm {d} v\]
and \( S_{ij} \) are the overlap integrals.
\[ S_{ij}= \langle \phi_i | \phi_j \rangle = \int \phi _{i}\phi _{j}\mathrm {d} v\]
If Equation \(\ref{SEq}\) is multiplied by \( \langle \phi _{2} | \) (and integrated), then
\[c_1(H_{21} - ES_{21}) + c_2(H_{22} - ES_{22}) = 0 \label{Eq2}\]
Both Equations \(\ref{Eq1}\) and \(\ref{Eq2}\) can better represented in matrix notation,
\[ {\begin{bmatrix}c_{1}(H_{11}-ES_{11})+c_{2}(H_{12}-ES_{12})\\c_{1}(H_{21}-ES_{21})+c_{2}(H_{22}-ES_{22})\\\end{bmatrix}}=0\]
or more simply as a product of matrices.
\[\begin{bmatrix} H_{11} - ES_{11} & H_{12} - ES_{12} \\ H_{21} - ES_{21} & H_{22} - ES_{22} \\ \end{bmatrix} \times \begin{bmatrix} c_1 \\ c_2 \\ \end{bmatrix}= 0 \label{master}\]
All diagonal Hamiltonian integrals \( H_{ii}\) are called Coulomb integrals and those of type \(H_{ij}\) (with \(i \neq j\)) are called resonance integrals. Both integrals are negative, and the resonance integrals determine the strength of the bonding interactions. The equations described by Equation \(\ref{master}\) are called the secular equations and will always have the trivial solution of
\[ c_1 = c_2 = 0 \]
Within linear algebra, the secular equations in Equation \(\ref{master}\) will also have a non-trivial solution, if and only if, the secular determinant is zero
\[ \left| \begin{array} {cc} H_{11} - ES_{11} & H_{12} - ES_{12} \\ H_{21} - ES_{21} & H_{22} - ES_{22} \\ \end{array}\right| = 0 \label{SecDet}\]
or in shorthand notation
\[ \text{det}(H -ES) =0\]
Everything in Equation \(\ref{SecDet}\) is a known number except \(E\). Since the secular determinant for ethylene is a \(2 \times 2\) matrix, finding \(E\), requires solving a quadratic equation (after expanding the determinant)
\[ ( H_{11} - ES_{11} ) ( H_{22} - ES_{22} ) - ( H_{21} - ES_{21} )( H_{12} - ES_{12} ) = 0\]
There will be two values of \(E\) which satisfy this equation and they are the molecular orbital energies. For ethylene, one will be the bonding energy and the other the antibonding energy for the \(\pi\)-orbitals formed by the combination of the two carbon \(2p_z\) orbitals (Equation \(\ref{LCAO}\)). However, if more than two \(| \phi \rangle\) atomic orbitals were used, e.g., in a bigger molecule, then more energies would be estimated by solving the secular determinant.
Solving the secular determinant is simplified within Hückel method via the following four assumptions:
1. All overlap integrals \(S_{ij}\) are set equal to zero. This is quite reasonable since the \(\pi\)-orbitals are directed perpendicular to the direction of their bonds (Figure \(\PageIndex{1}\)). This assumption is often called neglect of differential overlap (NDO).
2. All resonance integrals \(H_{ij}\) between non-neighboring atoms are set equal to zero.
3. All resonance integrals \(H_{ij}\) between neighboring atoms are equal and set to \(\beta\).
4. All coulomb integrals \(H_{ii}\) are set equal to \(\alpha\).
These assumptions are mathematically expressed as
\[ H_{11}=H_{22}=\alpha\]
\[ H_{12}=H_{21}=\beta\]
Assumption 1 means that each atomic orbital is normalized and that the overlap integral between the two different atomic orbitals is 0:
\[ S_{11}=S_{22}=1\]
\[ S_{12}=S_{21}=0\]
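With these assumptions in place, the Hückel Hamiltonian for any conjugated chain can be written down and diagonalized numerically. The sketch below (not from the text) does this for a hypothetical four-carbon chain (butadiene); the ethylene problem treated next is simply the 2×2 version of the same matrix.

```python
# Build the Hückel matrix for a linear polyene of n conjugated carbons:
# H_ii = alpha, H_ij = beta for bonded neighbours, 0 otherwise; S is the identity.
import numpy as np

alpha, beta = 0.0, -1.0        # energies in arbitrary units (beta < 0)
n = 4                          # butadiene

H = np.zeros((n, n))
np.fill_diagonal(H, alpha)     # assumption 4: Coulomb integrals
for i in range(n - 1):         # assumptions 2 and 3: resonance integrals
    H[i, i + 1] = H[i + 1, i] = beta

print(np.round(np.linalg.eigvalsh(H), 3))
# [-1.618 -0.618  0.618  1.618]  ->  E = alpha ± 1.618 beta and alpha ± 0.618 beta
```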
Matrix Representation of the Hamiltonian
The Coulomb integrals
\[H_{ii}= \langle \phi _i|H| \phi _i \rangle \nonumber\]
and resonance integrals.
\[H_{ij}= \langle \phi _i|H| \phi _j \rangle \,\,\, (i \neq i) \nonumber\]
are often described within the matrix representation of the Hamiltonian (specifically within the \( | \phi \rangle\) basis):
\[ \hat{H} = \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end {bmatrix} \nonumber\]
or within the Hückel assumptions
\[ \hat{H} = \begin{bmatrix} \alpha & \beta \\ \beta & \alpha \end {bmatrix} \nonumber\]
The Hückel assumptions reduce Equation \(\ref{master}\) to two homogeneous equations:
\[\begin{bmatrix} \alpha - E & \beta \\ \beta & \alpha - E \\ \end{bmatrix} \times \begin{bmatrix} c_1 \\ c_2 \\ \end{bmatrix}= 0 \label{Eq12}\]
if Equation \(\ref{Eq12}\) is divided by \(\beta\):
\[\begin{bmatrix} \dfrac{\alpha - E}{\beta} & 1 \\ 1 & \dfrac{\alpha - E}{\beta} \\ \end{bmatrix} \times \begin{bmatrix} c_1 \\ c_2 \\ \end{bmatrix}= 0\]
and then a new variable \(x\) is defined
\[ x = \dfrac {\alpha -E}{\beta} \label{new}\]
then Equation \(\ref{Eq12}\) simplifies to
\[\begin{bmatrix} x & 1 \\ 1 & x \\ \end{bmatrix} \times \begin{bmatrix} c_1 \\ c_2 \\ \end{bmatrix}= 0 \label{seceq}\]
The trivial solution gives both wavefunction coefficients equal to zero and the other (non-trivial) solution is determined by solving the secular determinant
\[ \begin{vmatrix}x&1\\1&x\\\end{vmatrix}=0\]
which when expanded is
\[ x^{2}-1=0\]
\[ x=\pm 1\]
Knowing that \(E=\alpha -x\beta \) from Equation \(\ref{new}\), the energy levels can be found to be
\[ E=\alpha - (\pm 1)\beta \]
\[ E=\alpha \mp \beta \]
Since \(\beta\) is negative, the two energies are ordered (Figure \(\PageIndex{4}\))
• For \(\pi_1\): \(E_1 =\alpha + \beta\)
• For \(\pi_2\): \(E_2 =\alpha - \beta\)
Figure \(\PageIndex{4}\): \(\pi\) energies of ethylene with occupation.
To extract the coefficients attributed to these energies, the corresponding \(x\) values can be substituted back into the Secular Equations (Equation \(\ref{seceq}\)). For the lower energy state (\(x=-1\))
\[\begin{bmatrix} -1 & 1 \\ 1 & -1 \\ \end{bmatrix} \times \begin{bmatrix} c_1 \\ c_2 \\ \end{bmatrix}= 0 \]
This gives \(c_1=c_2\) and the molecular orbitals attributed to this energy is then (based off of Equation \(\ref{LCAO}\)):
\[ | \psi_1 \rangle = N_1 ( | \phi_1 \rangle + | \phi_2 \rangle ) \label{HOMO}\]
where \(N_1\) is the normalization constant for this molecular orbital; this is the bonding molecular orbital.
For the higher energy molecular orbital (\(x=+1\)):
\[\begin{bmatrix} 1 & 1 \\ 1 & 1 \\ \end{bmatrix} \times \begin{bmatrix} c_1 \\ c_2 \\ \end{bmatrix}= 0 \]
This gives \(c_1=-c_2\) and the molecular orbitals attributed to this energy is then (based off of Equation \(\ref{LCAO}\)):
\[ | \psi_2 \rangle = N_2 ( | \phi_1 \rangle - | \phi_2 \rangle ) \label{LUMO}\]
where \(N_2\) is the normalization constant for this molecular orbital; this is the anti-bonding molecular orbital.
The normalization constants for both molecular orbitals can obtained via the standard normalization approach (i.e., \(\langle \psi_i | \psi_i \rangle =1\)) to obtain
\[N_1 = N_2 = \dfrac{1}{\sqrt{2}}\]
These molecular orbitals form the \(\pi\)-bonding framework and since each carbon contributes one electron to this framework, only the lowest molecular orbital (\( | \psi_1 \rangle\)) is occupied (Figure \(\PageIndex{5}\)) in the ground state. The corresponding electron configuration is then \( \pi_1^2\).
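The same result can be reproduced by diagonalizing the 2×2 Hückel matrix numerically; a minimal sketch (not from the text, with arbitrary numerical values for \(\alpha\) and \(\beta\)):

```python
# Diagonalize the ethylene Hückel matrix; the eigenvalues are alpha ± beta and the
# eigenvectors are (1, 1)/sqrt(2) and (1, -1)/sqrt(2), as derived above.
import numpy as np

alpha, beta = 0.0, -1.0
H = np.array([[alpha, beta],
              [beta, alpha]])

energies, coeffs = np.linalg.eigh(H)   # eigenvalues returned in ascending order
print(energies)    # [-1.  1.]  i.e. alpha + beta (bonding) and alpha - beta (antibonding)
print(coeffs.T)    # rows are ±(0.707, 0.707) and ±(0.707, -0.707)
```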
Figure \(\PageIndex{5}\): Schematic representation of the \(\pi\) molecular orbital framework for ethylene. Notice that the antibonding molecular orbital has one more node than the bonding molecular orbital, as expected since it is higher in energy.
HOMO and LUMO are acronyms for highest occupied molecular orbital and lowest unoccupied molecular orbital, respectively and are often referred to as frontier orbitals. The energy difference between the HOMO and LUMO is termed the HOMO–LUMO gap.
The 3-D calculated \(\pi\) molecular orbitals are shown in Figure \(\PageIndex{6}\).
Figure \(\PageIndex{6}\): Calculated \(\pi\) molecular orbitals for ethylene: (left) the bonding orbital \( | \psi_1 \rangle\) and (right) the antibonding orbital \( | \psi_2 \rangle\).
Limitations of Hückel Theory
Hückel theory was developed in the 1930's when computers were unavailable and simple mathematical approaches were very important for understanding experiment. Although the assumptions in Hückel theory are drastic, they enabled the early calculations of molecular orbitals to be performed with mechanical calculators or by hand. Hückel Theory can be extended to address other types of atoms in conjugated molecules (e.g., nitrogen and oxygen). Moreover, it can be extended to also treat \(\sigma\) orbitals and this "Extended Hückel Theory" is still used today. Despite the utility of Hückel Theory, it is highly qualitative and we should remember the limitations of Hückel Theory:
• Hückel Theory is very approximate
• Hückel Theory cannot calculate energies accurately (electron-electron repulsion is not calculated)
• Hückel Theory typically overestimates predicted dipole moments
Hückel Theory is best used to provide simplified models for understanding chemistry; for a detailed understanding, the modern ab initio molecular methods discussed in Chapter 11 are needed.
f61feeb8a65a9ca0 | San José State University
Thayer Watkins
Silicon Valley
& Tornado Alley
A Matrix Version of the Hartree-Fock
Method Applied to a Helium Atom
The Hamiltonian function for the two electrons of a helium atom is easy to specify. Let p1 and p2 be the momentum of the electrons. The kinetic energy of the atom is then
K = p1²/2m + p2²/2m
where m is the electron mass. Let r1 and r2 be the position vectors of the two electrons with respect to an origin at the center of the nucleus. The magnitudes of r1 and r2 are denoted as r1 and r2. The potential energy of the atom is then
V = −2q/r1 − 2q/r2 + q/|r1 − r2|
where q is the product of the constant for the electrostatic force and the square of the unit charge.
The Schrödinger equation for the system is also easily derived, but obtaining a solution is nearly impossible.
The Hartree Self-Consistent Field Approximation
The Hartree procedure consists of considering a single electron with the effect on it of the other electron being replaced by its effect on the potential energy function.
H = p²/2m − 2q/r + q/R
where R is the average distance between the electron and the average position of the other electron. The average position of the other electron may be the center of the atom, in which case R would be equal to r. However, if the other electron is considered as a spherical distribution of charge, the part which is closer to the origin than r would have an effect but the part which is farther away than r would have no effect. On average this amounts to half a unit of charge located at the center, so the repulsion term is effectively ½q/r.
This Hamiltonian function is then converted to its Hamiltonian operator by replacing p with −ih∂/∂r where h is Planck's constant divided by 2π and i is the imaginary unit √−1. The exponent of 2 for p results in the second derivative with respect to r. The time independent Schrödinger equation for the system is then
[−(h²/2m)d²/dr² − 2q/r + ½q/r]ψ(r) = εψ(r)
which reduces to
[−(h²/2m)d²/dr² − (3/2)q/r]ψ(r) = εψ(r)
where ψ is the wave function of the electron and ε is a real-valued constant, the energy of the system. This is converted into matrix form by letting Ψ represent ψ(r) as an infinite dimensional vector. Likewise V is an infinite dimensional diagonal matrix with 1/r on the principal diagonal. This means that the points for the arguments of the function must straddle the origin to avoid having a term involving division by zero. This can be done by taking the points nearest the origin to be +δ/2 and −δ/2. Thus the points corresponding to the vector components are …, 2½δ, 1½δ, ½δ, −½δ, −1½δ, −2½δ, ….
The second derivative operation can be represented as
(d²ψ/dr²) ≅ [ψ(r+δ)−2ψ(r)+ψ(r-δ)]/δ²
The matrix version of the system is then
[−(h²/(2mδ²))J − (3q/2)V]Ψ = εΨ
where J is a matrix of zeroes except for (…, 1, −2, 1, …) centered on the principal diagonal.
As it happens the model with an electron in the same shell shielding a half unit of charge is equivalent to the Bohr model for hydrogen with the central charge being 3/2 rather than 1. The ionization energies can be computed and compared with the experimental values. For the details see helium model.
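A minimal numerical sketch of this eigenvalue problem (in atomic units, set up on the radial half-line r > 0 for u(r) = rψ(r) rather than on the straddled-origin grid described above; none of this code is from the original page):

```python
# Finite-difference eigenvalue problem [-(1/2) d^2/dr^2 - (3/2)/r] u = eps u.
# With effective charge 3/2 the lowest eigenvalue should approach
# -(3/2)^2 / 2 = -1.125 hartree (about -30.6 eV) as the grid is refined.
import numpy as np

delta = 0.02                          # grid spacing (coarse, for speed)
r = np.arange(delta, 25.0, delta)     # radial grid with u = 0 at both ends
N = len(r)

J = (np.diag(-2.0 * np.ones(N)) +     # (..., 1, -2, 1, ...) second-derivative stencil
     np.diag(np.ones(N - 1), 1) +
     np.diag(np.ones(N - 1), -1))
V = np.diag(1.0 / r)                  # diagonal matrix with 1/r on the diagonal

H = -J / (2.0 * delta**2) - 1.5 * V
eps = np.linalg.eigvalsh(H)[0]
print(eps, eps * 27.211)              # ≈ -1.1 hartree ≈ -30 eV on this coarse grid
```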
Comparison of Measured Helium Spectrum Lines with Values Computed from a Modified Version of the Bohr Model
Measured        Computed        Error
438.793 nm      433.937 nm      -1.1%
471.314 nm      486.009 nm      +3.1%
492.193 nm      486.009 nm      -1.3%
501.5675 nm     486.009 nm      -3.2%
667.815 nm      656.112 nm      -1.8%
(To be continued.)
|
2c72540a42d1e13d | Einstein, Schrödinger, and the story you never heard
How “faith” in the Universe destroyed two brilliant men of genius.
“I don’t like it, and I’m sorry I ever had anything to do with it.” -Schrödinger
The idea that if only we’re smart and clever enough, we can predict how the Universe will unfold, goes back long, long before humanity even had an inkling as to what the enterprise of science would come to be. Prophets, augurs, soothsayers and diviners were seen to hold a truly mystic power, as the very prospect of knowing the future was as elusive but as tantalizing as the magic thought to be inside a philosopher’s stone.
But after generations of failure, something incredible happened. Building on the old astronomical works of Eratosthenes, Aristarchus, Hipparchus, Apollonius of Perga and Ptolemy, scientists of the 16th and 17th century began to not only to uncover predictive descriptions of how the lights in the skies above would move over time, but of the mechanism behind it. Culminating in 1687 with Newton’s theory of universal gravitation and the publication of his Principia, we had moved into a new, scientific era of prediction.
Image credit: © Andrew Dunn, 5 November 2004.
No longer would prediction be some sort of bet or interpretation, but rather what would physically happen in the future — from gravitation on Earth to the return of comets to a series of colliding particles — was all of a sudden completely determined. All you had to do was know the position and motion of each particle in the system, even if your system is the entire Universe, and the future of everything could be completely known with arbitrary accuracy, so long as you had arbitrarily large calculational power.
In the 19th century, electricity and magnetism became well understood as well, culminating in 1865 (150 years ago, exactly) with the formulation of Maxwell’s equations. This, too, was completely deterministic. So long as you knew the number of particles, the charge and mass of each particle, how they interacted and what their initial velocities were, you could predict the behavior of everything from the largest scales down to the smallest, with no uncertainty at all.
This was the Universe that both Einstein and Schrödinger were born into, and the one that both of them grew up investigating.
Image credit: Benjamin Couprie, Institut International de Physique de Solvay, of the 1927 Solvay Conference.
They each followed their own, unique path, both educationally — with Einstein developing his physical intuition but faltering at the math he deemed mostly useless, while Schrödinger became a model student, willing to fall in line with the academic fads and fashion of the day — and personally, in ways most of us never knew. Academically, Einstein made a huge number of contributions at a relatively young age, including to Brownian motion, discovering special relativity, the photoelectric effect and deriving E = mc^2 all in his 20s, while Schrödinger’s accomplishments were minor and modest as a relatively young scientist. Einstein went on to develop General Relativity in his 30s and then, in his 40s, to uncover the statistical properties of gases of identical particles. It was only with this last contribution — and some inspiration from Einstein himself — that Schrödinger made his great breakthrough: the development of the Schrödinger equation, at the relatively late age of 38, describing the probability density of particles, their energies, and their time evolution at a quantum mechanical level.
Image credit: retrieved via http://astarmathsandphysics.com/university-physics-notes/quantum-mechanics/university-physics-notes-features-of-solutions-of-the-schrodinger-equation-for-the-hydrogen-atom-html-m90be9c.gif.
The story of two great physicists, and how they arrived at this point (and the personal struggles, influences, victories and… questionable choices they made along the way) is the subject of half of Paul Halpern’s new book: Einstein’s Dice and Schrödinger’s Cat: How Two Great Minds Battled Quantum Randomness to Create a Unified Theory of Physics. Along the way, you’ll run into a huge number of names and discoveries you’ll likely recognize (Rutherford, Ehrenfest, Schwinger, Heisenberg, Feynman, Minkowski and more), and other names and characters even professional physicists are mostly unfamiliar with.
The other half — perhaps the more philosophically interesting half — of the book focuses on the conviction that both men had (Einstein steadfast in his conviction, Schrödinger somewhat of an oscillatory enigma) that the Universe must, at some fundamental level, be the deterministic system that Newton had originally envisioned.
It was their correspondence and friendship that led to the thought experiment of Schrödinger’s cat, and to Schrödinger himself pursuing a unified theory of relativity and electromagnetism — hopefully along with the nuclear force — that would leave no room for indeterminism. The idea that a cat could simultaneously be dead-and-alive was simply abhorrent to them both.
Image credit: Wikimedia Commons user Dhatfield.
The story is masterfully told, with Paul displaying a clear reverence and respect for his subjects, even when we — the reader — might rush in judgmentally with damnation. Schrödinger, in particular, comes across as extremely cowardly and insecure, reminiscent (for the younger readers among you) of Shou Tucker, the sewing-life alchemist, who was willing to sacrifice everything he held dear for a stab at the greatest glory imaginable. The consequences, whatever they were, were a small price to pay.
Image credit: the Fullmetal Alchemist: Brotherhood anime, via Photobucket / ShawnMerrow’s Bucket.
In the end, no amount of evidence could dissuade either Schrödinger or Einstein from their ideological conclusion of how the world must be, despite finding themselves and their efforts thwarted over and over again no matter what avenue they took.
It’s easy to be reminded of the old saying about physics advancing one funeral at a time, as the old guard doesn’t seem to be able to let go of their ideas about how the world ought to work in favor of how it actually does work, as shown by the evidence. Even Einstein’s collaboration with Pauli, where they worked out explicitly why and on what grounds their unification schemes were bound to fail, didn’t dissuade either Einstein or Schrödinger from continuing that same avenue of exploration.
If you’re at all interested in the idea that, “perhaps God doesn’t play dice with the Universe,” you really should check this book out, and if you’re like me, you’ll wind up wanting, at various points, to jump into the pages and shake one (or both) men back-and-forth, pleading with them to learn the invaluable lesson that the Universe is trying to teach them. If you’re a history buff, you’ll want to take notes on all the discoveries and names simply thrown out in passing, as even the most well-read among you will find things you didn’t know; I myself didn’t realize that the “Klein” of Klein bottles (Felix Klein) and the Klein of Kaluza-Klein theory (Oskar Klein) were two different people!
Image credit: Konstantin Weixelbaum and Ilkay Sakalli, via http://bridgesmathart.org/bridges-2011/2011-short-movie-festival/.
As it is, Marcelo Gleiser sums up the book very well, stating,
“We have seen books that celebrate Einstein and Schrödinger as two of the greatest scientists of all time. With clarity and diligence, Halpern does something different: he explores how intellectual curiosity and vanity get enmeshed with power struggles and the media to bring out the worst in good-willing people, especially when the stakes are as high as the creation of a God-like ‘theory of everything.’”
While I don't agree with everything Paul writes, in particular his seeming admiration for the cases where each man happened to be right about something then unknown for some very wrong reasons (like Schrödinger's anticipation of dark energy), this is a remarkably informative read that will certainly elicit a strong opinion from most readers about the relative merits and demerits of these two lives. In Paul Halpern's book about Einstein and Schrödinger, we get an inside look at two very different, brilliant minds struggling against a problem that no amount of brilliance will get you out of: pursuing a theory of physics that doesn't describe the Universe we live in. It makes you wonder how many of the "best" theoretical ideas that lack evidence at present:
• supersymmetry,
• extra dimensions,
• grand unification,
• string theory,
will turn out to be completely wrong. In addition, it makes you wonder how many — if any — of its proponents will be willing to abandon these ideas, or if, like Einstein and Schrödinger, they’ll struggle on trying to catch a glimpse of the promised land they’ll never reach, like Moses towards Canaan, until their dying day.
|
097f2486785c6990 |
Classical probability does not apply to quantum systems (causal inference edition)
James Robins, Tyler VanderWeele, and Richard Gill write:
Neyman introduced a formal mathematical theory of counterfactual causation that now has become standard language in many quantitative disciplines, but not in physics. We use results on causal interaction and interference between treatments (derived under the Neyman theory) to give a simple new proof of a well-known result in quantum physics, namely, Bell's inequality.
Now the predictions of quantum mechanics and the results of experiment both violate Bell’s inequality. In the remainder of the talk, we review the implications for a counterfactual theory of causation. Assuming with Einstein that faster than light (supraluminal) communication is not possible, one can view the Neyman theory of counterfactuals as falsified by experiment. . . .
Is it safe for a quantitative discipline to rely on a counterfactual approach to causation, when our best confirmed physical theory falsifies their existence?
I haven’t seen the talk, but based on the above abstract, I think Robins et al. are correct. The problem is not special to counterfactual analysis; it’s with conditional probability more generally. If you recall your college physics, you’ll realize that the results of the two-slit experiment violate the laws of joint probability, as we discussed a few years ago here and here.
Given that classical probability theory (that is, the equation P(A&B)=P(A|B)P(B)) does not fit quantum reality, it makes sense to me that the Neyman-Rubin model of causation, which in practice is always applied with probabilistic models, will not work in the quantum realm. If you try to imagine applying potential-outcomes notation to the two-slit experiment, you'll see that it just won't work.
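To make the interference term concrete, here is a small numerical sketch (mine, not from the post; the wavenumber, slit spacing, and screen distance are invented illustrative values) comparing the quantum rule "add the amplitudes, then square" with the classical mixture p(y) = p(y|slit 1)p(slit 1) + p(y|slit 2)p(slit 2):

    import numpy as np

    # Toy far-field two-slit calculation; all numbers are purely illustrative.
    k, d, L = 2 * np.pi, 5.0, 1000.0          # wavenumber, slit spacing, screen distance
    y = np.linspace(-200.0, 200.0, 2001)      # positions on the screen

    def amplitude(y, slit_offset):
        """Unnormalized spherical-wave amplitude from one slit at the screen."""
        r = np.sqrt(L**2 + (y - slit_offset)**2)
        return np.exp(1j * k * r) / r

    a1 = amplitude(y, +d / 2)                 # amplitude for the path through slit 1
    a2 = amplitude(y, -d / 2)                 # amplitude for the path through slit 2

    p_both = np.abs(a1 + a2)**2                           # add amplitudes, then square
    p_mix = 0.5 * np.abs(a1)**2 + 0.5 * np.abs(a2)**2     # naive mixture of one-slit patterns
    cross = 2 * np.real(a1 * np.conj(a2))                 # interference term; oscillates in sign

    print(np.allclose(p_both, 2 * p_mix + cross))         # True: the discrepancy is exactly the cross term

The cross term is what the phases buy you; whether one reads its presence as "the laws of joint probability are violated" or as "the naive latent-variable model of which slit the photon went through is wrong" is exactly what the comment thread below argues about.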
Is this relevant for macroscopic statistics? I don’t know. Here are my thoughts (with Mike Betancourt) on the matter.
I think it’s a fascinating topic.
1. Rahul says:
“Is this relevant for macroscopic statistics?”
In my opinion, the answer is a strong, emphatic, NO! All attempts to convince me otherwise have so far verged on crack-pottery or a misreading of the nuances of QM itself.
The especially wacky (and, IMHO, flawed) ideas of applying QM to macroscopic phenomena come from people in the social sciences.
2. Alexandre says:
There is a quantum measure theory (an extension to the mathematical discipline called “measure theory”), that goes as follows:
If M is a quantum measure and Omega is the universe set then:
1. M(Empty) = 0
2. M(Omega) = 1
3. For any disjoint sets (measurable in the quantum sense) A, B and C: M(A U B U C) = M(A U B) + M(B U C) + M(A U C) – M(A) – M(B) – M(C)
Notice that, if A and B are disjoint sets then, in some quantum experiments, M(A U B) cannot always be obtained from the measures of the isolated pieces A and B, as is usually assumed in classical measure theory. In these cases, we must compute a specific measure for the set (A U B). Naturally, if M(A U B) = M(A) + M(B) for all disjoint measurable sets A and B, then the usual probability measure emerges, but this is not the case in quantum experiments. Axiom 3 is called grade-2 additivity.
There is a connection between M and the wave function. For more on this, google it: "quantum measure theory".
Alexandre Patriota
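As a quick check of axiom 3, suppose (in the style of Sorkin's quantum measure theory, an assumption not spelled out in the comment) that each elementary outcome carries a complex amplitude and that M of a union of disjoint events is the squared modulus of the summed amplitudes. Grade-2 additivity then holds identically, while ordinary additivity generally fails:

    import numpy as np

    rng = np.random.default_rng(0)
    a, b, c = rng.normal(size=3) + 1j * rng.normal(size=3)   # amplitudes of disjoint events A, B, C

    def M(*amps):
        """Quantum measure of a union of disjoint events: |sum of amplitudes|^2."""
        return abs(sum(amps))**2

    lhs = M(a, b, c)
    rhs = M(a, b) + M(b, c) + M(a, c) - M(a) - M(b) - M(c)
    print(np.isclose(lhs, rhs))               # True: grade-2 additivity
    print(np.isclose(M(a, b), M(a) + M(b)))   # typically False: ordinary additivity fails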
3. Entsophy says:
[…] brings out the silliness in smart people like Quantum Mechanics; a subject I always associate with … R. A. Fisher. I confess to liking Fisher more than […]
4. konrad says:
“the results of the two-slit experiment violate the laws of joint probability”
This makes no sense at all. The result of a _physical_ experiment cannot violate a _mathematical_ law. It can only show that a particular _model_ provides a poor description of reality. Yet you repeatedly say that it is probability theory (rather than the model, e.g. by using an inappropriate choice of state space) that is at fault:
“If classical probability theory (which we use all the time in poli sci, econ, psychometrics, astronomy, etc) needs to be generalized to apply to quantum mechanics” (in one of the linked posts).
There is a world of difference between needing to discard a poor model (something we do all the time) and needing to generalise probability theory itself (which is not on the cards here).
• Andrew says:
Sure, a physical experiment can violate a mathematical law. The classic example is, if in a universe with closed curvature, you construct a large enough triangle, its angles will not add up to 180 degrees. Another classic example is that, for various particles, Boltzmann statistics do not apply, instead you have to use Fermi-Dirac or Bose-Einstein statistics. Boltzmann statistics is a mathematical probability model that does not apply in these settings. Another example is, in the two-slit experiment, p(A) does not equal the sum over B of p(A|B)p(B). In all these cases, you have a mathematical model that works (or approximately works) in some areas of application but not others. The math is not wrong but it does not apply to all settings.
• Cedric says:
I agree with Konrad. You write that QM violates the laws of probability. But probability is extended logic. Would you be comfortable if a phenomenon were to "violate the laws of logic"?
Furthermore, Bell’s inequality only rules out *local* hidden variable theories. Bell himself famously wrote Against Measurement:
• Andrew says:
Yup. Euclidean geometry is extended logic too, but that doesn’t mean that the angles inside a real, physical triangle have to add up to exactly 180 degrees. Similarly, in real life, it’s not necessarily true that p(A) equals the sum over B of p(A|B)p(B). You either have to abandon the superposition of probabilities (instead adding complex numbers that have phases, just as we learned in college physics) or you have to restrict the use of joint probabilities.
• Tim Maudlin says:
So here is a simple point that will help clear some things up. Euclidean geometry is not “extended logic” and the theorems of Euclidean geometry (i.e. the logical consequences of its postulates) are not theorems of logic or logical truths. The theorems of Euclidean geometry, such as that about the interior angles of a triangle, are just what follows from various postulates about the geometrical structure of a space, not implications of logical principles alone. If the interior angles of a physical triangle do not add up to two right angles, that does not and cannot show that there is anything wrong with logic: after all it is by logic that one derives this consequence from the postulates. It just shows that physical space does not obey the postulates, i.e. space is not Euclidean.
You really ought also to stop a moment and reflect on the claim about “p(A) equals the sum over B of p(A|B)p(B)” that you keep repeating. That only holds if the set of B’s constitute a mutually exclusive and jointly exhaustive set of ways that A can occur. Try figuring out what the relevant set of B’s are supposed to be for the case in hand. In fact, this principle is not violated, just as non-Euclidean geometry does not violate any principle of logic.
• Cédric says:
What do you think of Cox’s theorem?
I feel very strongly that the laws of logic and probabilities are true *and applicable* in all conceivable universes, even those without space-time. They will be found by any smart creature living therein, and used to reason under uncertainty. This is not necessarily true of geometry.
A model is a set of assumptions. “p(A) equals the sum over B of p(A|B)p(B)” is not a model any more than “2 + 2 = 4” or “A => ¬¬A” is a model. They are true facts (tautologies) in all conceivable universes. Would you argue that “2 + 2 = 4” is not necessarily true?
• Andrew says:
Not so many years ago, people thought Euclidean geometry was mathematical truth.
The expression “p(A) equals the sum over B of p(A|B)p(B)” applies in some settings but not in others. In classical mechanics with uncertainty (i.e., latent variables) it works just fine. In quantum mechanics, though, you can’t take “B” (the slit indicator in the 2-slit experiment) as a classical latent variable and average over it. You have to either expand your notation or use complex wave functions.
Complex wave functions are a generalization of classical probability. And quantum mechanics is famously counterintuitive.
• Cédric says:
I disagree, but Michael is right that your viewpoint is that of >90% of quantum physicists out there. It’s probably not a coincidence that so many Bayesians disagree with it. Thank you for the discussion.
Final question: suppose you’re back in time as a physics undergrad, and your advisor hands you a box with 1 gold atom. You’re going to measure its position, but before doing that, you want to make a prediction – i.e. compute its expected position, say. Unfortunately, your advisor forgot to tell you which of the three gold isotopes A, B, C it is, and that is important information. Do you think that the gold atom is in a quantum superposition of the three states? Would you use this (non-quantum) formula to compute your expectation of the position
E[x] = E[x|A]*P(A) + E[x|B]*P(B) + E[x|C]*P(C)
with a reasonable prior (taken from a table perhaps) for each of the three P(A), P(B), P(C)?
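For what it's worth, the non-quantum calculation Cédric describes is just the law of total expectation; a minimal sketch (the isotope labels and every number below are invented purely for illustration):

    # Law of total expectation over ignorance of the isotope; all numbers are made up.
    priors = {"A": 0.5, "B": 0.3, "C": 0.2}         # P(A), P(B), P(C), e.g. from a table
    cond_mean = {"A": 1.2, "B": 0.9, "C": 1.5}      # E[x | isotope], from the physics of each case

    expected_x = sum(priors[i] * cond_mean[i] for i in priors)
    print(expected_x)    # E[x] = E[x|A]P(A) + E[x|B]P(B) + E[x|C]P(C)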
• konrad says:
“Complex wave functions are a generalization of classical probability” – another bizarre claim. Complex wave functions represent system states; in what sense can a system state be thought of as a generalization of probability? (Feel free to use “probability distribution” to refer to either of its usual meanings – an information state or an empirical property (a frequency distribution) of a system – or some new meaning; just tell us which you are using.)
• Andrew says:
In the words of Wikipedia, “In quantum mechanics, a probability amplitude is a complex number whose modulus squared represents a probability or probability density.” Classical probabilities are real numbers. They superimpose, and when you add positive probabilities you can’t get zero. In contrast, quantum probability amplitudes have phases, and you can superimpose them and get zero probabilities, as in the two-slit experiment. In the macroscopic world (the positions and momenta of “billiard balls” and the like), the phase information can be ignored and one can work with classical probability theory, no need for complex amplitudes.
• konrad says:
Interesting. So your reasoning is that, if a mathematical structure can be constrained in such a way that the constrained version does not violate the Kolmogorov axioms, then the structure in question is a generalisation of probability. Regardless of what it actually denotes.
Personally, I prefer to start with thinking about what the concept of probability denotes (for most Bayesians, an information state; for most frequentists, the frequencies of outcomes in a repeatable experiment) and looking for an extension that denotes the same thing. I wouldn’t call something an extension of probability just because I can calculate probabilities as a (many-to-one) function of it.
• phayes says:
Konrad, are you saying that probability theory should not be regarded as a special case of quantum theory just because it’s commutative? ;-)
• konrad says:
No, I'm saying that probability theory deals with how the conclusions we can draw change as a function of available information, whereas quantum theory deals with how a physical system evolves in space and time. Apples and oranges.
• Bill Jefferys says:
To make Konrad’s point a little more explicit, the probability amplitudes of quantum theory are just devices used to calculate what the probabilities of certain events are. But these amplitudes depend on the experimental setup, e.g., one slit or two, where the slits are, whether the fact of a particle going through one or another slit is observed. But you will notice that even in this case, the amplitudes depend on how the experiment is set up, so the probabilities that are computed from them also depend on how the experiment is set up.
This is the point of my comments about how you have to condition on which experiment is being performed, just as Leslie Ballentine wrote in the paper I cited.
I mentioned also Ed Jaynes' comment that many "paradoxes" in probability theory can be resolved by conditioning on all relevant prior information. He was urging people to think about what prior information is relevant to the problem they are considering, and actually to condition on that information EXPLICITLY (that is, to write E1, E2, … on the right-hand side of the conditioning bar) so that we see, explicitly, what we are talking about.
• phayes says:
If you don’t make a distinction between quantum theory and quantum mechanics then it’s apples and oranges, yes, but I don’t think that’s a good idea. As Streater says (in that paper I linked to):
[35] "… the interpretation of microscopic measurements must be done in classical … instrument faithfully measures an atomic observable, then the numbers indicated by …"
Apples and Cox’s Orange Pippins.
@Bill Jefferys
You (and Jaynes and Ballentine) are clearly correct about the conditioning business and the interpretation of the double slit experiment and the most important thing about it as far as I’m concerned is that no “psi-ontology” is needed to see that it’s correct. [ ] ;-)
• Konrad,
Yes, we can make the argument that the use of classical probability to model quantum systems is a poor choice of model and should be discarded. Yes, that has no effect on classical probability as an axiomatic mathematical entity.
But what it does say is that we need a better probability theory (perhaps, a generalized one) that relaxes the classical axioms and does model quantum systems well. And what’s interesting about this approach is that a theory relaxing certain axioms might have utility in modeling very complicated macroscopic systems (incorporating certain unknown biases or interactions, for example).
• Walt says:
We don’t need a new probability theory, because people have already invented about a billion different formalisms for this. Quantum mechanics isn’t exactly new.
• Sure — I was referring to “new” as in “new to people considering just classical probability”. There are lots of extensions/generalizations out there and there may be some utility. It’s an interesting applied question — a very different theoretical question.
• konrad says:
There is no mathematical law stating that the angles of triangles add up to 180 degrees in general, only a law stating that this happens in a Euclidean geometry. A curved universe does not violate mathematical law; it only violates the (poor) modelling assumption that its geometry is Euclidean. Now in this example (if Euclidean geometry is all you have to start off with), the problematic modelling assumption is an axiom of the mathematical framework so relaxing it requires generalising the mathematical framework. To make the same argument in the case of QT, you need to point at an axiom of PT (or a consequence of its axioms and only its axioms) that is inconsistent with experiment. It is not enough to show inconsistency in the context of a whole bunch of modelling assumptions that are extraneous to PT.
“Boltzmann statistics is a mathematical probability model that does not apply”. Exactly – all of these are cases where a poor model is falsified by experiment. Don’t blame probability theory.
It does not say that, because there are assumptions besides the PT axioms in play. Such as the assumption (discarded by the Copenhagen interpretation) that the position of a photon is well-defined and unique at all times.
• Andrew says:
Consider your last sentence. It is fundamental to probability theory that events can be defined conditional on other events. Hence notation such as p(x,y), p(x|y), p(y|x). The core of classical probability is that the definitions of “x” and “y” don’t depend on how other variables in the system are measured. Hence the problem with applying probability theory to the two-slit experiment etc.
This is not news. The mathematics of probability amplitudes (wave mechanics) is different from the mathematics of classical phase-less probability.
• konrad says:
Andrew, I assume you are defining your x and y as in the first of your linked posts. That is, in experiments 1 to 4 of that post y is the place on the screen that lights up (this is measured and hence definable in all four experiments), and in experiment 4 of that post x is the slit at which the photon is observed. Importantly, x is undefined (and not meaningfully definable) in experiment 3 where the setup does not observe the photon going through a slit. In a response to Tim Maudlin’s comment below you use the variable x in what appears to be a reference to experiment 3, where it is not defined.
Please clarify: are you claiming that PT is violated in experiment 4, and if so, how? Are you claiming that PT is violated in experiment 3? If so, what is the second variable you have in mind and how is PT violated? Are you claiming that PT is not violated in either experiment separately but in the combination of the two experiments? If so, what is the link between the experiments and in what way can they be combined?
• konrad says:
ps Some potential confusion can be avoided if one avoids overloading notation. It may be helpful to use the symbols y3, y4 and x4 for the three relevant variables defined thus far, and x3 should you choose to define such a variable in experiment 3. This could help avoid pitfalls such as assuming that p(y3|x3) is known or estimable when in fact only p(y4|x4) is known.
5. Tim Maudlin says:
The two slit experiment does not violate any laws of probability. The phenomenon, in the first place, is accurately predicted by the deBroglie/Bohm theory, which uses a deterministic dynamics and fixed probability distribution over initial states, and nothing but classical probability theory. The argument that there is a problem with classical probability, which can be found in Feynman, is just an error. Given the probability of some outcome with slit A open and slit B closed, and the probability of the same outcome with slit B open and slit A closed, probability theory alone has exactly zero implications about the probability for the outcome with both slits open. How could it?
• Andrew says:
I discuss this in my linked blog post. But, in brief, the intuitive application of probability theory to the 2-slit experiment is that, if y is the position of the photon and x is the slit that the photon goes through, that p(y) = p(y|x=1)p(x=1) + p(y|x=2)p(x=2). But this is not true. As we all know, the superposition works not with the probabilities but with the probability amplitudes. Classical probabilities don’t have phases, hence you can just superimpose them via the familiar law of total probability. Quantum probabilities work differently.
• Tim Maudlin says:
I have no idea what the "intuitive" application of probability theory is supposed to mean. Probability theory is a mathematical theory and, as I said, there are perfectly well-defined and exact physical theories that use "classical" probability to make statistical predictions and return the exact predictions of quantum mechanics. So this is a decisive counterexample to the claim that the 2-slit phenomena are somehow incompatible with classical probability theory.
If by “intuitive” you mean the judgment that whether or not slit A is open can have no influence on what a photon that goes through slit B does, that is not a principle of probability theory! It is a bit of naive physics, I suppose. The 2-slit experiment refutes this naive physics, but does not, and cannot conflict with probability theory.
The point is that classical probability does not describe quantum statistics — entanglement (or equivalently its consequences on conditional probabilities) is inconsistent with the Kolmogorov axioms. There's no way around that.
• konrad says:
No, it is inconsistent with a particular representation of the state space of a particle. Nothing to do with probability theory.
• Tim Maudlin says:
Once again, there exist well-defined theories (DeBroglie/Bohm and the GRW collapse theories, for example) that make all of these predictions and use classical probability theory in a perfectly normal way. No Kolmogorov axioms are violated. That is just a clear mathematical fact about these theories. To say "there is no way around that" is not true: several ways around that exist.
In the deBroglie/Bohm picture, in addition, every particle goes through exactly one slit. The particle trajectory, however, is influenced by the state of the other slit via its dependence on the quantum state, which obeys the Schrödinger equation. In the GRW picture, it is not correct to say that a particle goes through exactly one slit: in a certain sense, it goes through both when both are open. But none of this is inconsistent with, or requires any modification to, classical probability theory. It is really not helpful to say that something is impossible when it has been done, and done in several different ways.
• Firstly, all the ontologists should step out of the room.
What Andrew was saying is that, if you maintain the standard assumptions of locality and unitarity in physics then quantum probabilities are inconsistent with classical probabilities. Yes, you can weaken the standard assumptions to restore the consistency, but now you’re changing the underlying system (incidentally, I have no problem with nonlocality but you’ll have to do better than BB — at least go with something where Poincare invariance is emergent).
But we’re not talking about changing the system. The point is that in complex modeling circumstances you may be making poor assumptions, but they’re too difficult to manipulate and you’re stuck with them. If one understands how to generalize the probability theory to achieve results equivalent to changing the underlying assumptions then you can build a more robust modeling tool appropriate for certain situations. And if this (i.e. the standard assumptions about probabilistic systems being broken) is true, then it should manifest as some violation of the standard assumptions, a la Stern-Gerlach or Bell.
• Tim Maudlin says:
“All the ontologists should step out of the room”? This is the response to a straightforward counterexample to a mathematical claim? You say something is impossible, and it is pointed out that the supposedly impossible thing has been done, and you ask the person who points this out to leave? Well, that's one way to deal with a counterexample….
Locality and unitarity are obviously, obviously, obviously not principles of classical probability theory, no matter how one understands that term. If all you mean to say is that no local theory can return the prediction of quantum theory: yes, that’s precisely what Bell proved. This has exactly nothing to do with probability theory.
It really does not help anyone’s understanding of anything to make false claims then ask people pointing out they are false to leave.
I was referring to the fact that Andrew's argument is an epistemological one. We're not talking about which theory is correct, only about which theories are consistent with the data and then what those theories might imply, especially relative to the standard assumptions of quantum mechanics. Any discussion past that is not appropriate for this forum as it has no relevance for applications, hence ontological arguments should not be considered further.
As has been previously noted in the comments, no one is questioning the validity of classical probability given the axioms. The question is the physical validity of the axioms. Assuming locality, those axioms are not consistent with quantum mechanics (as a special case of a fully relativistic quantum field theory) and if locality is not abandoned then the axioms have to be modified. The relevance for applied statistics is whether or not complex systems with insufficient constraints might necessitate similar constructions, providing a possible path towards new tools in systems that have been notoriously hard to model. Likely? Probably not, but it's not impossible and easy enough to check for with some well-designed experiments.
Forgoing locality is fine, but doesn't provide any potential for applied statistics and is consequently not relevant to this discussion. Not to mention that non-local theories have trouble becoming fully relativistic without providing an emergent basis for Poincare invariance, and rapidly become unappealing as physical, read useful and predictive, theories. But, again, this is a physics discussion and not relevant to the current thread (or forum, for that matter).
• Roger says:
I agree with Tim that there is no contradiction with classical probability theory. In quantum mechanics, a photon is not a classical particle, but also has wave properties. The photon history is not just the sum of two particle possibilities. It can also be a wave that passes thru both slits at once.
The double slit experiment does show that light has wave properties. Everyone has agreed to that since 1803. If you deny that light is a wave that can go thru both slits at once, then you can get a contradiction. That is another way of saying the same thing. But the contradiction is with the classical particle theory of light, and not with probability theory.
• Andrew says:
In the two-slit experiment, p(A) does not equal the sum over B of p(A|B)p(B). This violates probability theory, or at least the version where one can assign probabilities to measured outcomes, which is the version of probability that is used in applied statistics.
• Walt says:
Andrew, you're trying to fix the state space used to explain the experiment to force this conclusion. Does there exist a state space with a classical probability distribution that describes the outcome of the experiment? Yes there does. It's a much bigger state space where you have to include which experiments you actually did, etc., but it exists. For example, you can make the state space be the probability amplitudes themselves.
It’s probably less _useful_ to do it that way, but it’s not impossible.
• Andrew says:
Exactly. You can model the 2-slit experiment using classical probability but only by expanding the sample space in a way that is awkward enough that we only do it because we have to. If the 2-slit data looked like what you would get from classical probability (superposing probabilities rather than complex densities), there would be no problem.
• Roger says:
When you do that sum over B, you are not summing over all possibilities, but only over certain outcomes of measurements that are known to disrupt the system. In particular, you are assuming that the light is a photon that can be modeled as a classical particle that is confined to one slit. That assumption is false.
• Tim Maudlin says:
Nope. In the Bohm theory, electrons (for example) are particles and do go through exactly one slit. If you like, you can calculate the probabilities for outcomes conditional on going through slit A (with both open) and conditional on going through slit B (with both open), and the total probability for the outcome is just the sum, of course, because every particle does exactly one or the other. And you still get the right predictions. So this “diagnosis” is also demonstrably wrong.
6. Tim Maudlin says:
This is a really strange forum. The original post contains this sentence: "If you recall your college physics, you'll realize that the results of the two-slit experiment violate the laws of joint probability". That sentence is false. Its falsity is demonstrated by theories (whether you like them or not) that predict exactly these results using classical probability theory, in any sense of the term one might like to give. Then you are told not to mention this counterexample because "this is a physics discussion and not relevant to the current thread". Well, the current thread started with a false claim about a physical phenomenon.
If you don’t care that what is posted here is false (and, by some of the comments posted, some people reading the thread are very confused), why have a forum at all? If you want to dispute that the sentence is false, then show the proposed counterexample (which is, by the way, both local and unitary as well in this application, although that is not really relevant) isn’t really a counterexample.
As for applied statistics: well, the deBroglie/Bohm theory makes exactly the same predictions for observations as standard non-relativistic quantum theory, so if you think quantum mechanics is useful for applied statistics, that theory is exactly as useful. But again, it would be best to either just acknowledge the falsity of the false claim and try to correct it, or explain why the proposed counterexample isn’t one.
• Andrew says:
The 2-slit data indeed violate the laws of joint probability. I learned about this in physics class in college. In quantum mechanics, it is the complex functions that superimpose, not the probabilities. It is the application of the mathematics of wave mechanics to particles. The open question is whether it might make sense to apply wave mechanics to macroscopic measurements. For example, when we model voting behavior or test scores, we use the classical probability model in which conditional probabilities add up using the formula p(a) = sum_b p(a|b)p(b). But maybe there are settings where it would make sense to model p(a), p(b) etc. as complex functions with phases, in which case we would be using quantum probability models.
• Tim Maudlin says:
The 2-slit data do not violate any laws of probability. Whatever you think you learned in physics class in college, it was not this. That claim is just wrong. One proof that it is wrong is the existence of a theory using standard probability that predicts the data. You might just focus on that fact and then try to figure out where you have gotten confused. But if it will help: there is nothing in classical probability theory that says that the data with the experimental condition with both slits open has any mathematical relation at all to the data with only one slit open. If I am reading your post correctly, you seem to think this: if some outcome happens a certain proportion of the time PA with only slit A open, and a certain proportion of the time PB with only slit B open, then probability theory says with both slits open the probability must be PA + PB. But neither classical probability theory nor anything else has any such implication. It is trivial to think up phenomena in classical physics, or in everyday life, that violate that principle.
Try filling in the ‘a’ and ‘b’ in your formulas with actual conditions, not just letters. Maybe that will make the point clear to you. What is it you think “a” and “b” stand for here?
• Andrew says:
See my response to Walt above. I agree that one can place the two-slit experiment within a classical probability model, but this model has an extra level of complication owing to the Uncertainty Principle. The two-slit results are counterintuitive, and they are counterintuitive because we expect probabilities, not complex numbers whose squares are probabilities, to superimpose.
• Tim Maudlin says:
This has nothing to do with the uncertainty principle! The theory gives probabilities for outcomes conditional on experimental set-ups. One experimental set-up has only slit A open, one has only slit B open, one has both slits open. Here is a direct question: please answer this. Do you think that given the data with only slit A open and the data with only slit B open, classical probability theory has any implications at all about the data with both open? If so, what are the implications? If not, how can the data "violate classical probability"?
Andrew was very clearly assuming the standard interpretation of quantum mechanics adopted by 99% of physicists and, more relevant to the exact details of his comment, just about every college syllabus. Within that context, no false claims are being made.
Given your knowledge of the subject you clearly understood the assumptions Andrew was making but, instead of pointing out the different approaches to formalizing theories of quantum mechanics given the Bell results, you attack the result as wrong based on pedantic arguments of the assumed context and, unfortunately, derailed a possibly productive conversation. Moreover, the existence of alternative theories of quantum mechanics (which NO ONE is arguing do not exist) is completely irrelevant to the original point, as the use of generalized probability theories is only MOTIVATED by their appearance in orthodox quantum theory, not in any way DEPENDENT on it. Hence the inappropriateness for this discussion.
I do not believe there is anything else to say on the matter.
• Tim Maudlin says:
This is just ridiculous at this point. The 2-slit experiment has certain data. The claim I quoted is that the data is inconsistent with classical probability. That claim is just flatly false. The deBroglie/Bohm theory, in the non-relativistic domain, makes precisely the same predictions for the data as "standard" quantum mechanics, whatever you mean by that term. In fact, I have no idea at all what "assumptions Andrew was making". If you want to make them clear, make them clear. That would be productive. At some point, you seemed to think these additional assumptions are locality and unitarity. Well, in the 2-slit experiment, in the theory I mentioned, the quantum state always evolves unitarily (if that is what you want) and there is no violation of locality.
I have asked Andrew to fill in what he means by 'a' and 'b' in his post. That might be productive. If you want to spell out these supposed tacit "assumptions", that might be productive, and might lead to a statement that could even be true. But you seem instead just to want to shut down any clarification.
• Roger says:
Tim’s argument does not depend on assuming some esoteric interpretation of quantum mechanics. It is a physical fact that a 2-slit experiment is not the sum of 2 1-slit experiments. Andrew has converted this statement into a statement about probability, and concluded that the probability theory is wrong. No, his physical assumption is wrong.
If you are so sure that 99% of the physicists and textbooks are on your side, it might help if you cite them saying that the double-slit violates probability.
• Bill Jefferys says:
In my opinion, Andrew is wrong about this and Tim is correct. We discussed this earlier (Andrew’s first link, above: ). I pointed out there that Leslie Ballentine showed here: that the two-slit experiment is completely compatible with classical probability theory. In particular see my comments here:
and here:
The mistake is that the experiment with one slit open is simply not the same experiment as the one with two slits open. Why then would one analyze the different experiments as if they are the same? The answer is that you can't. You must condition on all the information at your disposal (a point that the physicist Ed Jaynes made repeatedly, and whose neglect he blamed for many of the apparent "paradoxes" that have been claimed in probability theory), and this includes whether one slit or two are open. Since you are conditioning on different things, you are no longer allowed to sum over everything, since the rules of probability theory don't allow you to sum over the conditions, only over the stuff to the left of the conditioning bar. In other words, for experiment 1 you have
P(A|E1) = sum_B P(A|B,E1) P(B|E1)
and for the second you have
P(A|E2) = sum_B P(A|B,E2) P(B|E2)
But you can't sum over E1 and E2 in this experiment since they are sitting over on the right of the conditioning bar and you aren't allowed to sum over those. You can of course sum over B separately in each of these but that doesn't lead to problems since the experiment is fixed. You could select E1 or E2 at random with probabilities P(E1) and P(E2), but that would just give you a mixture model and now summing will correctly tell you the result of making a measurement on a mixture model. But again, it's all classical probability theory.
I wasn’t able to convince Andrew at that time that the point of view he is taking here is not correct, and I probably won’t be able to convince him this time either. But I am in Tim’s camp here.
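A small numerical sketch of this conditioning point (mine, with an invented slit geometry, not Ballentine's actual calculation): each setup E is a different experiment with its own perfectly ordinary, normalized density p(y|E), and randomly selecting a setup gives only a classical mixture of those densities, never the two-slit pattern itself:

    import numpy as np

    # Each setup E gets its own ordinary density p(y|E); the geometry is invented.
    k, d, L = 2 * np.pi, 5.0, 1000.0
    y = np.linspace(-200.0, 200.0, 2001)
    dy = y[1] - y[0]
    amp = lambda off: np.exp(1j * k * np.sqrt(L**2 + (y - off)**2))

    def p_given_E(open_slits):
        a = sum(amp(off) for off in open_slits)   # add amplitudes for the open slits
        p = np.abs(a)**2
        return p / (p.sum() * dy)                 # normalize: a bona fide density over y

    p_E1 = p_given_E([+d / 2])                    # E1: only slit 1 open
    p_E2 = p_given_E([-d / 2])                    # E2: only slit 2 open
    p_E3 = p_given_E([+d / 2, -d / 2])            # E3: both slits open, a different experiment

    # Choosing the setup at random with P(E1) = P(E2) = 1/2 gives the classical mixture:
    p_mix = 0.5 * p_E1 + 0.5 * p_E2
    # p_mix differs from p_E3, and no probability axiom says they should agree,
    # because E3 is not a randomization over E1 and E2.
    print(round((p_E1 * dy).sum(), 6), round((p_E3 * dy).sum(), 6))   # both 1.0

Each p(y|E) obeys the Kolmogorov axioms once the setup E sits on the right of the conditioning bar; the disagreement in this thread is over whether that bookkeeping counts as "classical probability working fine" or as "having to expand the sample space because it didn't".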
You guys are kidding me, right? Locality, causality, unitarity, and consistency with the Bell results require that the states (STATES not observables) of any entangled system do not obey classical probability theory (but can be modeled by various generalizations of axiomatic measure theory). 99% of physicists will take this at face value — pick up any quantum text. You're welcome to cling to classical probability if you give up locality, causality, or unitarity, but those are not at all common choices in physics, especially given the difficulty they provide in formulating a proper quantum field theory.
The relevance of the double slit is the inconsistency of conditioning a system on a measurement at one of the slits with the conditioning of any observable given the resulting state. Cycles of coherence/decoherence do not respect the rules of classical probabilities, unless you take another perspective and make measurement a completely different process.
• Tim Maudlin says:
So we start with a clear, and clearly false, claim: the 2-slit phenomena are incompatible (in some sense) with classical probability theory. That simple, clear, false claim was to be understood as this claim: "given locality, causality, unitarity, and consistency with the Bell results, the states of any system do not obey classical probability theory". Well, this fancier claim is certainly not what was meant (2-slit obeys Bell's inequalities in any case), and is either empty or false. Empty, because locality (as Bell defined it) is incompatible with violations of Bell's inequality: that is just the content of his theorem. So on that reading, no theory can be local and be "consistent with Bell's results" if that means violating his inequalities, as quantum theory does. Or in any case, it can't give the predictions of quantum mechanics. If "locality" means no-signalling, then Bohm is again a counterexample. What 99% of physicists think is neither here nor there about anything.
It ought to cause some pause that Feynman himself makes exactly this erroneous claim about the 2-slit experiment in the Lectures. Feynman does not mention locality, unitarity, or causality. He makes a straight claim about the data, based on a bad argument—exactly the argument I was attributing to Andrew. So if Feynman screwed this up, it would not be odd if many other physicists do too.
So here’s another simple question for Andrew: is the argument you have in mind the same or different from Feynman’s? An answer will help. If it is the same, I am glad to work through that text line-by-line and point out the mistakes.
7. bxg says:
> Locality, causality, unitarity
I'm out of my depth here, but wonder – if we accept these – is it known to be sufficient to develop a new and consistent probability theory – or is that perhaps just the start of the fix-ups needed?
Does boolean logic survive? (Are there X, Y such that left implies X and right implies Y, and we accept left ∨ right, while X ∨ Y is not thereby implied?) And if logic changes, where does it stop? Might we end up having to distort all of mathematics and reason to make the assumptions tenable?
That would seem an outrageous situation (what would “true” and “false” even mean when the most fundamental rules are negotiable?) But maybe it’s actually known that we can stop the fix-ups to “classical X” at “X = probability theory” – is there any sense in which that is so?
• konrad says:
I hate to disappoint, but the structure of the argument is “Well-established axiomatic system X plus questionable assumption Y doesn’t work. We really like Y, so let’s toss out X.” The argument is equally sound (or unsound) regardless of whether X is Boolean logic or probability theory.
• bxg says:
You don't disappoint, but you don't answer my implicit question, so perhaps I was unclear. I am trying to ask something which may not make sense, but if it does it's specifically about quantum theory and not about argument structure.
It's fairly obvious IMO that "classical probability theory is violated" is false. But let's take the charitable interpretation; someone says they would like to work as though "Y" is so (since "99% of physicists" do) and is willing to accept other large compromises in order to work that way. This is just a mode of thought; we aren't saying Y is 'true' (what a weird argument from authority that would be!) but rather asking: can we work "as though" Y, given some clear and limited modifications from X to X'. Even if so it wouldn't be a refutation of X but just "if you really insist on thinking and working as though Y, here's something else (X) you must treat differently, and how (X')".
But if there is no coherent and bounded X', if the attempt at consistency just spirals out and out to take everything with it, there's not even a utility argument. Then there's no useful sense in which we can say "Ok, if you really want to reason as if Y (for whatever personal reason), you need to change your beliefs on [what goes here?]".
That's what I am asking. Whether you think it's useful or not, is there a bounded "X" we can toss out and replace by some "X'" if someone for their own idiosyncratic reasons thinks "Y" has primacy? Or does accepting Y inevitably bring down all of human reason once you follow it to its conclusion? This is a question about quantum theory, or the two-slit experiment in specific.
• Tim Maudlin says:
This is a question that would have to be answered on a case-by-case basis, and in this case one would have to determine whether it even makes sense to "revise" X and if so, whether the revision really accomplishes anything. (Since one has to use logic to answer these very questions, the whole idea of "revising logic" can obviously be tricky.) But I appreciate that you understand the situation, namely that none of this is forced by any data or phenomena. Once more Bell: "Why is the pilot wave picture ignored in textbooks? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show that vagueness, subjectivity, and indeterminism [one here adds: supposed revisions of probability theory or logic] are not forced on us by experimental facts, but by deliberate theoretical choice?"—On the impossible pilot wave.
8. Tim Maudlin says:
“When one forgets the role of the apparatus, as the word ‘measurement’ makes all too likely, one despairs of ordinary logic—hence ‘quantum logic’. When one remembers the role of the apparatus, ordinary logic is just fine.” John Bell, Against ‘measurement’. Exactly the same can be said for “probability theory’. Thinking that the phenomena predicted by quantum theory either require or even suggest a need to revise logic or probability theory is a mistake. No changes are required, and no proposed changes have ever helped with any problem. Bell is the best place to start to understand this. See also “On the Impossible pilot wave”.
9. For a discussion of the hydrodynamic pilot wave analogs see this discussion in PNAS:
10. Tim Maudlin says:
Bell is the first thing to read on foundational issues in quantum theory. If you don’t have it, get Speakable and Unspeakable in Quantum Mechanics: all the papers are there.
Yes, the "bouncing oil drop" experiments provide a (manifestly classical) analog system for the two-slit experiment, and make it easy to visualize what is going on. The key is (of course) that the quantum state with both slits open is different from what it is with only one slit open, even when the "particle" goes through one particular slit. When more than one particle is involved, though, you can no longer think of the quantum state as defined on physical space, since it is defined on configuration space. It is only then that violations of Bell's inequality can arise.
• If you can, would you care to enlighten me and others on more of the Bohm theory? I have heard and read a little about it before, but would like to get a better picture under a few simple experimental examples. For example, suppose that we have one particle of interest. It starts at point X(0) which we can not determine with absolute precision (in other words, it starts as best as we can determine very near X(0)).
We can define a wave function psi(x,t) based on the electric fields and things that we set up in our apparatus. And from that we get a path X(t) and a velocity V(t) which comes directly out of the “quantum field” defined by psi(x,t), in other words, the wave psi pilots the particle along the path X(t). Of course since we don’t know X(0) exactly, the actual path will not be known exactly either and if we repeat the experiments, we will get a distribution over the locations X(T_i) where T_i are the times at which we detect a particle in each experiment and X(T_i) is the location… I think I understand this more or less fine…
now you say the pilot wave has to be defined over the quantum state and not over physical space if you have more than one particle. Is that equivalent to saying that there are N pilot waves over space where N is the number of particles? Do these pilot waves interact with each other? Do they interact with particles other than their paired particle? For a simple example let’s try a Stern-Gerlach experiment…
We generate two particles which start very near X1(0) and X2(0) which are themselves very near each other in the center of the apparatus, and these particles have entangled spins. We send them along their way left and right in opposite directions through the apparatus. The one that goes left interacts with a screen and we see an upwards deflection. Suppose that we set up the experiment so that when this occurs we will ALWAYS see a downwards deflection on the screen in the other direction.
we’ll set up two pilot waves psi_1(s1,x,t) and psi_2(s2,x,t). Entanglement means that s1 = -s2 but we don’t know which one is +1 and which one is -1. To determine what’s going to happen in the experiment, we need to do the math of propagating the psi_1 and psi_2 waves and piloting the particles. particle 1 is piloted by psi_1 and particle 2 by psi_2, is that right?
Finally, because we don’t know the spins of the particles, only the fact that they’re opposite, we need to also pilot the waves via psi_1′ and psi_2′ which are the pilot waves for the case where the spins are interchanged. Once we observe the outcome on the left, we’ll immediately know which of the pilot waves psi_1 vs psi_1′ actually occurred, and hence we’ll know psi_2 vs psi_2′ and will be able to predict the result on the right.
If my interpretation is right, I think your point about “configuration space” has to do with the fact that at any given location, there is not *one* field defined by *one* pilot wave, but rather N fields defined by N pilot waves (in my example N=2) is that correct?
• Tim Maudlin says:
No, the point is exactly not to have two “pilot waves”, one for each particle. The “pilot wave” in Bohmian mechanics is just the wavefunction that anyone is already familiar with from any quantum mechanics textbook, and always evolves via the Schrodinger equation. That wavefunction is a complex-valued function over *configuration space*, not physical space. That is, each single point in configuration space corresponds to a complete configuration of the system, i.e. to specifying the exact location of all of the particles. In the case of a single particle, the configuration space is isomorphic to physical space, because you say where every particle is by giving just one location. If there are many particles, then you have to specify many positions. In general the dimension of the configuration space of an N-particle system is 3N, so the configuration space for, e.g., a mole of particles in a box is very,very high dimensional. It is important that the wavefunction for a system is given by a single function over 3N dimensional space rather than N functions over 3-space: that is what makes for entanglement. But again, with a single particle the configuration space is just 3 space. That’s why the bouncing oil-drop model is OK for a single-particle phenomenon, like 2-slit interference, but can be misleading when you get to multiple-particle systems.
Given the wavefunction for the entire system, one can analytically define a “conditional wavefunction” for subsystems. This is not a new object, but just a mathematical construct. There is an interesting story about these conditional wave functions in the theory (e.g. they undergo “collapse” even though the universal wave function never does), but that would be a bit complicated to go into.
• Got it, thanks that makes a lot of sense.
• Ok, thinking a little more about it, configuration space is N copies of 3D space right? So there’s one mathematical object, but at every point in space it has N tagged complex values. Each tag represents the effect the wave would have on the associated particle. Your “conditional wavefunction” is just slicing this wavefunction by tag ??
• Tim Maudlin says:
No, that’s not it. Suppose I have 4 particles in a 3-space. I can indicate the position of each particle with three real numbers, so I need 12 numbers to give the positions of all four, and each set of 12 numbers fixes the positions. So the configuration space is 12-dimensional. (Things are a little different if the particles are identical, but let’s ignore that now.) The wave function assigns a single complex number to each point in this space, not N complex numbers to each point in 3-space. This is what allows the wavefunction to carry information about correlations between the particles.
If you have actual particles with actual positions at all times, then the evolution of their configuration corresponds to the motion of a single point in configuration space. So specifying the dynamics for the collection of particles amounts to specifying a velocity field on configurations space. If you have a function on a space, the obvious way to use it to define a vector field, like a velocity field, is to take some sort of gradient. That is exactly what the “guidance equation” in Bohmian mechanics does: you basically take the imaginary part of the gradient of this (single) complex function on configuration space to be the relevant velocity field: given any configuration, this determines how the configuration changes with time. So that determines how all the particles move. That’s the whole theory. (Spin is treated not as a property of the particles with a value, but by using a spinorial wave function in calculating the velocity vector field.)
• I see, so it’s very similar in principle to the Lagrangian formulation of classical mechanics, where the whole state of N locations is a single point in 3N space, and the dynamics occur in that abstract space. If I understand it correctly that means the Schrodinger equation within the pilot wave theory is different from the Schrodinger equation within say Copenhagen interpretation, because the wave is defined on different spaces (3N dimensional vs 3 dimensional), and the gradients or laplacians are taken over those spaces.
In the 3D/Copenhagen type interpretation, we have no knowledge of the particle’s path, only its final position which is a random variable predicted by |psi|^2. But if the particles “really do” move on a huge configuration space, it’s hardly surprising that projecting 3N dimensions down to 3D throws away a lot of information….
Thanks for the background. I’ve always wanted to look into this area a little deeper, ever since intro QM classes back in… 2002 or something like that.
• Tim Maudlin says:
Yes, there is a similarity to a Hamiltonian formulation, but there is no difference between the quantum state in Bohmian mechanics and the quantum state in any other “interpretation”: they are all complex functions on configuration space, not on physical space. What is strange about the “Copenhagen” approach is that according to that theory, the “particles” do not always have definite positions, so there generally isn’t an actual precise configuration at all. In the Bohmian theory the “particles” are really particles, and always have positions, so there is always a definite configuration. It is the existence of a definite configuration that allows for the definition of a conditional wavefunction of a subsystem, so this definition is not available in a Copenhagen setting.
11. Corey says:
So here we have a certain state of affairs.
AG wants to describe it as “classical probability does not apply to quantum systems.” TM (and BJ presumably) wants to describe it as
My question: is there any actual disagreement about the actual state of affairs, or is the disagreement only about the words used to describe it?
(If the disagreement is only about the words, I have to say that TM’s phrasing seems distinctly less misleading to the uninitiated.)
• Andrew says:
As an applied statistician, I use the expression p(A) = sum_B p(A|B)p(B) all the time. And when we are taught probability, we’re taught to use this expression. However, it doesn’t work in quantum systems. In quantum systems the superposition occurs at the level of complex probability amplitudes, not the probabilities themselves. It seems to me that at the technical level, the discussants on this thread are disagreeing with me, because I keep mentioning complex probability amplitudes (which have phases, unlike classical probabilities) and the discussants don’t.
Alternatively, you can do as Bill Jeffreys does above and condition on the measurement, thus p(A|E1) if measurement is taken using method 1, p(A|E2) if measurement is taken using method 2, etc. That would be fine, but it’s not what we generally do in applied statistics (or, for that matter, in probability textbooks). In the classical (non-quantum) world, when you observe data B, you condition on B, that’s it. In the quantum world, you either need to move to complex amplitudes, or you need to condition not just on B but on the fact that B was measured. By doing this latter step, you can keep all the probabilities working ok, but at the cost of requiring many more probability statements, and at the cost of no longer being able to simply condition on and average over latent variables.
• Corey says:
I think some clarity and/or specificity is missing from the above claim. I could respond to the claim as stated by pointing out that if, say, E1 refers to a measuring device with Gaussian error of variance 1 and E2 refers to a measuring device with Gaussian error of variance 100, then we would indeed condition on that information when calculating marginal sampling distributions or posterior distributions. But this response seems to miss your point in some way that I can’t get clear in my head.
• Andrew says:
I agree that issues of measurement in classical statistics are not always trivial. Nonetheless, we are generally taught that the way to do conditioning is simply to consider the joint distribution and then count all the possibilities corresponding to the given data. With the exception of problems where the measurement error depends on the parameter of interest, or tricky examples such as the Monty Hall problem, we don’t usually think too much about measurement.
But in quantum mechanics, the issue is not just that measurement error can vary. The issue is that we can’t simply condition on outcomes and average over probabilities. Instead we have to use complex probability amplitudes. There’s nothing like this in classical probability. Classical probabilities do not have phases, they don’t cancel out, they don’t exhibit the wave behavior associated with quantum probabilities.
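A small numerical sketch of that contrast (the slit parameters and envelope below are purely illustrative, not taken from any real experiment): averaging the two single-slit probability densities misses the cross term that appears when the complex amplitudes are added first and only then squared.

```python
import numpy as np

# Toy two-slit screen pattern; all numbers are arbitrary illustrative choices.
x = np.linspace(-10, 10, 2001)          # position on the screen
k, d, sigma = 5.0, 1.0, 3.0             # wave number, half slit separation, envelope

def amplitude(x, source):
    """Complex amplitude at screen position x from one slit (toy model)."""
    return np.exp(-(x - source) ** 2 / (2 * sigma**2)) * np.exp(1j * k * np.abs(x - source))

psi1 = amplitude(x, -d)                 # only slit 1 open
psi2 = amplitude(x, +d)                 # only slit 2 open

p1 = np.abs(psi1) ** 2                  # single-slit intensity patterns
p2 = np.abs(psi2) ** 2

p_classical = 0.5 * (p1 + p2)               # mix the two probabilities (no phases)
p_quantum = 0.5 * np.abs(psi1 + psi2) ** 2  # superpose amplitudes, then square

interference = p_quantum - p_classical  # the cross term Re(conj(psi1) * psi2)
print("max |interference term|:", np.max(np.abs(interference)))
```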
• Andrew, the quantum probability phases thing is one interpretation of QM, but Tim has pointed to another in which such things don’t seem to happen. It’s maybe sufficient to have classical uncertainty on the initial state of the particle, and then for any realization of the initial state, QM gives you a deterministic location for the particle to be observed which depends on a pilot wave that propagates according to the Schrodinger PDE.
Now, the PDE for wave propagation can be connected to a different kind of probabilistic interpretation through the Feynman-Kac formulation of certain PDEs, and if you mish-mash the uncertainty in initial conditions + deterministic wave propagation together with Feynman-Kac propagation of the pilot wave you may be able to get a system where essentially you’ve said “we don’t know where the particle started, and we’re willing to throw away the deterministic path it took, so we might as well identify the physical particle with the virtual particle we’re using for the Feynman-Kac solution of the wave equation”.
Then because the wave itself has phase, and you’ve thrown away the deterministic path of the real particle, it looks to you like the particle has phase too and because it’s all embedded in probability theory for the diffusion implied in the Schrodinger equation, the whole thing looks like we’re talking about probability for the actual physical particle, when in fact we’re talking about probabilities for the virtual mathematical particles associated with the diffusive propagation of the PDE.
I admit to being pretty novice on the QM stuff, but I am somewhat knowledgeable about the classical connection between PDEs with diffusion and stochastic motion of virtual particles, and I could see how this all could get mish-mashed together in the minds of physicists, especially those who are more interested in physical outcomes of experiments than in foundational issues.
• Bill Jefferys says:
Andrew, you say:
Do you really mean this? Do you mean to say that, whatever the data are, it doesn’t matter how the experiment was constructed or how the data were obtained? Do you mean to say that it doesn’t matter if the data are the result of an RCT or if they were gathered from clinical results?
I don’t think you mean this.
If you make a difference between data gathered from an RCT and data gathered from clinical results, then you ARE conditioning on background data that is NO DIFFERENT from a physicist who distinguishes between a mixed experiment where one or the other slit is opened with probability 0.5, or another experiment where both slits are always open.
• Bill Jefferys says:
Let me add: Just because you don’t explicitly put the conditions of the experiment into the “conditions” on the right hand side of the ‘|’ bar, doesn’t mean that you aren’t conditioning on something else. Just because “that is not what we generally do…” doesn’t mean that you shouldn’t do it.
That is the point of Ed Jaynes’ remark that many of the “paradoxes” of probability theory arise from failure to condition on ALL the actual conditions involved. When you “do it” you may find that the results are not what you thought they were.
• Corey says:
I think a more charitable interpretation of what AG is claiming might be something like: our expectation is that things without brains or brain-analogues don’t display the Hawthorne effect, so it’s rather shocking that subatomic particles seem to.
• It’s totally natural to me to think that subatomic particles interact with different apparati in different ways. Two slits is simply a different experiment than 1 slit. The particles interact with it differently. Saying the particles “change behavior” would be like saying the laws of physics change depending on whether I’ve got one slit open vs two. The fact is that the laws of physics stay the same, but they imply different behaviors depending on whether you have one slit or two… I don’t see anything unintuitive about that.
What’s unintuitive is “spooky action at a distance” at least if you believe in locality. I’m pretty happy to throw out locality personally. I think it makes much more sense than throwing out “realism” (ie. the idea that the electrons are definite things that take some definite path)
• Also, it should be pointed out that in real experiments we are already conditioning on the electron/photon whatever hitting the detector. There are always some which will interact with the material forming the slits and be absorbed or otherwise modified in their path. In that sense, if you fire a single photon/electron into an apparatus it isn’t just going to light up your screen in one spot, it is sometimes going to do that, sometimes nothing will happen… In particular, if you do entangled experiments, sometimes you’ll detect one of the entangled particles and not the other. Sometimes you might detect one entangled particle and an unrelated particle. You’ll have to decide that the second particle wasn’t the entangled pair, it was just experimental noise… so real statistics will look funny compared to the idealized ones of thought experiments. Of course the better your apparatus the better your stats will look I suppose.
• Corey says:
I think people are forgetting what’s really remarkable about the two-slit experiment. It isn’t that two-slit experiment gives a different result than the sum of the two one-slit experiments. It’s that if you do the two-slit experiment with a detector measuring which slit the particle passes through (hard for photons, easier for electrons), you do get the sum of the two one-slit experiments.
• Tim Maudlin says:
But that would only be remarkable if you thought the detectors worked by magic, not by any actual physical interaction. It is not astounding that they don’t, and can’t. And as soon as you model the detectors in any physically reasonable way (i.e. as establishing a correlation between the particle state and the detector state), then simple physical analysis shows that the interference bands ought to go away. Simple physical analysis with normal probability theory. See quote from Bell about forgetting the apparatus. Putting in a detector changes the physical situation. No surprise it has an effect. What the effect is depends on the physics.
• My answer was going to look a lot like Tim’s answer. The detector changes the physical experiment too. A 2 slit experiment + detector has a different Hamiltonian etc than a 2 slit without detector.
• Corey says:
Tim and Daniel,
I don’t disagree. I’m saying that *AG’s claim* would make intuitive sense for scientists trained in macroscopic realms where the process of establishing correlations between the observed object and the detector has a negligible effect on the subsequent behavior of the observed object.
• Tim Maudlin says:
Ah, yes. That’s a helpful comment. So the thought that one ought to (or has to) tinker with probability theory itself arises from not noticing this.
And of course the really weird thing is not just that adding a detector at one slit changes the outcome, but that it does so even for the sub-ensemble where the detector doesn’t fire! That’s the key to the Elitzur-Vaidman bomb problem. But, as you say, even in this case the Hamiltonian changes. And this phenomenon has nothing to do with probability: the comment about the sub ensemble does not require any probabilistic concepts at all.
• Tim Maudlin says:
As far as I can tell, there is a substantive disagreement here. The original claim was that even the 2-slit experiment (which does not involve entanglement, or violations of Bell’s inequality, or anything) cannot be accounted for using classical probability theory. The Bell quote actually concerns the so-called “no hidden variables” “proofs” that go back to von Neumann, and those proofs do not even apply in the 2-slit case. There is just no question that the claim about the 2-slit data is incorrect, and also no question that the von Neumann “proof” cannot prove what people think it does: e.g. that no deterministic theory can recover the predictions of quantum theory. There are just straightforward counterexamples to such claims.
Note that “forgetting the role of the apparatus” when trying to physically account for some observed data is just a plain error, not a viable option! The apparatus is there, as a physical object. It has to be taken account of when trying to account for the data—the apparatus operates by physics, not magic. Bell’s point is that if you make this mistake, you are going to be in an incoherent situation, and so despair of logic itself.
• Andrew says:
In the context of applied statistics, the question that Mike Betancourt and I posed in our article is whether something is to be gained by modeling macroscopic phenomena using complex probability amplitudes instead of just real-number probabilities.
Regarding all the rest, I’d just like to thank you and the other discussants for commenting here. I much prefer direct back-and-forth to sniping on twitter etc.
• Tim Maudlin says:
Thanks for the comment. I really would like this to be helpful, and would be happy to try to go into as much detail as would be useful.
Your last comment suggests something, but maybe this does not get at the main point. What you are calling a “complex probability amplitude” is just what I would call a wavefunction or quantum state, and of course it plays a central role in the explanation of any phenomena using quantum theory. It is just that on some understandings, this item has nothing very directly to do with probability. This is perhaps most clear in a pilot-wave picture, where the direct role of the object in the theory is to provide a (deterministic!) dynamics for the particles. The probabilistic aspects of the theory arise then in the normal way: a probability distribution over possible initial states consistent with the experimental situation is carried by the dynamics over into a probability distribution over outcomes. All of that is just using standard probability theory. And the probabilities for the outcomes are exactly what are given by the standard quantum mechanical predictive algorithms. So no one is denying the central importance of a complex function in the theory, just the relation of that function to how probabilities are handled.
• konrad says:
Not sure if the question makes sense. The discussion (and all of modelling, more generally) is about how to describe reality – i.e which words and equations should be used to describe it.
• Corey says:
konrad, I generally try to ask that genre of question when it appears people are talking past one another. I do my best to make the question make sense, but I offer no guarantees.
• konrad says:
Well it is abundantly clear that people are talking past each other in this thread, and I didn’t mean to imply that it was a bad question. My point was that, yes, the disagreement _is_ just about how to describe reality, but that this constitutes a substantive disagreement (because in modelling we really care about how to describe reality).
Central to the disagreement (I suspect) is the usual point of difference, namely whether probabilities are defined as descriptors of information states (e.g. probability theory as an extension of logic) or of empirical reality. Andrew has previously stated that he supports the latter rather than the former interpretation. It feels like it _might_ be possible to argue that the latter interpretation could be extended to refer directly to the state of a system, but I don’t see how such an argument could be made. Certainly it would be a radical reinterpretation even of the frequentist (i.e. empirical) notion of what the word “probability” actually means.
12. David Lovis-McMahon says:
Having not seen the talk, I was under the impression that the speaker (Robins) was leveraging Bell’s theorem precisely because it puts two things on the line for any quantum mechanical explanation of the world: localism and realism. One or the other (or perhaps both) has to be incorrect. The speaker clearly assumes that localism is true, “Assuming with Einstein that faster than light (supraluminal) communication is not possible…”. By virtue of that assumption, realism must be false.
As I understand realism, it is the requirement that reality have an “object permanence” such that the moon will still be there in the sky even when I’m not looking at it. Realism implies that while we cannot simultaneously observe what my fever would be had I [taken vs. not taken] the aspirin, we can speak meaningfully about the result under the different choices made.* More formally this means that the counterfactual “took Aspirin” is included with the factual “didn’t take Aspirin” in the statistical population of possible outcomes describing my fever. Finally, using the laws of probability, I could condition my way from the factual to make inferences about the counterfactual.
Rejecting realism is a pretty profound thing to do given that the entire mechanism of Neyman-Rubin causal modeling rests on this idea that we can speak meaningfully about the result under the counterfactual. The correctness of this seems to be contingent on assuming the objectivity of measurement and the corresponding counterfactuals.
Do I understand this correctly and are there going to be any articles by Robins, et al., coming out soon on the topic?
*I should note that I am making the claim that realism implies something about how the world works and not how or whether we can know how the world works.
• Roger says:
Rejecting realism would be pretty profound if it really proved that the moon is not there unless we look at it. They would have given a Nobel prize to anyone who could prove that.
13. Corey says:
I’m resurrecting this thread to link a paper which shows in excruciating detail exactly how AG and Mike Betancourt get this one wrong.
• Andrew says:
You perhaps won’t be surprised to hear that I am not at all convinced that we are wrong. I can leave it to Mike to weigh in himself, but, very briefly, let me say that, yes, I think that classical (i.e., Bayesian, Kolmogorovian, etc) probability theory can be used to model quantum-mechanical outcomes such as the 2-slit experiment, but at the cost of a level of complexity that would generally be considered unacceptable in applied statistics. That is the point of our paper: conditioning all models on the sequence of measurements is something that could be done, but is generally not done in probability modeling. (It’s similar to the problem of utilities in economics: yes, you could model utilities as functions of the sequence of steps that are taken to reach the outcome, but this is contrary to the spirit of the von Neumann–Morgenstern theory, in which utilities are taken as a function of outcomes alone.) A probability model that increases exponentially in complexity with each additional measurement is not a probability model in the usual sense.
• Corey says:
Andrew: “conditioning all models on the sequence of measurements” is neither here nor there. If you actually want to know how you’ve gone wrong, read the paper.
• Andrew says:
“Conditioning on the measurements” is definitely the issue in the 2-slit experiment! Also there’s the other issue of Fermi-Dirac and Bose-Einstein statistics, which don’t follow the rules of classical (Boltzmann) statistics. Again, it should be possible to shoehorn this all back into classical probability but at the cost of an awkward increased complexity in the model.
• Firstly, the paper is not new; a version has been on the arXiv for years. In fact, I was at one of the conferences where Philip first began presenting on the work. I know both Philip and Kevin, I respect them, and I’ve always liked their idea. But, as they note, in order to reconcile classical probability with quantum mechanics some physical assumptions have to be changed, and those changes have nontrivial consequences that put them at odds with the mainstream physics community.
As I stated numerous times above, there are many approaches to understanding quantum “weirdness”. The most common (by a large margin) preserve causality and locality at the expense of classical probability theory (there are important reasons for the popularity of this approach, in particular the ability to scale to quantum field theories, but that’s not relevant to this discussion), but there are many others. Some focus on maintaining classical probability theory while sacrificing locality, some sacrifice causality. At this point there are no experiments that can separate these theories, so it’s an ontological argument. In other words, one without an answer.*
Also as has been stated numerous times, our proposal of the possible utility of generalized probability theories is completely independent of their being relevant to any “true” model of physics. Regardless of their physical nature, their different properties might have use in modeling systems that don’t fall into the domain of classical probability (ill-defined state spaces, misbehaving measures, etc).
* Not that it’s not fun to argue those theories. My fellow physicists and I would often discuss these matters, often over beer and never too seriously. Never trust someone who takes quantum mechanics too seriously.**
** I’m already waiting for people to take this comment too seriously…
• Corey says:
Recall that the claim in the original post is “classical probability theory… does not fit quantum reality”. As you note, it is *physical* assumptions that are at the source of the apparent disagreement — and that’s all your various interlocutors were ever claiming in the original discussion!
As to the opinion of the mainstream physics community, well, you would know more about that than me, so I’ll take your word for it. But it does seem to me that the only physical assumption that needs to be changed to make classical probability *reconcilable* with quantum mechanics is the assumption that there is a fact of the matter about which slit the particle passes through when the experiment doesn’t measure it. I’d be surprised if the opinion of the mainstream physics community is that there really is such a fact.
14. Entsophy says:
[…] was reminded of this old post by Andrew Gelman about whether Quantum Mechanics requires a change in the axioms of probability […]
‘Quantum Mechanics’ as the Mechanics of the Time Region
Reciprocity, Volume XXIV, Number 1, Spring 1995, p. 1–9; Revised Feb. 1998
The preliminary results of a critical study of the Wave Mechanics carried out in the light of the knowledge of the Reciprocal System of theory have been reported earlier [1]. Some of its important findings are as follows. While the Wave Mechanics has been very successful mathematically, it contains some fundamental errors. The principal stumbling block has been the ignorance of the existence of the Time Region and its peculiar characteristics. The crucial points that need to be recognized are that the wave associated with a moving particle, in a system of atomic dimensions, exists in the equivalent space of the Time Region; and that switching from the particle view to the wave view is equal in significance to shifting from the standpoint of the three-dimensional spatial reference frame to that of the three-dimensional temporal reference frame that is germane to the Time Region. To imagine that even gross objects have a wave associated with them is a mistake: the question of the wave does not arise unless the phenomena concerned enter the Time Region.
One corollary is that the theorists’ assumption that the wave associated with the moving particle is spatially co-extensive with the particle is wrong since the former exists in the equivalent space, not in the extension space of the conventional spatial reference system. The Uncertainty Principle stems from the theorists’ practice of resorting to wave packets.
It has further been shown that the probability connotation of the wave function arises from the two facts that the wave is existent in the three-dimensional temporal manifold, and that locations in the three-dimensional temporal manifold are only randomly connected to locations in the three-dimensional spatial manifold. The non-local nature of the forces (motions) in the Time Region also follows from these facts.
Calculations based on the inter-regional ratios applicable confirm Larson’s assertion that the measured size of the atom is in the femtometer range and hence what is found from the scattering experiments is the size of the atom itself—not of a nucleus.
From the above study it became abundantly clear that the critics’ comments that the small-scale world is not intrinsically rational, and that the Quantum theory cannot be understood intuitively were wrongly founded. What was really missing was the knowledge of the existence and characteristics of the Time Region, the region inside the natural unit of space, where only motion in time is possible. Since our knowledge of the Reciprocal System helped straighten some of the conceptual kinks of the Wave Mechanics and has indicated that its original basis has been rightly (though unconsciously) founded, an attempt has been made to inquire into its mathematical aspects in order to see whether they are valid in the light of our understanding of the Reciprocal System. The results of this inquiry are reported in this article.
1. Where Do We Stand
Before proceeding further it would be desirable to take stock of the atomic situation from the point of view of the Reciprocal System.
Firstly, Larson [2] asserts that the atom is without parts, that it is a unit of compound motion, motion being the basic constituent of the physical universe. This means that both the nucleus and the so-called orbital electrons are non-existent.
Secondly, he argues that there is no electrical force either, involved in the atomic structure. This, therefore, leaves gravitation and the space-time progression as the only two motions (forces) that operate inside the Time Region with, of course, the appropriate modifications peculiar to the Time Region introduced into them.
Under these circumstances the question of a ‘nuclear’ force does not arise at all. But it is perfectly legitimate to inquire what forces (motions) are encountered by a particle as it approaches the vicinity of an atom, and indeed, as it enters the very atom itself. Equally important is to inquire into the mechanics of the converse process of the emission of a particle by the atom.
2. The Wave Equation
The most fundamental starting point for the mathematical treatment in the Quantum Mechanics is the wave equation. The wave equations in the quantum theory govern the wave functions associated with the particles, and correspond to Newton’s laws of classical mechanics. From our earlier study we have seen that changing from the particle picture to the wave picture is a legitimate strategy that needs to be adopted on entering the Time Region, as it is tantamount to shifting from the conventional three-dimensional spatial reference frame of the time-space region to the three-dimensional temporal reference frame of the Time Region. Therefore the next logical step is to examine how the governing equations of the wave phenomena have been arrived at, and see if they are in consonance with the Reciprocal System.
Since it is always possible to constitute a wave of any shape by superposing different sinusoidal waves of appropriate wavelengths and frequencies, we shall limit our discussion to these elementary sinusoidal waves. The relation between the wave number k and the wavelength λ on the one hand, and that between the angular frequency ω and frequency ν on the other, are as follows
k = 2π/λ; ω = 2πν
The wave speed u is given by
u = λ.ν = ω/k
The general functional forms of sinusoidal waves are
sin (kx ± ωt) ; cos (kx ± ωt)
and in complex exponential form (see Appendix I)
e^(i(kx ± ωt))
where the imaginary unit i is defined by i² = –1.
Complex functions involve a real part and an imaginary part. Since at this stage of our discussion the nature of the wave function of particles is yet unknown, there is no theoretical reason to exclude complex functions. Let us bear in mind that the criterion of judgment is what is possible in the Time Region, not what is possible in the time-space region. To be sure, observable quantities in the time-space region ought to be real. However, by virtue of the second power relation between corresponding quantities in the Time Region and the time-space region, the observable value of a Time Region quantity would still be real even if it were to be imaginary in the Time Region (e.g.: a quantity i.v in the Time Region would appear as (i.v)², that is, –v² in the outside region).
2.1 Radiation Waves
Let us derive the governing equation for the wave propagating at constant speed, like that of radiation. First we note the relation between the momentum p of the wave and the wave number k, and the energy E and its angular frequency ω,
p = ħk ; E = ħω
where ħ is Planck’s constant h divided by 2π.
From the energy–momentum relationship of the wave, p²c² = E², (c being the constant wave speed) we have
p² = E²/c²; ħ²k² = ħ²ω²/c²; k² = ω²/c²
Assuming the simplest wave form, that of a sine wave, we write the wave function in complex exponential form as
Ψ(x,t) = A.e^(i(kx–ωt))
where A is an arbitrary constant. For such a function,
∂Ψ/∂x = ik.Ψ and ∂Ψ/∂t = –iω.Ψ
That is, taking the derivative with respect to x is equivalent to multiplying by ik, and taking the derivative with respect to time t is equivalent to multiplying by –iω. Thus
∂²Ψ/∂x² = (ik)².Ψ = –k².Ψ and ∂²Ψ/∂t² = (–iω)².Ψ = –ω².Ψ
Substituting these in the last of Eq.(6) we obtain
∂²Ψ/∂x² = (1/c²) ∂²Ψ/∂t²
which is exactly the wave equation we are seeking (see Appendix II).
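As a quick symbolic check (a sketch using a computer algebra system, independent of the derivation above), the plane wave with ω = ck satisfies this equation identically:

```python
import sympy as sp

# Verify that Psi = A*exp(i*(k*x - w*t)) with w = c*k satisfies
#   d2Psi/dx2 = (1/c**2) * d2Psi/dt2.
x, t, k, c, A = sp.symbols("x t k c A", positive=True)
w = c * k
Psi = A * sp.exp(sp.I * (k * x - w * t))

lhs = sp.diff(Psi, x, 2)
rhs = sp.diff(Psi, t, 2) / c**2
print(sp.simplify(lhs - rhs))   # prints 0
```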
2.2 Matter Waves
At the instance of his mentor Peter Debye, Erwin Schrödinger made a detailed study of the wave hypothesis advocated in 1924 by de Broglie. Schrödinger noted that the energy–momentum relationship of a free particle (not acted on by forces) of mass m
p²/2m = E
leads to the wave number–angular frequency relation
ħ²k²/2m = ħω
From Eqs. (2) and (12) we see that the wave speed in this case is given by
u = ħk/2m
Therefore the speed of the matter waves is not constant like that of the radiation waves, but is a function of the wave number k. Eq. (12) could be rearranged as
–(ħ²/2m) (ik)² = iħ (–iω)
Multiplying both sides by Ψ, we can at once see from Eqs. (8) and (9) that
–(ħ²/2m) (∂²Ψ/∂x²) = iħ (∂Ψ/∂t)
which is the governing equation for the wave associated with the free particle that we are looking for. This is the Schrödinger equation for the free particle. It is the equation in the Time Region which corresponds to Newton’s first law of the time-space region.
In order to include interactions of the particles with the environment we note that the total energy of such a particle consists of the kinetic energy and the potential energy. The latter could be taken to be dependent only on position and represented by a potential energy function V(x). Thus for a conservative system we have the constant total energy E given by
p²/2m + V(x) = E
The corresponding wave number–frequency relation, associating frequency with the total energy, is
ħ²k²/2m + V = ħω
Adopting Eqs. (8) and (9) as before, we arrive at the Schrödinger wave equation with interaction present
–(ħ²/2m) (∂²Ψ/∂x²) + V(x)Ψ = iħ (∂Ψ/∂t)
This corresponds in the Time Region to Newton’s second law in the time-space region.
As can be seen from the foregoing derivations, nothing against the principles of the Reciprocal System has been introduced so far. Hence the Schrödinger equations can be admitted as legitimate governing principles for arriving at the possible wave functions of a hypothetical particle of mass m traversing the Time Region, with or without potential energy functions as the case may be. We may note in passing that often considerable mathematical dexterity is required in solving these differential equations, though computer-oriented numerical methods are fast replacing closed-form solutions.
Any wave corresponding to a state of definite energy E has a definite frequency ω = E/ħ. Therefore from Eq. (7) we can write
Ψ(x,t) = A.e^(–iEt/ħ).ψ(x)
where ψ(x) is a function of space variable only. Inserting the above into Eq. (16) and dividing out the factor e^(–iEt/ħ) throughout, we get the differential equation to be satisfied by ψ(x)
–(ħ²/2m) (∂²ψ/∂x²) + V(x)ψ(x) = E.ψ(x)
which is referred to as the time-independent Schrödinger equation. This equation is less general and is valid only for states of definite total energy.
3. States of Negative Energy
It is instructive to see what the solutions of Schrödinger equation turn out to be. Firstly, in any region of constant potential energy V, we see that the solution of Eq. (18) is a sinusoidal function,
ψ(x) = A.sin kx or A.cos kx, and k² = 2m(E–V)/ħ²
(E–V) being the kinetic energy.
3.1 The Step Function
In Fig. 1(a) we picture a step-function potential energy, which is constant at V1 and V2 respectively in two different regions. A possible wave function corresponding to this case is shown in Fig. 1(b). The particle’s greater kinetic energy (E–V1) in the region x<0 is reflected in its larger wave number (smaller wavelength) there. Also, since its speed in this region is greater, it spends comparatively less time there, which is reflected in the smaller amplitude of the wave function in that region.
An interesting case occurs when the potential energy V in any region is greater than the total energy E. Here the kinetic energy, E–V, becomes negative! This is physically impossible in the time-space region and the particle can never enter such a region. However, the situation is different in the Time Region: Eq. (18) has valid solutions in this region, with k from Eq. (19) taking on imaginary values,
ψ(x) = A.e^(±bx), and b = ik
The sign of the exponent is so chosen that ψ tends to zero for large x. Fig. 2 illustrates this case: in the region x>0 we see that E is less than the potential energy. The wave function is sinusoidal in the region of positive kinetic energy and is exponential in the region of negative kinetic energy. Both functions join smoothly at x=0 with first-order continuity. The penetration of the wave function into the region of negative kinetic energy has no classical analog and is purely a phenomenon of the Time Region.
3.2 Explanation of the Negative Energy States
When we turn to the Reciprocal System for an explanation of the possibility of the existence of negative energy states, what we find is as follows. In the time-space region, that is, in the context of the three-dimensional spatial reference frame, speed (space/time) is vectorial, that is, can have direction in space and therefore could take on positive or negative values. This is because in this case space is three-dimensional and time is scalar. In this frame, energy, which is one-dimensional inverse speed (time/space), is scalar, and can take on zero or positive values only. On the other hand, the Time Region is a domain of the three-dimensional temporal reference frame. In this case time is three-dimensional and space is scalar. Consequently the inverse speed (namely, energy) is the quantity that is ‘directional,’ that is, can take on a ‘temporal direction’ in the context of the three-dimensional temporal reference frame. Therefore it is perfectly possible for it to take on negative values as well. (It must be cautioned that ‘direction in time’ has nothing to do with direction in space; it is to be understood that we are only speaking metaphorically.) Further, in the Time Region, speed is the quantity that is scalar, an example being the net total speed displacement of the atom, namely, the atomic number Z.
Moreover the possibility that even potential energy (being an inverse speed) could be ‘directional’ in the three-dimensional time, and hence be represented by complex numbers in the Time Region, cannot be overlooked. Indeed the Quantum theorists find it necessary to adopt the complex potential V+iW in place of V in scattering theory. Here the wave number k becomes complex and is written as k+iq. b of Eq. (20) becomes b = i(k + iq) = –q + ik, and we have
ψ = (A.e^(–qx))(e^(ikx))
We can at once see that this is the wave function of a travelling wave whose amplitude decreases as it advances, and therefore represents a beam of particles some of which are getting absorbed.
3.3 The Potential Energy Barrier
An interesting situation arises when two regions of positive kinetic energy occur separated by a potential energy barrier that is higher than the total energy as shown in Fig. 3(a). In the central region (of negative kinetic energy) the wave function is exponential, while it is sinusoidal on either side as shown in Fig. 3(b). At either boundary the function and its first derivative are continuous. From this it is apparent that the particle represented by the wave has a non-zero probability of appearing on the other side of the barrier! While this is a real Time Region phenomenon that has been observed (the ‘tunneling’), it has no analog in the time-space region (classical mechanics).
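The tunneling probability can be made quantitative. The sketch below uses the standard rectangular-barrier transmission formula of conventional quantum mechanics (not a result derived in this article) with purely illustrative numbers in natural units ħ = m = 1:

```python
import numpy as np

def transmission(E, V0, a, m=1.0, hbar=1.0):
    """Rectangular-barrier transmission coefficient for E < V0 (standard QM result)."""
    kappa = np.sqrt(2.0 * m * (V0 - E)) / hbar
    return 1.0 / (1.0 + (V0**2 * np.sinh(kappa * a) ** 2) / (4.0 * E * (V0 - E)))

# Illustrative numbers only: barrier of height 1 and width 3 in natural units.
for E in (0.2, 0.5, 0.8):
    print(f"T(E = {E:.1f}, V0 = 1, a = 3) =", transmission(E, 1.0, 3.0))
```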
3.4 The Potential Energy Well
The last case of interest we wish to consider is that of a potential well as shown in Fig. 4(a), wherein the total energy E is less than the potential energy V1 in the outer regions. As before, we find that the wave function is sinusoidal in the (central) region of positive kinetic energy, and is exponential in the (outer) regions of negative kinetic energy, maintaining first-order continuity at the boundaries. But here a new factor emerges, namely, that if we choose an arbitrary value of E, it might become necessary to adopt growing exponentials in the outer regions (for example, e^(+bx) for x>L) so as to satisfy the continuity conditions at the boundary. This therefore leads to an unphysical state of affairs. The physical requirement is that the wave function goes towards zero with increasing space coordinate in the outer regions. This necessitates the choice of shrinking exponentials in the outer regions (for example, e^(–bx) for x>L). This requirement, coupled with the continuity constraints at the boundary, limits the possible energies to a series of distinct levels, each with its own wave function. Thus, well-type potential energy functions give rise to a set of possible discrete energy levels. This fact can be seen directly to lead to the explanation of several observable facts including the atomic spectra.
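This quantization can be exhibited numerically. The following sketch (natural units ħ = m = 1; the well depth and width are arbitrary illustrative choices) discretizes the time-independent equation on a grid and diagonalizes the resulting matrix, yielding a small set of discrete bound-state energies:

```python
import numpy as np

# Finite-difference solution of  -(1/2) psi'' + V(x) psi = E psi  (hbar = m = 1).
# Illustrative finite square well; depth and width are arbitrary choices.
n = 1000
x = np.linspace(-20.0, 20.0, n)
dx = x[1] - x[0]
V = np.where(np.abs(x) < 2.0, -5.0, 0.0)       # well of depth 5, half-width 2

# Tridiagonal Hamiltonian: kinetic term from the 3-point second derivative.
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)
print("bound-state energies:", E[E < 0.0][:10])  # discrete levels inside the well
```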
4. Origin of the Pauli Exclusion Principle
The so-called exclusion principle was originally promulgated by Wolfgang Pauli. This is an empirical law to which no exception was ever found. It has been a heuristic guiding rule for understanding many an important quantum phenomenon. In spite of its important role, the explanation of its origin has defied the theorists. The fact that this explanation is now forthcoming from the Reciprocal System is therefore a point in favor of the generality of the latter theory.
4.1 The Spin
But first we must recognize a point that we have been emphasizing [3,4], namely, that rotational space is as fundamental as the linear (extension) space. Larson explains: “…the electron is essentially nothing more than a rotating unit of space. This is a concept that is rather difficult for most of us when it is first encountered, because it conflicts with the idea of the nature of space that we have gained from a long-continued, but uncritical, examination of our surroundings. … the finding that the “space” of our ordinary experience, extension space, as we are calling in this work, is merely one manifestation of space in general opens the door to an understanding of many aspects of the physical universe …” [5] He points out that an atom, for example, can exist in a unit of rotational space as it can in a unit of extension space.
In a paper entitled Photon as Birotation [6] we have derived that the basic unit of angular momentum is ½ ħ. Now we find that the Quantum theorists have been referring to this basic unit of rotational space as the spin. In addition to the three space coordinates, spin is treated as a fourth coordinate. Thus two different particles can occupy the same location in extension space at the same time if their spin coordinate differs.
4.2 Indistinguishability
In connection with a class of elementary particles, we know that any two individual particles (say, two electrons) are absolutely alike. In the time-space region, the fact that two particles are identical presents no complications since they can be distinguished by their respective locations. But in quantum phenomena, because of the non-local nature of the Time Region, no such distinction is possible. This intrinsic indistinguishability gives rise to some special constraints. Let us take ψ(1,2) to be the wave function of two indistinguishable particles with particle 1 at location r1 (whose coordinates include the spin coordinate also) and particle 2 at location r2. Then [ψ(1,2)]² represents the probability distribution for particle 1 to be at r1 and particle 2 to be at r2. Since we cannot distinguish between the particles, the wave function should be of such a form that it results in the same probability distribution if we interchange the two particles in ψ. That is
[ψ(1,2)]² = [ψ(2,1)]²
This can be satisfied in two ways,
ψ(1,2) = +ψ(2,1) and ψ(1,2) = –ψ(2,1)
The first type of wave function is referred to as symmetric and the second as antisymmetric.
Now the empirical finding is that the wave functions of particles like protons and neutrons, which are known to have half-integral spin (½ ħ), are antisymmetrical, and those of particles with integral spin (like the photons) are symmetrical. The most fundamental statement of the Pauli exclusion principle goes somewhat like this: “Any permissible wave function for a system of spin-½ particles must be antisymmetric with respect to interchanging of all coordinates (space and spin) of any pair of particles.” But enunciating a principle is quite different from explaining its origin, and the fact is that no theoretical explanation has been found for this empirical finding. One author writes: “For reasons that are not clearly understood, for electrons, protons, neutrons, and all other spin-½ particles, the minus sign is chosen…” [7]
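The working of the antisymmetric choice of sign can be illustrated directly (a sketch with hypothetical one-dimensional single-particle states, nothing specific to any particular theory): the antisymmetric combination reverses sign under exchange, vanishes when the two particles are assigned the same coordinates, and vanishes identically if the same single-particle state is used twice.

```python
import numpy as np

# Two indistinguishable particles in 1D, built from two hypothetical
# single-particle states phi_a and phi_b (illustrative Gaussians).
def phi_a(x):
    return np.exp(-(x - 1.0) ** 2)

def phi_b(x):
    return np.exp(-(x + 1.0) ** 2)

def psi_sym(x1, x2):
    return phi_a(x1) * phi_b(x2) + phi_b(x1) * phi_a(x2)   # psi(1,2) = +psi(2,1)

def psi_anti(x1, x2):
    return phi_a(x1) * phi_b(x2) - phi_b(x1) * phi_a(x2)   # psi(1,2) = -psi(2,1)

x1, x2 = 0.3, -0.7
print("exchange symmetry:    ", psi_sym(x1, x2), psi_sym(x2, x1))
print("exchange antisymmetry:", psi_anti(x1, x2), -psi_anti(x2, x1))
print("antisymmetric at x1 = x2:", psi_anti(0.3, 0.3))      # exactly zero
# If the same state were used for both particles, psi_anti would vanish everywhere:
print("same state doubly occupied:", phi_a(x1) * phi_a(x2) - phi_a(x1) * phi_a(x2))
```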
4.3 The Two Types of Reference Points
From the Reciprocal System we now have the explanation. Let us recall that in the universe of motion there are two types of reference frames: the conventional, stationary three-dimensional spatial reference frame (or its cosmic analog, the three-dimensional temporal reference frame) and the moving natural reference frame. We also have two kinds of objects, those having independent motion like the gravitating particles and those having no independent motion of their own and hence are stationary in the natural reference frame, like the photons and those particles having potential mass [8] only. The reference point for the scalar inward motion of the gravitating particle is the particle itself. Thus if there are two locations A and B in the three-dimensional reference frame with this particle situated at A, say, its gravitational motion appears in the direction BA, because it is inward, toward itself. If now the particle is shifted to location B, the direction of its gravitational motion seems reversed, being in the direction AB. This is the origin of the antisymmetry of the wave functions of such particles.
As already remarked, a unit of one-dimensional rotation carries unit spin (½ ħ). The resultant spin of a two-dimensional rotation with unit spin in each dimension is 1×1 = 1 (that is, ½ ħ) or is 1×(–1) = –1 (that is, –½ ħ). On the other hand, the resultant spin of a birotation (like the photon) is 1+1 = 2 (that is, ħ) or 1–1 = 0. Since gravitation arises out of the two-dimensional rotation, we can see that a gravitating particle carries spin-½. Thus the wave function of spin-½ particles turns out to be antisymmetric.
On the other hand, the reference point for the motion of particles like the photons is the location in the natural reference frame, or what Larson calls the absolute location. The natural reference frame is not a spatial manifold; nor is it a temporal manifold. It is a speed manifold: each location in it is moving at unit speed, one unit of space per unit of time. Suppose that the spatial separation between two locations in this frame (the absolute locations) increases by n natural units of space. Because of the unit speed criterion, there is concomitant increase in the separation in time by n natural units of time, making n/n = 1. The expansion in space is completely nullified by the expansion in time (because an increase in space is equivalent to a decrease in time and vice versa), and from a space-time point of view there is no separation between absolute locations.
In the context of the three-dimensional reference frame, photons appear to move outward from the point of their origin. But we have already seen that the photon is stationary in the absolute location. Its apparent motion is the outward motion of the absolute location (in which it is situated) away from all other absolute locations. The crucial point that should now be recognized is that outward from one absolute location is still outward from any other absolute location because of the equivalence of these absolute locations as explained above. Therefore, interchanging the location of the photon between two such absolute locations has no effect on the sign of its wave function. That is, the wave function of such particles is symmetric. One final word is in order: all that has been said above is also true in the Time Region, except that the scalar direction outward in the time-space region manifests as inward in the Time Region and vice versa.
5. Potentials in the Time Region
Finally it might be of interest to explore the nature and type of the potential energy functions V (see Eq. (15)), in the Time Region. In view of the maiden nature of the investigation and the insufficient time available, the results reported in this section may have to be treated as tentative.
5.1 Dimensional Relations across the Regions
Discussing the effect of the inversion of space and time at the unit level on the dimensions of inter-regional relations, Larson [9] shows that the expressions for speed and quantities related to speed in the Time Region are the second power expressions of the corresponding quantities belonging to the time-space region. This is because motion (speed) has a spatial component and a temporal component. Since unit space is the minimum that can exist, within the Time Region—the region inside unit space—the spatial component of a speed remains constant at 1 unit and all variability can be in the temporal component, t, only. By virtue of the reciprocal relation between space and time the t units of time are equivalent to 1/t unit of space and manifest so in the Time Region. That is why Larson uses the term equivalent space (that is, inverse space) as a synonym for the Time Region. The equivalent speed in the Time Region is, therefore, given by the ratio of the equivalent space to time, (1/t)/t = 1/t². This quantity is the second power expression of the speed in the time-space region with 1 unit of space component and t units of time component, namely, 1/t.
In an earlier article [1] we have identified two different zones of the Time Region, namely, the one-dimensional and the three-dimensional. The second power relation mentioned above could be seen to apply specifically to the one-dimensional zone, the zone of one-dimensional rotation associated with the atoms or subatoms. On the other hand, for the three-dimensional zone—where the compound motions constituting an atom exist—the situation is different because the basic rotation that constitutes the atom is two-dimensional. The temporal component of a two-dimensional rotation in the Time Region would be t², and its spatial equivalent is 1/t². So the equivalent speed in the case of two-dimensional rotation turns out to be (1/t²)/t² = 1/t⁴. As could be seen, this is the fourth power expression of the corresponding time-space region speed 1/t. (Note that in the time-space region time is scalar and there cannot be anything like two-dimensional time.)
Looking back, we can now easily see why the quantum theorists required complex numbers to deal with the so-called ‘electronic energy levels’ of the atom adequately: they needed to cope with the two-dimensional character of the equivalent speed pertaining to the one-dimensional rotation in the Time Region. It also suggests that we need to adopt quaternions to handle the so-called ‘nuclear energy levels’ since the dimensionality of the equivalent speed pertaining to the two-dimensional rotation in the Time Region is four.
5.2 Potentials in the Time-space Region
At this stage of our study we have only two scalar motions (forces) to consider: the space-time progression and gravitation. In the outside region (the time-space region), the forces due to the space-time progression and gravitation are respectively given by
FPO = KPO and FGO = –KGO/r²
where all the quantities concerned are in the natural units, the K’s are positive constants and r the distance factor. Suffix G refers to gravitation, P to space-time progression and O to outside region. From the definition of potential, F = –∂V/∂r, we obtain the expressions for the corresponding potentials due to the space-time progression and gravitation, in the outside region respectively as
VPO = –KPO.r and VGO = –KGO/r
The potential due to the space-time progression is repulsive while that due to gravitation is attractive as can be seen.
5.3 Potentials in the One-dimensional Zone of the Time Region
Potential energy being inverse speed, the expressions for the potentials in the one-dimensional zone of the Time Region would be the second power expressions of the corresponding ones in the time-space region (Section 5.1). Consequently the space-time progression and gravitational potentials in this zone could be written as
VP1 = KP1.r² and VG1 = KG1/r²
with suffix 1 referring to the one-dimensional zone. We can at once verify that gravitation is repulsive and the space-time progression attractive in this region. In addition there could be a constant term KI1, representing the initial level of the Time Region potential. Thus the total Time Region potential in the one-dimensional zone turns out to be
VT1 = KP1.r² + KG1/r² ± KI1
The values of KG1 and KI1, and possibly KP1, are functions of the displacements of the atom in the three scalar dimensions.
It is instructive to see what the expressions for the corresponding forces would be: differentiating with respect to r and taking the negative sign, we have
FP1 = –2.KP1.r and FG1 = 2.KG1/r³
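The differentiation can be verified symbolically (a brief sketch; the symbols below simply stand for the constants of Eq. (26)):

```python
import sympy as sp

r, KP1, KG1, KI1 = sp.symbols("r K_P1 K_G1 K_I1", positive=True)
V = KP1 * r**2 + KG1 / r**2 + KI1           # the one-dimensional zone potential, Eq. (26)
F = -sp.diff(V, r)                          # F = -dV/dr
print(sp.expand(F))                         # -2*K_P1*r + 2*K_G1/r**3, i.e. Eqs. (27)
```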
Larson [10], however, while calculating the inter-atomic distances in solids, basing his calculation on the equilibrium of the Time Region forces, adopts
FP1 = –1 and FG1 = K/r⁴
where K is a function of the several atomic rotations. These expressions can be seen to differ from Eqs. (27) above. But whether we take Eqs. (27) or Eqs. (28), the force equilibrium equation FP1 = FG1 can be seen to lead to the same fourth power dependence on the distance factor. Consequently, even if we find that Eqs. (27) are to be adopted in preference to Eqs. (28), Larson’s original inter-atomic distance calculations would remain unaltered.
The Time Region potential Eq.(26) results in a potential well and therefore the solutions of Schrödinger’s Eq. (18) yield a set of discrete energy levels for the atomic system (see Section 3.4). It remains to be verified whether these truly correspond to the values inferred from the spectroscopic data.
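A first step toward such a verification can be taken numerically. In the sketch below the constants KP1, KG1 and KI1 are arbitrary placeholders (the article does not fix their values) and natural units ħ = m = 1 are assumed; the discretized Eq. (18) with the potential of Eq. (26) is diagonalized on a radial grid to exhibit its discrete levels:

```python
import numpy as np

# Discrete levels of  -(1/2) psi'' + V(r) psi = E psi  with the one-dimensional
# zone potential V = KP1*r**2 + KG1/r**2 + KI1 (Eq. 26). The constants are
# arbitrary placeholders, since the article leaves their values open.
KP1, KG1, KI1 = 0.5, 1.0, 0.0

n = 1000
r = np.linspace(1e-3, 15.0, n)          # r > 0 grid; psi assumed zero at both ends
dr = r[1] - r[0]
V = KP1 * r**2 + KG1 / r**2 + KI1

main = 1.0 / dr**2 + V
off = -0.5 / dr**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)
print("lowest discrete levels:", E[:5])
```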
5.4 Potentials in the Three-dimensional Zone of the Time Region
Turning now to the potentials in the three-dimensional zone, following our earlier analysis of the dimensional situation (Section 5.1), we adopt the fourth power expressions of the corresponding outside region (that is, the time-space region) quantities from Eqs. (24)
VP3 = KP3.r⁴ and VG3 = KG3/r⁴
with suffix 3 denoting the three-dimensional zone.
We know that the space-time progression acts away from unit space. In the time-space region, away from unit is also away from zero (the origin of the conventional spatial reference frame), whereas in the Time Region (that is, in less than unit space), away from unit is toward zero. This is the reason why the space-time progression is an outward motion in the outside region while it is inward in the Time Region. This is true in the one-dimensional zone of the Time Region as much as in the three-dimensional zone. But the ‘unit’ of the three-dimensional zone does not coincide with the ‘unit’ of the one-dimensional zone. Its boundary is determined by the apparent size of the atom in question. This is because the atom and the three-dimensional zone are one and the same thing. (We must avoid falling into the trap of imagining that first there is an atom, and that it ‘occupies’ the pre-existing three-dimensional zone!) In Eq. (7) of the article on Wave Mechanics [1] we have derived the following expression for the size of the atom,
rA = 1.2 × A^(1/3) femtometers
where A is the atomic weight. Expressing this in the natural units as rAn, we now note that the reference point for reckoning distance in the case of VP3 is not the origin of the reference system but the point at rAn. Finally, since the potential due to progression has to be attractive, a minus sign has to be introduced. Thus the expressions for the two potentials are
VP3 = –KP3.(rAn – r)⁴ and VG3 = KG3/r⁴
Adding a constant term KI3 to take care of the initial level of the potential energy, we have the total expression for the potential of the three-dimensional zone of the Time Region as
VT3 = –KP3.(rAn – r)⁴ + KG3/r⁴ ± KI3
We note that this corresponds to what the conventional Quantum theorists would call the nuclear potential. Our study indicates that Eq. (31) bears a remarkably close qualitative resemblance to the potentials arrived at through the scattering experiments. An unexpected feature of the experimental data analysis was the occurrence of a repulsive core of small radius. The Reciprocal System, on the other hand, actually predicts this repulsive core, namely, VG3.
6. Conclusions
Let us summarize the highlights. Having resolved the riddle of the wave-particle duality in an earlier article [1] and understood the legitimacy of the wave picture in the Quantum theory, an attempt has been made to examine the foundation of its mathematical formalism with the benefit of our knowledge of the Reciprocal System. This proved productive in two ways: firstly it clarified the situation in connection with the Quantum Mechanics, identifying some of its conceptual errors. Secondly it gave scope to expand our knowledge of the Reciprocal System in the form of new insights that would not have been possible otherwise.
1. The Schrödinger equations were found to be valid general rules for the exploration of the wave functions in the various situations.
2. In the time-space region, speed can be vectorial (that is, directional in the context of the three-dimensional spatial reference frame), whereas inverse speed (like energy) is scalar. In the Time Region, speed is found to be scalar, whereas inverse speed is directional in the context of the three-dimensional temporal reference frame. Variables of the latter type, therefore, could take on inherently negative values and be represented by complex numbers or quaternions as the case may be.
3. The penetration of the wave associated with a particle into the regions of negative kinetic energy resulting from potential energy barriers is found to be a genuine Time Region phenomenon.
4. In a similar vein, it is found that the occurrence of a well-type potential energy function in the Time Region leads to the limiting of possible values of total energy to a discrete set.
5. Such an important empirical law as the Pauli exclusion principle, which has no theoretical explanation in the context of the conventional theory, could easily be understood from the knowledge of the positive and negative reference points brought to light by the Reciprocal System.
6. Reasoning from the principles of the Reciprocal System the possible potential energy functions of the Time Region relevant to atomic systems are surmised. While they evince a close qualitative resemblance to the empirically found potentials, detailed further study needs to be carried out to see if they lead to the correct prediction of the properties pertaining to spectroscopy, radioactivity and the scattering experiments.
On the whole there seems to be a prima facie case in favor of adopting the Quantum Mechanics after purging it of its conceptual errors.
1. Nehru K.V.K., “The Wave Mechanics in the Light of the Reciprocal System,” Reciprocity, Vol. XXII, No. 2, Autumn 1993, pp. 8–13
2. Larson D.B., The Case Against the Nuclear Atom, North Pacific Pub., Oregon, USA, 1963
3. Nehru K.V.K., “The Law of Conservation of Direction,” Reciprocity, Vol. XVIII, No. 3, Autumn 1989, p. 3
4. Nehru K.V.K., “On the Nature of Rotation and Birotation,” Reciprocity, Vol. XX, No. 1, Spring 1991, p. 8
5. Larson D.B., Basic Properties of Matter, International Soc. of Unified Science, Utah, USA, 1988, pp. 102–3
6. Nehru K.V.K., “The Photon as Birotation,” Reciprocity, Vol. XXV, No. 3, Winter 1996–97, pp. 11–16
7. Cohen B.L., Concepts of Nuclear Physics, Tata McGraw Hill, India, 1971, p. 38
8. Larson D.B., Nothing But Motion, North Pacific Pub., Oregon, USA, 1979, pp. 141–2, 165–7
9. Ibid., p. 155
10. Larson D.B., Basic Properties of Matter, op. cit., p. 8
Appendix I: Euler’s Relations
Often calculations are facilitated by adopting exponential functions with imaginary arguments in place of the sine or cosine functions, making use of Euler’s relations
e^(ia) = cos a + i·sin a
e^(–ia) = cos a – i·sin a
which directly follow from the series expansions of these functions.
A number containing imaginary as well as real parts is called a complex number. Complex numbers may be represented graphically on a rectangular coordinate system, with the real part corresponding to the horizontal axis and the imaginary part to the vertical axis. Any complex number can then be represented by a vector extending from the origin and inclined at the angle a to the real axis. Thus A·e^(iωt) represents a (radial) vector of magnitude A rotating at the angular speed ω (t being time). It may be noted that each of the inverse relations,
sin a = (e^(ia) – e^(–ia))/2i
cos a = (e^(ia) + e^(–ia))/2
represents a birotation.
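These relations are easy to verify numerically. The short Python sketch below (angular speed and sample times are arbitrary choices) checks Euler's relations and shows that the sum of two oppositely rotating unit vectors, e^(ia) + e^(–ia), is the purely real oscillation 2·cos a, i.e. a birotation with no net rotation left over.

```python
# Quick numerical check of Euler's relations and of a birotation: two unit
# vectors rotating in opposite senses, exp(i*a) and exp(-i*a), sum to the
# purely real oscillation 2*cos(a).  The angular speed w is arbitrary.
import numpy as np

w = 2.0                       # arbitrary angular speed
t = np.linspace(0.0, 3.0, 7)  # a few sample times
a = w * t

print(np.allclose(np.exp(1j * a), np.cos(a) + 1j * np.sin(a)))    # Euler's relation
print(np.allclose(np.exp(-1j * a), np.cos(a) - 1j * np.sin(a)))

birotation = np.exp(1j * a) + np.exp(-1j * a)
print(np.allclose(birotation.imag, 0.0))             # no net rotation remains
print(np.allclose(birotation.real, 2 * np.cos(a)))   # a pure (real) oscillation
```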
Appendix II: The General Equation of a Constant Speed Wave
Let a wave of arbitrary but unchanging shape be traveling in the X-direction of the stationary reference frame X–Y at a constant speed u. This wave appears stationary in a reference frame X1–Y1 which moves at the same speed u along the X-direction. We can then write
x1 = x – u·t ; y1 = y    (i)
If the wave shape in the co-moving frame is given by y1 = f(x1), we have from Eq. (i)
y = f(x – u.t)
By the chain rule for derivatives we have
∂y/∂x = (dy/dx1)(∂x1/∂x) = (dy/dx1).1,
∂y/∂t = (dy/dx1)(∂x1/∂t) = (dy/dx1).(–u).
Therefore the relation between the two derivatives is
∂y/∂x = –(1/u)(∂y/∂t)
Similarly for a wave traveling in the –X direction we obtain
∂y/∂x = +(1/u)(∂y/∂t)
Now a repeated application of the above procedure yields
∂²y/∂x² = (1/u²)(∂²y/∂t²)
which is the governing equation of the wave function; and it is the same for waves traveling in either direction of the X-axis.
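A quick finite-difference check of this result is given below; the pulse shape f and the speed u are arbitrary choices made only for the illustration.

```python
# Finite-difference check that y(x, t) = f(x - u*t) satisfies
#   d^2y/dx^2 = (1/u^2) d^2y/dt^2
# for an arbitrary smooth profile f and an arbitrary constant speed u.
import numpy as np

u = 3.0                        # assumed constant wave speed
f = lambda s: np.exp(-s ** 2)  # assumed pulse shape
y = lambda x, t: f(x - u * t)

x0, t0, h = 0.7, 0.2, 1e-4     # sample point and finite-difference step
d2y_dx2 = (y(x0 + h, t0) - 2 * y(x0, t0) + y(x0 - h, t0)) / h ** 2
d2y_dt2 = (y(x0, t0 + h) - 2 * y(x0, t0) + y(x0, t0 - h)) / h ** 2

print(d2y_dx2, d2y_dt2 / u ** 2)   # the two values agree closely
```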
88a2c815cca29eea |
Wave Packet Defocusing Due to a Highly Disordered Bathymetry
Address for correspondence: André Nachbin, IMPA, Estrada Dona Castorina 110, Jardim Botânico, Rio de Janeiro, RJ, Brazil, CEP 22460-320; e-mail:
Slowly modulated water waves are considered in the presence of a strongly disordered bathymetry. Previous work is extended to the case where the random bottom irregularities are not smooth and are allowed to be of large amplitude. Through the combination of a conformal mapping and a multiple-scales asymptotic analysis it is shown that large variations of a disordered bathymetry can affect the nonlinearity coefficient of the resulting damped nonlinear Schrödinger equations. In particular it is shown that as the bathymetry fluctuation level increases the critical point (separating the focusing from the defocusing region) moves to the right, hence enlarging the region where the dynamics is of a defocusing character.
a54d743faf112c3f | Singularity structure of Møller-Plesset perturbation theory
D. Z. Goodson and A. V. Sergeev
Møller-Plesset perturbation theory expresses the energy as a function E(z) of a perturbation parameter, z. This function contains singular points in the complex z-plane that affect the convergence of the perturbation series. A review is given of what is known in advance about the singularity structure of E(z) from functional analysis of the Schrödinger equation, and of techniques for empirically analyzing the singularity structure using large-order perturbation series. The physical significance of the singularities is discussed. They fall into two classes, which behave differently in response to changes in basis set or molecular geometry. One class consists of complex-conjugate square-root branch points that connect the ground state to a low-lying excited state. The other class consists of a critical point on the negative real $z$-axis, corresponding to an autoionization phenomenon. These two kinds of singularities are characterized and contrasted using quadratic summation approximants. A new classification scheme for Møller-Plesset perturbation series is proposed, based on the relative positions in the z-plane of the two classes of singularities. Possible applications of this singularity analysis to practical problems in quantum chemistry are described.
5015949ef1b96884 | Scientists achieve reliable quantum teleportation for first time
A mass of optical equipment rigged by the research team at Delft to guide photons between the entangled particles.
Hanson lab/Delft University of Technology
Albert Einstein once told a friend that quantum mechanics doesn't hold water in his scientific world view because "physics should represent a reality in time and space, free from spooky actions at a distance." That spooky action at a distance is entanglement, a quantum phenomenon in which two particles, separated by any amount of distance, can instantaneously affect one another as if part of a unified system.
Now, scientists have successfully hijacked that quantum weirdness -- doing so reliably for the first time -- to produce what many sci-fi fans have long dreamt up: teleportation. No, not beaming humans aboard the USS Enterprise, but the teleportation of data.
Thanks to the strange properties of entanglement, this allows for that data -- only quantum data, not classical information like messages or even simple bits -- to be teleported seemingly faster than the speed of light. The news was reported first by The New York Times on Thursday, following the publication of a paper in the journal Science.
Proving Einstein wrong about the purview and completeness of quantum mechanics is not just an academic boasting contest. Proving the existence of entanglement and teleportation -- and getting experiments to work efficiently, in larger systems and at greater distances -- holds the key to translating quantum mechanics to practical applications, like quantum computing. For instance, quantum computers could utilize that speed to unlock a whole new generation of unprecedented computing power.
Quantum teleportation is not teleportation in the sense one might think. It involves achieving a certain set of parameters that then allow properties of one quantum system to get tangled up with another so that observations are reflected simultaneously, thereby "teleporting" the information from one place to another.
To do this, researchers at Delft first had to create qubits out of classical bits, in this case electrons trapped in diamonds at extremely low temperatures that allow their quantum properties, like spin, to be observed.
A qubit is a unit of quantum data that can hold multiple values simultaneously thanks to an equally integral quantum phenomenon called superposition, a term fans of the field will accurately associate with the Schrödinger equation, as well as Heisenberg's uncertainty principle that says something exists in all possible states until it is observed. It's the same way quantum computing may one day surpass the speeds of classical computing by allowing calculations to spread bit values between 0, 1 or any probabilistic value between the two numbers -- in other words, a superposition of both figures.
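For readers who want a slightly more concrete picture than the prose above, here is a minimal, purely illustrative sketch (it has nothing to do with the Delft hardware): a qubit written as a normalized pair of complex amplitudes, whose squared magnitudes give the probabilities of the two measurement outcomes.

```python
# Illustrative only (not the Delft experiment): a single qubit as a normalized
# pair of complex amplitudes (alpha, beta); |alpha|^2 and |beta|^2 are the
# probabilities of finding 0 or 1 when the qubit is measured.
import numpy as np

alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)   # an equal superposition
state = np.array([alpha, beta])

probs = np.abs(state) ** 2
print("P(0) =", probs[0], " P(1) =", probs[1], " total =", probs.sum())
```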
With qubits separated by a distance of three meters, the researchers were able to observe and record the spin of one electron and see that reflected in the other qubit instantly. It's an admittedly wonky conception of data teleportation that requires a little head scratching before it begins to clear up.
Still, its effects could be far reaching. The researchers are attempting to increase that distance to more than a kilometer, which would be ample leeway to test whether or not entanglement was a consistent phenomenon and that the information was traveling faster than the speed of light. Such experiments would more definitively knock down Einstein's disqualification of entanglement due to its violation of classical mechanics.
"There is a big race going on between five or six groups to prove Einstein wrong," Ronald Hanson, a physicist leading the research at Delft, told The New York Times. "There is one very big fish."
This article originally appeared on CNET.
33a6cc97784198df | Big Bad Quantum Computer Revisited
A recent DefenseNews article again put Shor's algorithm front and center when writing about Quantum Computing. Yet, there is so much more to this field, and with Lockheed Martin working with D-Wave one would expect this to be an open secret in the defence sector. At any rate, this is a good enough reason to finally publish the full text of my quantum computing article, that the Hakin9 magazine asked me to write for their special issue earlier this year:
Who’s afraid of the big bad Quantum Computer?
“Be afraid; be very afraid, as the next fundamental transition in computing technology will obliterate all your encryption protection.”
If there is any awareness of quantum computing in the wider IT community then odds are it is this phobia that is driving it. Probably Peter Shor didn’t realize that he was about to pigeonhole the entire research field when he published his work on what is now the best known quantum algorithm. But once the news spread that he uncovered a method that could potentially speed up RSA decryption, the fear factor made it spread far and wide. Undoubtedly, if it wasn’t for the press coverage that this news received, quantum information technology research would still be widely considered to be just another academic curiosity.
So how realistic is this fear, and is breaking code the only thing a quantum computer is good for? This article is an attempt to separate fact from fiction.
First let’s review how key exchange protocols that underlie most modern public key encryption schemes accomplish their task. A good analogy that illustrates the key attribute that quantum computing jeopardizes is shown in the following diagram (image courtesy of Wikipedia):
Let’s assume we want to establish a common secret color shared by two individuals, Alice and Bob - in this example this may not be a primary color but one that can be produced as a mix of three other ones. The scheme assumes that there exists a common first paint component that our odd couple already agreed on. The next component is a secret private color. This color is not shared with anybody. What happens next is the stroke of genius, the secret sauce that makes public key exchange possible. In our toy example it corresponds to the mixing of the secret, private color with the public one. As everybody probably learned as early as kindergarten, it's easy to mix colors, but not so easy - try practically impossible - to reverse it. From a physics standpoint the underlying issue is that entropy massively increases when the colors are mixed. Nature drives the process towards the mixed state, but makes it very costly to reverse the mixing. Hence, in thermodynamics, these processes are called “irreversible”.
This descent into the physics of our toy example may seem a rather pointless digression, but we will see later that in the context of quantum information processing, this will actually become very relevant.
But first let's get back to Alice and Bob. They can now publicly exchange their mix-color, safe in the knowledge that there are myriads of ways to get to this particular shade of paint, and that nobody has much of a chance of guessing their particular components.
Since in the world of this example nobody has any concept of chromatics, even if a potential eavesdropper were to discover the common color, they’d still be unable to discern the secret ones, as they cannot unmix the publicly exchanged color shades.
In the final step, Alice and Bob recover a common color by adding their private secret component. This is a secret that they now share to the exclusion of everybody else.
So how does this relate to the actual public key exchange protocol? We get there by substituting the colors with numbers, say x and y for the common and Alice’s secret color. The mixing of colors corresponds to a mathematical function G(x,y). Usually the private secret numbers are picked from the set of prime numbers and the function G is simply a multiplication, exploiting the fact that integer factorization of large numbers is a very costly process. The next diagram depicts the exact same process, just mapped on numbers in this way.
If this simple method is used with sufficiently large prime numbers then the Shared Key is indeed quite safe, as there is no known efficient classical algorithm that allows for reasonably fast integer factorization. Of course “reasonably fast” is a very fuzzy term, so let’s be a bit more specific: There is no known classical algorithm that scales polynomially with the size of the integer. So for instance, an effort to crack a 232-digit number (RSA-768) that concluded in 2009 took the combined CPU power of hundreds of machines (Intel Core2 equivalents) over two years to accomplish.
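To make the asymmetry concrete, here is a toy sketch (the two primes below are just well-known small primes, nowhere near RSA-768 scale): the "mixing" multiplication is instantaneous, while undoing it by brute-force trial division already takes a noticeable amount of time even at this tiny size.

```python
# Toy illustration of the one-way nature of multiplication vs. factorization.
# The primes are tiny compared with real RSA moduli; they only illustrate the
# asymmetry in cost.
import time

p, q = 15485863, 32452843      # the 1,000,000th and 2,000,000th primes
n = p * q                      # "mixing": a single, instantaneous multiplication

def trial_division(n):
    """Brute-force factorization; the cost grows rapidly with the size of n."""
    if n % 2 == 0:
        return 2, n // 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    return n, 1

start = time.time()
print("recovered factors:", trial_division(n))
print("seconds needed:   ", round(time.time() - start, 2))
```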
And this is where the quantum computing bogeyman comes into the picture, and the aforementioned Peter Shor. This affable MIT researcher formulated a quantum algorithm almost twenty years ago that can factorize integers in polynomial time on the, as of yet elusive, quantum hardware. So what difference would that actually make? The following graph puts this into perspective:
[Figure: scaling complexity graph] Scaling with z³ (red curve, Shor’s algorithm) versus the classical factoring cost (green); the latter will look almost like a vertical ascent.
Here z stands for the logarithmic value of the size of the integer. The classical curve appears almost vertical on this scale because the number of necessary steps (y-axis) in this classical algorithm grows explosively with the size of the integer. Shor’s algorithm, in comparison, shows a fairly well behaved slope with increasing integer sizes, making it theoretically a practical method for factoring large numbers.
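To put rough numbers on this comparison, the few lines of Python below use the standard textbook asymptotic forms rather than the exact curves from the figure, and ignore all constant factors: a cubic cost stands in for Shor's algorithm, and the familiar sub-exponential expression stands in for the best known classical approach (the general number field sieve).

```python
# Rough, illustrative comparison of asymptotic operation counts (constant
# factors ignored): general number field sieve vs. a cubic cost for Shor.
import math

def log10_gnfs_ops(bits):
    # exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)), expressed as log10
    ln_n = bits * math.log(2)
    exponent = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return exponent / math.log(10)

def log10_shor_ops(bits):
    # a (log2 N)^3 cost, the rough polynomial scaling of Shor's algorithm
    return 3 * math.log10(bits)

for bits in (512, 768, 1024, 2048):
    print(f"{bits:4d}-bit modulus:  classical ~10^{log10_gnfs_ops(bits):.0f} ops,"
          f"  quantum ~10^{log10_shor_ops(bits):.1f} ops")
```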
And that is why common encryptions such as RSA could not protect against a deciphering attack if a suitable quantum computer was to be utilized. So now that commercial quantum computing devices such as the D-Wave One are on the market, where does that leave our cryptographic security?
First off: Not all quantum computers are created equal. There are universal gate-based ones, which are theoretically probably the best understood, and a textbook on the matter will usually start introducing the subject matter from this vantage point. But then there are also quantum simulators, topological designs, and adiabatic ones (I will forgo quantum cellular automatons in this article). The only commercially available machine, i.e. D-Wave’s One, belongs to the latter category but is not a universal machine, in that it cannot simulate any arbitrary Hamiltonian (this term describes the energy function that governs a quantum system). Essentially this machine is a super-fast and accurate solver for only one class of equations. This kind of equation was first written out for describing solid state magnets according to what is now called the Ising model.
But fear not: The D-Wave machine is not suitable for Shor’s algorithm. The latter requires a gate programmable device (or universal adiabatic machine) that provides plenty of qbits. The D-Wave one falls short on both ends. It has a special purpose adiabatic quantum chip with 128 qbits. Even if the architecture were compatible with Shor’s algorithm, the number of qbits falls far short: If N is the number we want to factorize then we need a bit more than the square of its number of digits in terms of qbits. Since the integers we are interested in are pretty large, this is far outside anything that can be realized at this point. For instance, for the RSA-768 challenge mentioned earlier, more than 232²=53824 qbits are required.
So you may wonder, what good is this vanguard of the coming quantum revolution if it can’t even handle the most famous quantum algorithm? To answer this let’s step back and look at what motivated the research into quantum computing to begin with. It wasn’t the hunt for new, more powerful algorithms but rather the insight, first formulated by Richard Feynman, that quantum mechanical systems cannot be efficiently simulated on classical hardware. This is, of course, a serious impediment as our entire science driven civilization depends on exploiting quantum mechanical effects. I am not even referring to the obvious culprits such as semiconductor based electronics, laser technology etc. but the more mundane chemical industry. Everybody will probably recall the Styrofoam models of orbitals and simple molecules such as benzene C6H6:
As the graphic illustrates, we know that sp2 orbitals facilitate the binding with the hydrogen, and that there is a delocalized π electron cloud formed from the overlapping p2 orbitals. Yet, these insights are inferred (and now thanks to raster electron microscopy also measured) but they don’t flow from an exact solution of the corresponding Schrödinger equations that govern the physics of these kinds of molecules.
Granted, multi-body problems don’t have an exact solution in the classical realm either, but the corresponding equations are well behaved when it comes to numerical simulations. The Schrödinger equation that rules quantum mechanical systems, on the other hand, is not. Simple scenarios are still within reach for classical computing, but not so larger molecules (i.e. the kind that biological processes typically employ). Things get even worse when one wants to go even further and model electrodynamics on the quantum level. Quantum field theories require a summation over an infinite regime of interaction paths - something that will bring any classical computer to its knees quickly. Not a quantum computer, though.
Quantum Computing has, therefore, the potential to usher in a new era for chemical and nano-scale engineering, putting an end to the still common practice of having to blindly test thousands of substances for pharmaceutical purposes, and finally realizing the vision of designing smart drugs that specifically match targeted receptor proteins.
Of course, even if you can model protein structures, you still need to know which isomer is actually the biologically relevant one. Fortunately, a new technology deploying electron holography is expected to unlock a cornucopia of protein structure data. But this data will remain stale if you cannot understand how these proteins can fold. The latter is going to be key for understanding the function of a protein within the living organism.
Unfortunately, simulating protein folding has been shown to be an NP-hard problem. Quantum computing is once again coming to the rescue, allowing for a polynomial speed-up of these kinds of calculations. It is not an exaggeration to expect that in the not too distant future lifesaving drug development will be facilitated this way. And the first papers using D-Wave's machine in this way have been published.
This is just one tiny sliver of the fields that quantum computing will impact. Just as with the unexpected applications that ever-increasing conventional computing power enabled, it is safe to say that we, in all likelihood, cannot fully anticipate how this technological revolution will impact our lives. But we can certainly identify some more areas that will immediately benefit from it: Artificial Intelligence, graph theory, operational research (and its business applications), database design etc. One could easily file another article on each of these topics while only scratching the surface, so the following observations have to be understood as extremely compressed.
It shouldn’t come as a surprise that quantum computing will prove fruitful for artificial intelligence. After all, one other major strand that arguably ignited the entire research field was contemplations on the nature of the human mind. The prominent mathematician and physicist Roger Penrose, for instance, argued vehemently that the human mind cannot be understood as a classical computer, i.e. he is convinced (almost religious in his certainty) that a Turing machine in principle cannot emulate a human mind. Since it is not very practical to try to put a human brain into a state of controlled quantum superposition, the next best thing is to think this through for a computer. This is exactly the kind of thought experiment that David Deutsch discussed in his landmark paper on the topic. (It was also the first time that a quantum algorithm was introduced, albeit not a very useful one, demonstrating that the imagined machine can do some things better than a classical Turing machine).
So it is only fitting that one of the first demonstrations of D-Wave’s technology concerned the training of an artificial neural net. This particular application maps nicely onto the structure of their system, as the training is mathematically already expressed as the search for a global minimum of an energy function that depends on several free parameters. To the extent that an optimization problem can be recast in this way, it becomes a potential candidate to benefit from D-Wave’s quantum computer. There are many applicable use cases for this in operational research (i.e. logistics, supply chain etc.) and business intelligence.
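As a deliberately tiny picture of what "recasting an optimization problem as the minimum of an energy function" means in practice, the sketch below writes down a three-spin Ising energy (the couplings and fields are made up) and finds its ground state by brute-force enumeration; an annealer is meant to find that minimum for problem sizes where enumeration is hopeless. This is only a conceptual sketch, not D-Wave's programming interface.

```python
# Conceptual sketch only (made-up couplings, not D-Wave's API): an optimization
# recast as finding the ground state of an Ising energy
#   E(s) = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i,   with spins s_i = +/- 1
from itertools import product

J = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.8}   # assumed toy couplings
h = [0.1, -0.2, 0.3]                            # assumed local fields

def energy(s):
    e = sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e + sum(hi * si for hi, si in zip(h, s))

# Three spins -> only 2**3 configurations, so we can simply enumerate them.
ground = min(product((-1, 1), repeat=3), key=energy)
print("ground state:", ground, "  energy:", round(energy(ground), 3))
```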
While this is all very exciting, a skeptic will rightfully point out that just knowing a certain tool can help with a task does not tell us how well it will stack up to conventional methods. Given the price tag of $10 million, it had better be good. There are unfortunately not a lot of benchmarks available, but a brute force search method implemented to find some obscure numbers from graph theory (Ramsey numbers) gives an indication that this machine can substitute for some considerable conventional computing horsepower i.e. about 55 MIPS, or the equivalent of a cluster of more than 300 of Intel’s fastest commercially available chips.
Another fascinating aspect that will factor into the all-important TCO (Total Cost of Ownership) considerations is that a quantum computer will actually require far less energy to achieve this kind of performance (its energy consumption will also only vary minimally under load). Earlier I described a particular architecture as adiabatic, and it is this term that describes this counterintuitive energy characteristic. It is a word that originated in thermodynamics and describes when a process progresses without heat exchange. I.e. throughout most of the QC processing there is no heat-producing entropy increase. At first glance, the huge cooling apparatus that accompanies a quantum computer seems to belie this assertion, but the reason for this considerable cooling technology is not a required continuous protection of the machines from over-heating (like in conventional data centers) but because most QC implementations require an environment that is considerably colder than even the coldest temperature that can be found anywhere in space (the surface of Pluto would be outright balmy in comparison).
Amazingly, these days commercially available helium cooling systems can readily achieve these temperatures close to absolute zero. After cooling down, the entire remaining cooling effort is only employed to counteract thermal flow that even the best high vacuum insulated environments will experience. The quantum system itself will only dissipate a minimal amount of heat when the final result of an algorithm is read out. That is why the system just pulls 15 kW in total. This is considerably less than what our hypothetical 300 CPU cluster would consume under load, i.e. >100 W per node, in total more than double D-Wave’s power consumption. And the best part: The cooling system, and hence power consumption, will remain the same for each new iteration of chips (D-Wave recently introduced their new 512 qbits VESUVIUS chip) and so far steadily followed their own version of Moore’s law, doubling integration about every 15 months.
So although D-Wave’s currently available quantum computing technology cannot implement Shor’s algorithm, or the second most famous one, Grover’s search over an unstructured list, the capabilities it delivers are nothing to scoff at. With heavyweights like IBM pouring considerable R&D resources into this technology, fully universal quantum processors will hit the market much earlier than most IT analysts (such as Gartner) currently project. Recently IBM demoed a 4 qbit universal chip (interestingly using the same superconducting foundry approach as D-Wave). If they also were to manage a doubling of their integration density every 18 months then we’d be looking at 256 qbit chips within three years.
While at this point current RSA implementations will not be in jeopardy, this key exchange protocol is slowly reaching its end-of-life cycle. So how best to mitigate against future quantum computing attacks on the key exchange? The most straightforward approach is simply to use a different “color-mixing” function than integer multiplication, i.e. a function that even a quantum computer cannot unravel within a polynomial time frame. This is an active field of research, but so far no consensus for a suitable post-quantum key exchange function has evolved. At least it is well established that most current symmetric crypto (cyphers and hash functions) can be considered secure from the looming threat.
As to key exchange, the ultimate solution can also be provided by quantum mechanics in the form of quantum cryptography, which in principle allows a key to be transferred in such a manner that any eavesdropping will be detectable. To prove that this technology can be scaled for global intercontinental communication, the current record holder for the longest distance of quantum teleportation, the Chinese physicist Juan Yin, plans to repeat this feat in space, opening up the prospect for ultra secure communication around the world. Welcome to the future.
4 Responses to Big Bad Quantum Computer Revisited
1. spacegoat says:
The article reminded me of the “Arnold Anold” scare in the 1980’s – that a claimed mathematical advance would wipe out encryption techniques. See here
(In case that disappears, use “arnold arnold” factorization)
The idea that encryption techniques would become useless overnight is scary both for banking and finance. I wonder if any such organisation has this risk on their radar?
• Henning Dekant says:
Shore’s algorithm’s notoriety is certainly due to its impact on encryption and I would expect an IT security expert worth his salt to be aware of this – e.g. Hakin9 is not a science periodical but is covering exclusively computing security issues.
On the other hand most successful security exploits are due to social engineering. We humans are unfortunately so easily fooled that a brute force cryptographic attack is usually not at all required.
2. Geordie says:
Hi Henning!
Shore –> Shor (see Shor’s Algorithm).
3. Pingback: Quantum Cryptography Made Obsolete? | Wavewatching
3cd397b9883d8162 |
In Feynman's book "Quantum Mechanics and Path Integrals" Feynman states that
the probability $P(b,a)$ to go from point $x_a$ at time $t_a$ to the point $x_b$ at the time $t_b$ is $P(b,a) = \|K(b,a)\|^2$ of an amplitude $K(b,a)$ to go from $a$ to $b$. This amplitude is the sum of contributions $\phi[x(t)]$ from each path. $$ K(b,a) = \sum_{\text{paths from $a$ to $b$}} \phi[x(t)]$$ The contributions of a path has a phase proportional to the action $S$: $$ \phi[x(t)] = \text{const}\ e^{(i/\hbar)S[x(t)]}$$
Why must the contribution of a path be $\sim e^{(i/\hbar)S[x(t)]}$? Can this be somehow derived or explained? Why can't the contribution of a path be something else e.g. $\sim \frac{S}{\hbar}$, $\sim \cos(S/\hbar)$, $\log(S/\hbar)$ or $e^{- (S[x(t)]/\hbar)^2}$ ?
Edit: I have to admit that in the first version of this question, I didn't exclude the possibility to derive the contribution of a path directly from Schrödinger's equation. So answers along this line are valid although not so interesting. I think when Feynman developed his formalism his goal was to find a way to quantize systems, which cannot be treated by Schrödinger's equation, because they cannot be described in terms of a Hamiltonian (e.g. the Wheeler-Feynman absorber theory). So I think a good answer would explain Feynman's Ansatz without referring to Schrödinger's equation, because I think Schrödinger's equation can only handle a specific subset of all the systems that can be treated by Feynman's more general principle.
It's chosen in such a way that for actions that have a classical interpretation, you recover classical mechanics and the variational principle. It's motivated theoretically by the correspondence principle. But it is really just because that's the way nature seems to work. – Raskolnikov Apr 14 '11 at 20:56
@Raskolnikov It is not chosen that way. Rather, this is a result from computing the quantity $K(b,a)$ rigorously using the Schrödinger equation and the time evolution operator, together with the Trotter-Kato formula. – Lagerbaer Apr 14 '11 at 23:13
@Lagerbaer: but the Schrödinger equation is yet another way of formulating QM, just like the path-integral method. It's obvious that there is a correspondence between them, but both essentially have to be derived from the correspondence-principle (to classical physics) and the match of experiments (Aharonov-Bohm, dual-slit etc.). – BjornW Apr 14 '11 at 23:22
@Bjorn Wesen: the classical-quantum correspondence does not always exist (see various comments by Lubos on this site). Last time I checked, the canonical (Hilbert spaces, operators, etc.) formalism was still the correct one (i.e. correct in all limits). For the cases when a path-integral formalism can be found, the latter has to agree with the canonical formulation. – genneth Apr 15 '11 at 7:35
5 Answers
There are already several good answers. Here I will only answer the very last question, i.e., if the Boltzmann factor in the path integral is $f(S(t_f,t_i))$, with action $S(t_f,t_i)=\int_{t_i}^{t_f} dt \ L(t)$, why is the function $f:\mathbb{R}\to\mathbb{C}$ an exponential function, and not something else?
Well, since the Feynman "sum over histories" propagator should have the group property
$$ K(x_3,t_3;x_1,t_1) = \int_{-\infty}^{\infty}\mathrm{d}x_2 \ K(x_3,t_3;x_2,t_2) K(x_2,t_2;x_1,t_1),$$
one must demand that
$$f(S(t_3,t_2))f(S(t_2,t_1)) = f(S(t_3,t_1)) = f(S(t_3,t_2)+S(t_2,t_1)),$$
$$f(S(t_1,t_1)) = 1.$$
So the question boils down to: How many continuous functions $f:\mathbb{R}\to\mathbb{C}$ satisfy $f(s)f(s^{\prime})=f(s+s^{\prime})$ and $f(0)=1$?
Answer: The exponential function!
Proof (ignoring some mathematical technicalities): If $s$ is infinitesimally small, then one may Taylor expand
$$f(s) = f(0) + f^{\prime}(0)s +{\cal O}(s^{2}) = 1+cs+{\cal O}(s^{2}), $$
with some constant $c:=f^{\prime}(0)$. Then one calculates
$$ f(s)=\lim_{n\to\infty}f(\frac{s}{n})^n =\lim_{n\to\infty}\left(1+\frac{cs}{n}+o(\frac{1}{n})\right)^n =e^{cs}, $$
i.e., the exponential function.
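A quick numerical illustration of that last limit (the complex constant $c$ and the value $s$ below are arbitrary, chosen only for the check):

```python
# Numerical check that (1 + c*s/n)**n -> exp(c*s) as n grows,
# for an arbitrary complex constant c and real s.
import cmath

c, s = 0.7 + 1.3j, 2.0
for n in (10, 100, 1000, 10000):
    print(n, (1 + c * s / n) ** n)
print("exact:", cmath.exp(c * s))
```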
outstanding answer! i was thinking this myself and couldn't come up with a good argument other than the plane wave limit. This makes the deduction rigorous – lurscher Apr 15 '11 at 16:16
Wow, that was an eye opener for me. If I could, I would upvote it much more. Let me remark, that I read that the exponential function is also a solution even if $s$ and $s'$ are allowed to be matrices, as long as these matrices commute. I don't know if there can be a solution if $s$ and $s'$ don't commute. – asmaier Apr 15 '11 at 19:52
small comment; the group property of the propagator is implied by demanding time translation invariance of the propagator description (it doesn't imply that the propagator itself needs to be time translation invariant though) – lurscher Apr 15 '11 at 19:53
You start by writing down the probability amplitude to find a particle at $y$ at time $t$ when it was at $x$ at time $0$, denoted as $K(y,t;x,0)$. You get this by solving the Schrödinger equation with the initial condition $\psi(y,0) = \delta(y-x)$. Then, $K(y,t;x,0) = \psi(y,t)$. Thus, to solve this, we need to know the time development of the initial condition $\psi(y,0)$.
Let us start with the simple example of a free particle. This is easiest solved in momentum-representation, obtained by Fourier-transforming $\psi(y,t)$:
$$\psi(y,t) = \frac{1}{\sqrt{2\pi\hbar}} \int dp \exp(ipy/\hbar) \tilde \psi(p,t)$$ For $\tilde \psi$, the Schrödinger equation gives $$\tilde \psi(p,t) = \frac{1}{\sqrt{2\pi\hbar}} \exp \left(-\frac{i}{\hbar} \left[\frac{p^2 t}{2m} - px\right]\right)$$ This can be inserted back into the equation for $\psi(y,t)$. The integral over $p$ can be solved exactly. The final result is $$K_\text{free}(y,t;x,0) = \sqrt{\frac{m}{2\pi i\hbar t}} \exp\left(\frac{im(x-y)^2}{2\hbar t}\right)$$
Next step: The solution of the Schrödinger equation can generally be written as $$|\psi, t\rangle = \exp\left(-\frac{iHt}{\hbar}\right) |\psi,0\rangle$$ with $H$ being the Hamiltonian of your system. Writing $H = T+V$, the general formula for $K$ becomes $$K(y,t;x,0) = \langle y \mid \exp(-\frac{i(T+V)t}{\hbar}) \mid x \rangle$$ We use the Trotter-Kato Formula (which holds under certain conditions which I won't go into in detail at this point). It allows us to write $$K(y,t;x,0) = \lim_{N\rightarrow \infty} \langle y \mid \left[ \exp(-\frac{iTt}{N\hbar}) \exp(-\frac{iVt}{N\hbar})\right]^N \mid x\rangle$$ We insert the unity operator, decomposed as $1 = \int dx | x \rangle \langle x |$, $N-1$ times, which gives us $$K(y,t;x,0) = \int dx_1 dx_2 \dots dx_{N-1} \prod_{j=0}^{N-1} \langle x_{j+1} \mid \exp(-iTt/N\hbar) \exp(-iVt/N\hbar) \mid x_j \rangle$$ Note that $V$ as an operator acting on $|x\rangle$ gives just $V(x) |x\rangle$. And $\langle x_{j+1} | \exp(-iTt/N\hbar) | x_j \rangle$ gives us just the contribution of a free particle, i.e. $$\sqrt{\frac{mN}{2\pi i\hbar t}} \exp\left(\frac{imN}{2\hbar t}(x_{j+1} - x_j)^2\right).$$ If we abbreviate $\tau = t/N$, we can write: $$K(y,t;x,0) = \lim_{N\rightarrow \infty} \int dx_1 dx_2 \dots dx_{N-1} \left( \frac{m}{2\pi i\hbar \tau}\right)^{N/2} \times$$ $$\exp \left(\frac{i\tau}{\hbar} \sum_{j=0}^{N-1} \left[ \frac{m}{2}\left(\frac{x_{j+1}-x_j}{\tau}\right)^2 - V(x_j)\right]\right)$$
The next step is to see the values $x_j$ as points of a certain path $x(t')$ evaluated at points $t' = t_j = j\tau = jt/N$. If $\tau$ is small, we write $$\sum_{j=0}^{N-1} \tau f(t_j) \rightarrow \int f(t') dt'$$ $$\frac{x_{j+1} - x_j}{\tau} \rightarrow \dot x(t')$$ where the dot denotes the time-derivative.
The argument of the exponential then becomes $$\frac{i}{\hbar} \int_0^t dt' \left( \frac{m\dot x(t')^2}{2} - V(x(t'))\right)$$ You will have no trouble identifying the integrand as the Lagrangian $L = T-V$. The integral itself, therefore, is the classical action.
Thus, the formula we have for $K$ can be interpreted as the sum over all possible paths from $(x,0)$ to $(y,t)$ of the function $\exp\left(\frac{i}{\hbar} S(t,0)\right)$ of the classical action.
The interpretation of this was given in other answers: The classical path is that which minimizes the action, i.e. the action is stationary for the classical path. In your path-integral formula, this path will have a large contribution, as all paths that vary only slightly from the classical path will still have pretty much the same phase factor as the classical one, leading to constructive interference of those paths. For paths far from the classical path, the action will vary more strongly among the paths, so that all possible phases occur, which ultimately cancel each other out.
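One can also see this stationarity numerically. A minimal sketch for a free particle (units with $\hbar = m = 1$, and a made-up deformation profile): the change of the discretized action under a small deformation is second order in the deformation near the classical straight-line path, but first order near a non-classical path, which is exactly why only the neighbourhood of the classical path interferes constructively.

```python
# Minimal illustration (free particle, units hbar = m = 1): the discretized
# action S = sum_j (m/2) * ((x_{j+1}-x_j)/dt)**2 * dt is stationary around the
# classical straight-line path but not around a non-classical reference path.
import numpy as np

N, T = 100, 1.0
t = np.linspace(0.0, T, N + 1)
dt = T / N

def action(x):
    v = np.diff(x) / dt
    return 0.5 * np.sum(v ** 2) * dt

x_cl = t / T                      # classical path from x(0)=0 to x(T)=1
bump = np.sin(np.pi * t / T)      # deformation vanishing at the endpoints
x_ref = x_cl + 0.5 * bump         # some non-classical reference path

for eps in (0.01, 0.02, 0.04):
    dS_cl = action(x_cl + eps * bump) - action(x_cl)
    dS_ref = action(x_ref + eps * bump) - action(x_ref)
    print(f"eps={eps:4.2f}   dS near classical: {dS_cl:8.5f}   near non-classical: {dS_ref:8.5f}")
# dS near the classical path scales like eps**2; near the other path like eps.
```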
Reference: A lecture on advanced quantum mechanics given by Prof. Crispin Gardiner. Lecture notes are, unfortunately, not freely available. It was a good lecture :)
@Lagerbaer I don't think the OP was asking for the derivation which can be found in any text on QFT. – user346 Apr 15 '11 at 4:45
+1: Nobody who spent their time to genuinely try to answer deserves a negative vote. – timur Apr 15 '11 at 8:13
@Timur you're right that @Lagerbaer did put in a great deal of effort. However, it was my feeling that the OP asked more for a physical explanation than a mathematical one. The complete derivation is of course nice to see but it does not provide insight into why the appropriate quantity, to associate the phase of the particle's world-line with, is the classical action $S$? In this regard I think first @Bjorn's and secondly @Lurscher's answers do a better job. – user346 Apr 15 '11 at 11:47
+1 for taking the time to write this up. Whether a direct answer to the OP's question or not, it is certainly related and makes for some interesting reading. – qftme Apr 15 '11 at 13:38
The OP asked if the form could be derived. If you start from the Schrödinger equation, it can indeed be derived. It wasn't specified where one should start with the derivation. – Lagerbaer Apr 15 '11 at 14:37
If you accept that Quantum Mechanics is built upon the fact that you sum complex amplitudes of processes (see this previous Question/Answers about this fact) you would expect that a sum over multiple paths behaves like a sum of different complex phases: $$M \sim \sum e^{i\,\text{phase}}$$
Applying the variational principle to the phase, you see that the paths which vary their phase the least will contribute the most to the sum (because the others will average each other out). Add the fact that you want the classical path to be the main contribution (because we want to match classical physics, this is the correspondence principle), and that the classical path is the path where the action $S$ varies the least, and you can identify the phase with the action and get $phase \sim S[x(t)]$. Then you get $1 / \hbar$ as an experimental constant.
I'm not sure if this is a satisfactory answer, but most of the "strangeness" here comes from the QM superposition principle in the first place anyway. Note that the variational principle in classical mechanics was known and used before QM was invented and had the teleological property of "sniffing out" paths of least action. In the QM path-integral method this is at least explained from a more local point of view.
An approach similar to Lagerbaer’s may be formulated without reference to the probability function. The overlap between states at different times $\langle\psi_{t^\prime}|\psi_t\rangle$ may be written according to a product of $|q_{t+n\delta t}\rangle\langle q_{t+n\delta t}|$ at different time slices. The wave overlap is then $$ \langle\psi_{t^\prime}|\psi_t\rangle~=~\lim_{\delta t\rightarrow 0}\lim_{N\rightarrow\infty} \prod_{n=0}^N \int dq_{t+n\delta t}\langle\psi_{t+n\delta t}|q_{t+n\delta t}\rangle\langle q_{t+(n-1)\delta t}|\psi_{t+(n-1)\delta t}\rangle. $$ This description of the overlap is then according to snapshots determined by projectors, which in the limit that the time increment vanishes recover the density matrix.
We now focus on a product defined on one particular time slice. Each infinitesimal overlap is written as $$ \psi^*(q,t)\psi(q,t~-~\delta t)~=~ \psi^*(t)\Big(\psi(q,t)~+~\delta t{\frac{d\psi}{dt}}(q,t)~+~O(\delta t^2)\Big). $$ The term to $O(\delta t)$ is easily seen to be $$ \delta t {\frac{d\psi}{dt}}(q,t) ~=~\delta t \Big(\frac{\partial\psi}{\partial t}(q,t)~+~ \frac{dq}{dt}\nabla \psi(q,t)\Big)~=~\frac{i\delta t}{\hbar}\Big(\frac{dq}{dt} p~-~ H\psi(t)\Big). $$ The integrand of the infinitesimal overlap is $$ \psi^*(q,t)\psi(q,t~-~\delta t)~=~ e^{\frac{i\delta t}{\hbar} \big({\dot q} p~-~H\big)}\psi^*(t)\psi(t) $$ This is a way of deriving the probability function above.
I think the justification goes like this:
first a couple observations from the classical limit:
1) Paths that are far from the classical solution are not near an extremum value of the action, which means that the action will have a non-zero variance among the paths that are neighbours of this path.
2) The classical solution itself is an extremum value of the action (either a minimum or a maximum) in the space of classical paths. In the neighbourhood of this path, the variance of the action will approach zero.
So, an approach to construct a quantum limit would be to think about the double-slit experiment and see that the interference pattern is constructed by taking two plane-wave paths that go from the source to each slit, and then from the slit to a point on the interference screen.
In this case, none of the paths matches the classical path exactly. If you write a plane wave you'll see that its argument is $p x - E t$, and you'll notice that this is actually the action of a free particle. So you can think of the de Broglie wave as a plane wave with the action of a free particle, $e^{i(kx - \omega t)}$.
From this, it is just a small step to infer that in general, when paths are not restricted by a double slit, you need to allow for all possible classical paths, and when the action is more complex than that of a free particle, you need to replace the wave-function argument by the action of the path.
6d017a651526cd58 |
Dirac, Paul Adrien Maurice
(b. Bristol, England, 8 August 1902; d. Miami, Florida, 20 October 1984)
quantum mechanics, relativity, cosmology.
Dirac was one of the greatest theoretical physicists in the twentieth century. He is best known for his important and elegant contributions to the formulation of quantum mechanics; for his quantum theory of the emission and absorption of radiation, which inaugurated quantum electrodynamics; for his relativistic equation of the electron; for his “prediction” of the positron and of antimatter; and for his “large number hypothesis” in cosmology. Present expositions of quantum mechanics largely rely on his masterpiece The Principles of Quantum Mechanics (1930), and a great part of the basic theoretical framework of modern particle physics originated in his early attempts at combining quanta and relativity. Not only his results but also his methods influenced the way much of theoretical physics is done today, extending or improving the mathematical formalism before looking for its systematic interpretation.
Dirac spent most of his academic career at Cambridge and received all the honors to which a British physicist may reasonably aspire. He became a fellow of St. John’s College at the age of twenty-five, a fellow of the Royal Society in 1930, Lucasian professor of mathematics in 1932, a Nobel laureate in 1933 for his “discovery of new fertile forms of the theory of atoms and for its applications,” a Royal Medalist in 1939, and a Copley Medalist in 1952. He was frequently invited to lecture or to do research abroad. For instance, he traveled around the world in 1929, visited the Soviet Union several times in the 1930’s, and was a fellow at the Institute for Advanced Study, Princeton, in the years 1947–1948 and 1958–1959. In 1973 he was made a member of the Order of Merit. Dirac retired in 1969 but resumed his scientific career in 1971 at Florida State University. In January 1937 Dirac married Margit Wigner, the sister of Eugene Wigner; they had two daughters.
Dirac made his mark through his scientific writings. He had few students; the fundamental problems that he tackled were not for beginners. Unlike many of his colleagues, he was little involved in war projects.
Bristol . Dirac’s mother, Florence Hannah Holten, was British; his father, Charles Adrien Ladislas Dirac, was an émigré from French Switzerland. His father did not receive friends at home and forced Paul to silence by imposing French as the language spoken at the dinner table. From childhood Dirac was a loner, enjoying the contemplation of nature, long walks, or gardening more than social life. He was not much inclined to collaboration and did his best thinking by himself. At the Merchant Venturer’s Technical College, where his father taught French, he excelled in science and mathematics, and neglected literary and artistic subjects.
From 1918 to 1921 Dirac trained to be an electrical engineer at Bristol University. This background, he explained later, strongly influenced his way of doing physics: he learned how to tolerate approximations when trying to describe the physical world and how to solve problems step by step. He also developed a nonrigorous constructive conception of mathematics, beautifully articulating symbols before precisely defining them, very much as the British physicist Oliver Heaviside did in his calculus.
In 1921 the postwar economic depression prevented Dirac from finding a job, so he accepted two years of free tuition from the mathematics department at Bristol. During this period he was influenced by an outstanding professor of mathematics, Peter Fraser, who convinced him that rigor was sometimes useful and imparted to him his love for projective geometry, with its derivations of complicated theorems by means of simple one-to-one correspondences.
At Bristol, Dirac also attended Charlie Dunbar Broad’s philosophy course for students of science, in which Broad criticized the fundamental concepts of science on the basis of Alfred North Whitehead’s principle of extensive abstraction and argued that the ideal objects of mathematics must be constructed from the mutual relations—not the inner structure— of the roughly perceived objects of nature. This genesis was supposed to explain the relevance of geometrical concepts when they were applied to the physical world, particularly the success of Einstein’s theory of relativity. For Broad, theorists were best when they were their own philosophers. Dirac also read John Stuart Mill’s System of Logic (1843), but derived the opposite conclusion: that philosophy was “just a way to think about discoveries already made.”
Broad’s lectures included a serious account of the theory of relativity, which immediately fascinated Dirac. Arthur S. Eddington’s Space, Time and Gravitation (1920), written in the euphoric period after the British eclipse expedition confirming Albert Einstein’s theory in 1919, made a further impression on Dirac. Evidence of epistemological comments by ‘the fountainhead of relativity in England’ can be found in several places in Dirac’s work.
Cambridge . In the fall of 1923, Dirac entered St. John’s College, Cambridge, as a research student, thanks to an 1851 Exhibition studentship and a grant from the Department of Scientific and Industrial Research for work in advanced mathematics. He hoped to study relativity with Ebenezer Cunningham, but was assigned Ralph Fowler as his adviser. Fowler was not only a preeminent specialist in statistical mechanics but also the enthusiastic leader of quantum theoretical research at Cambridge. As a correspondent of Niels Bohr, he regularly got information about the latest advances or failures in atomic theory. As the son-in-law of Ernest Rutherford, he took a strong interest in the experimental work at the Cavendish Laboratory (at Cambridge, theoretical physics was part of the Faculty of Mathematics).
Because of his retiring personality and the relative isolation of the various colleges, Dirac did not have any regular scientific interlocutor but Fowler. To compensate, he joined two physicists’ clubs, the ∇²V Club and the more casual Kapitza Club, where theorists and experimenters discussed recent problems and welcomed foreign visitors. He also attended the colloquia at the Cavendish and, to keep up with developments in fundamental mathematics, took part in the tea parties of the distinguished Cambridge mathematician Henry Frederick Baker, who was concerned primarily with projective geometry.
Before arriving in Cambridge, Dirac did not know about the Bohr atom. This gap in his knowledge was quickly and excellently filled by Fowler’s detailed lectures. Dirac also read Arnold Sommerfeld’s textbook Atomic Structure and Spectral Lines (English ed., 1923), Bohr’s On the Application of the Quantum Theory to Atomic Structure (1923), and Max Born’s Vorlesungen über Atommechanik (1925). These three fundamental texts involved advanced techniques of Hamiltonian dynamics (to derive the most general expression of the rules of quantization), which Dirac learned from Edmund T. Whittaker’s standard text, A Treatise on the Analytical Dynamics of Particles and Rigid Bodies (1904). Perhaps more than anything in quantum theory he enjoyed reading Eddington’s Mathematical Theory of Relativity (1923), which developed the tensor apparatus of Einstein’s and Hermann Weyl’s theories of gravitation. They became his models of beauty in mathematical physics.
Fowler was quick to detect the qualities of his new student and began to encourage his originality. Only six months after arriving in Cambridge, Dirac started to publish substantial research papers. Whenever his subject had not been imposed by Fowler, he tried to clarify and to generalize in a relativistic way points that he had found obscure in his readings—for instance, the definition of a particle’s speed according to Eddington, or the covariance of Bohr’s frequency condition, or the expression of the collision probability in the then-fashionable “detailed balancing” calculations. The main characteristics of Dirac’s style showed through in this early work: directness, economy in mathematical notation, and little reference to past work.
At the end of 1924, following suggestions by Fowler and Darwin, Dirac focused on the more fundamental problem of generalizing the application of Paul Ehrenfest’s adiabatic principle in quantum theory. According to this principle, the quantum conditions for a complicated system could be obtained by infinitely slow (“adiabatic”) deformation of a simpler system for which one knew to which variables q the Bohr-Sommerfeld rule ∫pdq = nh applied.
Another method, introduced by Karl Schwarzschild and systematized by Johannes Burgers, applied to the so-called multiperiodic systems, the configuration of which can be expressed in terms of s periodic functions with s incommensurable frequencies ω1, ω2, …, ωs. One had only to introduce the “angle” variables wα = ωαt and the corresponding Hamiltonian conjugates, the “action” variables jα. In the nondegenerate case for which s is also the number of degrees of freedom, the quantum conditions can simply be written jα = nαħ, where 2πħ is Planck’s constant. Burgers showed that this procedure was equivalent to the adiabatic principle because the J’s are adiabatic invariants. Dirac increased both the rigor of the demonstration and its scope, including magnetic fields and degeneracy. He also tried to remove the restriction of multiperiodicity and to calculate the energy levels of the helium atom, but he failed. Presumably he believed that a good part of the difficulties of quantum theory could be solved by extension of the adiabatic principle without facing the basic paradoxes emphasized by Bohr and the Göttingen school. Bohr’s correspondence principle did not trigger Dirac’s interest as a hint toward a fundamentally new quantum mechanics. His only consideration of it was purely operational, as a set of rules to derive intensities of emitted radiation in the action-angle formalism.
Commutators and Poisson Brackets . In 1925 Bohr and Werner Heisenberg both brought their revolutionary spirit to Cambridge. Bohr lectured in May after being distressed by the results of Walther Bothe and Hans Geiger’s experiment confirming the light-quantum explanation of the Compton effect and making the paradoxical features of light more obvious than ever. According to Bohr, Pauli, and Born, the crisis in quantum theory had reached its climax. The world needed a new mechanics that would preserve the quantum postulates and agree asymptotically with classical mechanics. Heisenberg came to Cambridge in July 1925 with what soon proved to meet this expectation. He lectured at the Kapitza Club, on “term zoology and Zeeman botanics”—that is, on his latest theory of spectral multiplets and anomalous Zeeman effects. It is not known how much of this talk dealt with more recent ideas, nor if Dirac in fact attended it. Fowler certainly heard of Heisenberg’s brand-new “quantum kinematics” in private conversations, and asked to be kept informed.
In late August or early September, Fowler gave Dirac the proof sheets of Heisenberg’s fundamental paper, “A Quantum-Theoretical Reinterpretation (Umdeutung) of Kinematics and Mechanical Relations.” Heisenberg had replaced the position x of an electron by an array representing the amplitudes of virtual oscillators directly giving the observable properties of scattered or emitted radiation corresponding to the energy levels Em and En. To keep the new kinematics as analogous as possible to the classical one, he guessed the multiplication law of two arrays xnm and ynm from the corresponding rule for the Fourier coefficients of x and y, and obtained (xy)nm = Σk xnk ykm. In the same way he guessed the quantum version of the quantization rule (μ being the electron mass). At that point the most advanced quantum problem that Heisenberg could solve was the weakly anharmonic oscillator. The “essential difficulty,” he noticed, was the fact that, according to the new multiplication rule, xy ≠ yx.
Since there was no familiar Hamiltonian formalism in Heisenberg’s paper, it was about ten days before Dirac realized that the new multiplication law might solve the difficulties of quantum theory. He first looked for a relativistic generalization of Heisenberg’s scheme, but this proved premature. More successfully, he tried to connect it to a Hamiltonian formalism. The difference xy − yx, once evaluated for high quantum numbers and in terms of action-angle variables J and w, gave iℏ{x, y}, that is, the classical Poisson bracket {x, y} times iℏ. In other words, Heisenberg’s strange noncommutativity had a classical counterpart in the Poisson-bracket algebra of Hamiltonian mechanics. Dirac then assumed that the relation xy − yx = iℏ{x, y} held in general (far from the classical limit and for nonmultiperiodic systems) and provided the proper quantum conditions. For canonically conjugate variables p and q it reduced to qp − pq = iℏ, containing Heisenberg’s quantization rule.
Dirac was very pleased with this close analogy between classical and quantum mechanics because it allowed him to retain the ‘beauty’ of classical mechanics and to transfer Hamiltonian techniques to quantum mechanics. Hence he could develop very quickly a version of quantum mechanics more elegant than that developed at Göttingen.
q-Numbers . The identity between commutator and Poisson brackets led to the fundamental equations iℏġ = gH − Hg (for any dynamical variable g evolving with the Hamiltonian H) and qp − pq = iℏ (for any canonical couple), determining the formalism of quantum mechanics. Dirac thought that Heisenberg’s interpretation of the quantum variables in terms of matrices giving the observable properties of radiation was provisional and too restrictive; he preferred a symbolic approach, developing the algebra of abstract undefined “q-numbers” and looking only later for those numbers’ representation in terms of observable (ordinary) “c-numbers.” The domain of q-numbers had to be extensible, adapting to the further progress of the theory. Some of the axiomatic properties that Dirac imposed on them—for instance, the unicity of the square root and no divisor of zero—had to be dropped later because they cannot be realized in an algebra of operators. Dirac’s idea of q-numbers and his axioms for them most probably originated at Baker’s tea parties. In Baker’s Principles of Geometry there is an abstract noncommutative algebra of coefficients for linear combinations of points, which permitted elegant and condensed proofs of theorems in projective geometry (where noncommutativity means dropping Pappus’s theorem).
In the case of multiperiodic systems, Dirac could show that his fundamental equations were satisfied by an algebra of matrices with rows and columns corresponding to integral values (times h) of the action variables J. In this representation the energy matrix is diagonal, which suggests that the diagonal elements represent the spectrum of the system. Through a correspondence argument Dirac identified the matrix element \(x_{J'J''}\) of the electric polarization with the amplitude of the corresponding transition \(J' \to J''\), in accordance with Heisenberg's original definition of the position matrix. In this representation Dirac could solve the hydrogen atom in early 1926 (a little later than Wolfgang Pauli, but independently). Within a few months he also found the basic commutation and composition rules for angular momentum in multielectron atoms, and he made the first relativistic quantum-mechanical calculation giving the characteristics of Compton scattering. Physicists in Copenhagen were impressed by this achievement, the more so because Dirac treated the field classically, without light quanta.
Dirac assembled all these bright results in his doctoral dissertation, completed in June 1926. At that time he had solved by himself about as many quantum problems as the entire Göttingen group together. In principle his q-numbers were more general and more flexible than the Göttingen matrices, which were rigidly connected to a priori observable quantities. But Dirac had been able to solve the quantum equations only insofar as action-angle variables could be introduced into the corresponding classical problem. To proceed further, he needed a new method of finding representations of q-numbers. That is exactly what Erwin Schrödinger made available in a series of papers submitted for publication between January and June 1926.
The Impact of Schrödinger's Equation. Dirac's first reaction to Schrödinger's equation was negative: Why a second quantum mechanics, since there already was one? Why propose that matter waves were analogous to light waves, since the properties of light waves were already so paradoxical? In a letter written on 26 May 1926, Heisenberg convinced him that Schrödinger's equation (for one degree of freedom) provided a simple and general method to calculate the matrix elements of a general function F of p and q just by forming the integrals \(F_{nm} = \int \psi_n^*\, F\, \psi_m\, dq\). Then, in an astonishingly short time, Dirac accumulated new essential results. The time dependence of the matrix elements could be supplied by the equation \(H\Psi = i\hbar\,\partial\Psi/\partial t\), suggested by the relativistic substitution \(p_\mu \to i\hbar\,\partial/\partial x^\mu\). A set of identical particles, following Heisenberg's idea of eliminating unobservable differences from the formalism, had to be represented by either symmetric or antisymmetric wave functions in configuration space, the first corresponding to Bose-Einstein statistics and the second to Pauli's exclusion principle. Finally, Dirac developed the time-dependent perturbation theory to calculate Einstein's B coefficients of absorption and stimulated emission. He also improved his calculation of the Compton effect. To reach these physical results he did not subscribe to Schrödinger's picture of \(|\Psi|^2\) as a density of electricity; instead he relied on Heisenberg's interpretation of the polarization matrix or on Born's statistical interpretation of the Ψ function.
Interpretation of Quantum Dynamics. Dirac was not satisfied by the provisional and parochial assumptions made to interpret q-numbers and the quantum formalism: according to Heisenberg, the diagonal elements of H and the elements of the polarization matrix had an immediate meaning; according to a paper by Born (June 1926), the coefficients \(c_n\) in the development over the set of eigenfunctions \(\Psi_n\) gave the probability \(|c_n|^2\) for the system to be in the state n; and, according to Schrödinger's fourth memoir (June 1926), \(|\Psi|^2\) was "a sort of weight function in configuration space." In Dirac's view, a general interpretation should be based on a transformation theory, as in the theory of relativity (and as emphasized by Eddington).
To arrive at the interpretation, Dirac first worked out the transformations connecting the various matrix representations of his fundamental equations \(qp - pq = i\hbar\) and \(i\hbar\dot g = gH - Hg\). He called ξ and α two maximal sets of commuting q-numbers; ξ′ and α′ corresponding eigenvalues; and (ξ′/α′) the transformation from the representation where ξ is diagonal to the one where α is diagonal, acting on the representation \(g_{\xi'\xi''}\) of g according to \(g_{\alpha'\alpha''} = \iint (\alpha'/\xi')\, g_{\xi'\xi''}\, (\xi''/\alpha'')\, d\xi'\, d\xi''\). In this framework the solutions of the (time-independent) Schrödinger equation were nothing but a particular transformation for which α contains H and ξ contains the position. The notations, introduced for the sake of economy and in obvious analogy to tensor notation, proved to be extremely convenient and spread widely, especially after their later improvement (1939) into the "bra-ket" (or "bra" and "ket") notation. In fact, the symbolic rules were better defined than the mathematical substratum, which was made clear only much later by mathematicians. For instance, the treatment of continuous spectra on the same footing as the discrete ones necessitated singular "δ-functions" (as in \((x'/x'') = \delta(x' - x'')\)), perceived by Dirac as limits of sharply peaked functions but raised today to the rank of Schwartz distributions.
To interpret his transformations, Dirac needed only a minimal assumption suggested by the correspondence principle: that for an arbitrary physical quantity g expressed in terms of ξ and the canonical conjugate η, the diagonal element \(g_{\xi'\xi'}\) signifies the average of the corresponding classical g for ξ = ξ′ and η uniformly distributed. From \(\delta(g - g')_{\xi'\xi'} = |(\xi'/g')|^2\) it follows that \(|(\xi'/g')|^2\, dg'\) is proportional to the probability that g is equal to g′ within dg′ when ξ = ξ′. Dirac finished this transformation theory in November 1926 at Copenhagen.
In Göttingen, Pascual Jordan obtained roughly the same results at the same time, though from a different point of view. He defined axiomatically a concept of canonical conjugation at the quantum level and looked for the transformations (ξ, η) → (α, β) from one canonical couple to another. In this more general framework the quantum variables did not necessarily have a classical counterpart, and conjugation did not necessarily correspond to Poisson-bracket conjugation. In other words, Dirac's transformation theory was more constraining than Jordan's, and gave more precise directions for the future extensions of quantum mechanics.
Dirac was also original in his conception of the role of probability in quantum mechanics. He thought that probabilities entered into the description of quantum phenomena only in the determination of the initial state (still described in terms of p’s and q’s), and not necessarily in the behavior of an isolated system. But, as Bohr had said at the Solvay Conference in 1927, isolated systems were unobservable. Dirac then assumed that the state of the world was represented by its wave function Ψ and that it changed abruptly during a measurement, whereupon “nature made a choice.”
Dirac retained his basic machinery of transformations in his subsequent lectures on quantum mechanics, but he introduced a substantial change in his fundamental textbook, The Principles of Quantum Mechanics (1930). In the original exposition of transformation theory, he had carefully avoided the concept of quantum state, presumably to depart from Schrödinger's idea of Ψ as a state. In his Principles, however, he presented the principle of superposition and the related concept of a space of states as capturing the most essential feature of quantum theory: the interference of probabilities. It seems plausible that this move was inspired by Bohr's insistence on the superposition principle and by John von Neumann's and Hermann Weyl's formulations of quantum mechanics, in which Hilbert spaces played a central role. From this perspective transformations were just a change of basis in the space of states. The correspondence with the Hamiltonian formalism appeared only in a later chapter of the book.
A New Radiation Theory. Dirac liked his transformation theory because it was the outcome of a planned line of research and not a fortuitous discovery. He forced his future investigations to fit it. The first results of this strategy were almost miraculous. First came his new radiation theory, in February 1927, which quantized for the first time James Clerk Maxwell's radiation in interaction with atoms. Previous quantum-mechanical studies of radiation problems, except for Jordan's unpopular attempt, retained purely classical fields. In late 1925 Jordan had applied Heisenberg's rules of quantization to continuous free fields and obtained a light-quantum structure with the expected statistics (Bose-Einstein) and dual fluctuation properties. Dirac further demonstrated that spontaneous emission and its characteristics—previously taken into account only by special postulates—followed from the interaction between atoms and the quantum field. Essential to this success was the fact that Dirac's transformation theory eliminated from the interpretation of the quantum formalism every reference to classical emitted radiation, contrary to Heisenberg's original point of view and also to Schrödinger's concept of Ψ as a classical source of field.
This work was done during Dirac's visit to Copenhagen in the winter of 1927. Presumably to please Bohr, who insisted on wave-particle duality and equality, Dirac opposed the "corpuscular point of view" to the quantized electromagnetic "wave point of view." He started with a set of massless Bose particles described by symmetric Ψ waves in configuration space. As he discovered by "playing with the equations," this description was equivalent to a quantized Schrödinger equation in the space of one particle; this "second quantization" was already known to Jordan, who during 1927 extended it into the basic modern quantum-field representation of matter. Dirac limited his use of second quantization to electromagnetic radiation: to establish that the corpuscular point of view, once brought into this form, was equivalent to the wave point of view.
The Dirac Equation. An even more astonishing fruit of Dirac's transformation theory was his relativistic equation of the electron. He and many other theorists had already made use of the most obvious candidate for such an equation—\((\partial^\mu\partial_\mu + m^2)\Psi = 0\) (Klein-Gordon)—but it did not include the spin effects necessary to explain atomic spectra. More crucially for Dirac, it could not fit into the transformation theory because it could not be rewritten in the form \(i\hbar\,\partial\Psi/\partial t = H\Psi\). To be both explicitly relativistic and linear in \(\partial/\partial t\), the new equation had to take the form \((i\hbar\gamma^\mu\partial_\mu - m)\Psi = 0\) or, more explicitly, \(i\hbar\,\partial\Psi/\partial t = (\boldsymbol{\alpha}\cdot\mathbf{p} + \beta m)\Psi\). For the spectrum to be limited to values satisfying Einstein's relation \(E^2 = p^2 + m^2\), the coefficients α and β had to be such that \(\alpha_i\alpha_j + \alpha_j\alpha_i = 2\delta_{ij}\), \(\beta^2 = 1\), and \(\alpha_i\beta + \beta\alpha_i = 0\).
The simplest entities satisfying these relations are 4 × 4 matrices, as Dirac noted with the help of Pauli's σ matrices (such that \(\sigma_i\sigma_j + \sigma_j\sigma_i = 2\delta_{ij}\)). Surprisingly, the new equation included spin effects, the value 2 of the gyromagnetic factor, and the correct fine-structure formula (Sommerfeld's), as worked out approximately by Dirac and exactly by Darwin and Walter Gordon. Other theorists (Pauli, Darwin, Jordan, Hendrik Kramers) had been searching for a wave equation integrating spin and relativistic effects, but they all started by assuming the existence of spin, either as an intrinsic particle rotation or as a wave polarization. In contrast, the key to Dirac's success was his persistent adherence to the simplest classical model, the point electron, as a basis for quantization. Spin effects, as might have been expected from their involving ℏ, were a consequence of relativistic quantization.
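One conventional realization of these relations (the standard Dirac-Pauli representation, given here for illustration) builds the 4 × 4 matrices from 2 × 2 blocks of Pauli matrices:
\[
\alpha_i = \begin{pmatrix} 0 & \sigma_i \\ \sigma_i & 0 \end{pmatrix},
\qquad
\beta = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix},
\]
so that \(\alpha_i\alpha_j + \alpha_j\alpha_i = 2\delta_{ij}\), \(\beta^2 = 1\), and \(\alpha_i\beta + \beta\alpha_i = 0\) follow directly from \(\sigma_i\sigma_j + \sigma_j\sigma_i = 2\delta_{ij}\).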
Antimatter, Monopoles. The Dirac equation played an essential role not only in atomic physics but also in high-energy physics, through the Klein-Nishina and Møller formulas describing the absorption of relativistic particles in matter. Nevertheless, it presented several strange features that enhanced the "magic" of Dirac's work: a new type of relativistic covariance involving the spinor representations of the Lorentz group, soon elucidated by Göttingen mathematicians; the trembling of the electron imagined by Schrödinger to harmonize the observed electron speed and the expectation value c of the speed operator from Dirac's equation; and, above all, the negative-energy difficulty.
The equation \(E^2 = p^2 + m^2\), applying to the spectrum of free Dirac electrons, has two roots: \(E = \pm(p^2 + m^2)^{1/2}\); therefore a Dirac electron with an initially positive energy should fall indefinitely by spontaneous emission toward states of lower and lower energy. To avoid this, Dirac imagined in late 1929 that the states of negative energy were normally filled up according to the exclusion principle and that holes in this "sea" would represent protons. If this were true, Dirac had in hand a grandiose unification of the particle physics of his time. But he still had to explain the ratio \(m_p/m_e\) between the proton mass and the electron mass. He thought that the disparity in mass might originate in the mutual interaction between the "sea" electrons. The precise numerical value of the ratio would perhaps appear at the same time as the other dimensionless constant, \(e^2/4\pi\hbar c\), as suggested by Eddington in a 1928 paper containing a mysterious derivation of this remarkable number.
Eddington believed that electromagnetic interactions could be reduced to "exchange" interactions, the change of sign of a wave function owing to the permutation of two fermions or to a full rotation being of the same nature as the change of phase following an electromagnetic gauge transformation. At the end of his speculation, he got \(e^2/4\pi\hbar c = 1/136\). In his search for a theoretical derivation of \(m_p/m_e\) and \(e^2/4\pi\hbar c\), Dirac also concentrated on the phase of the wave function.
It is usually assumed that the phase of a wave function is unambiguously defined in space (for a given gauge). But a multivalued phase is also admissible, Dirac noted, as long as the variation of phase around a closed loop is the same for any wave function (to preserve the regular statistical interpretation of Ψ, which rests on quantities such as the overlap integrals \(\int \Psi_m^*\Psi_n\, dq\)). To ensure the continuity of Ψ, the variation of phase around an infinitesimal closed loop can only be a multiple of 2π. This determines lines of singularities starting from (gauge-invariant) singular points. Now, following the relation between electromagnetic potential and phase implied by gauge invariance, the singular points must be identified with magnetic monopoles carrying the charge \(g = n\hbar c/2e\). If there is only one monopole g in nature, every electric charge must be a multiple of \(\hbar c/2g\). Dirac always considered this explanation of the quantization of charge in nature as the strongest argument in favor of monopoles.
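The quantization condition follows in a few lines (a standard reconstruction in Gaussian units, matching the charge \(g = n\hbar c/2e\) quoted above): gauge invariance ties the change of phase around a closed loop to \(e/\hbar c\) times the magnetic flux through it; summed over a closed surface surrounding the monopole, the flux is \(4\pi g\), and requiring the total change of phase to be an integer multiple of 2π gives
\[
\frac{e}{\hbar c}\, 4\pi g = 2\pi n
\quad\Longrightarrow\quad
g = \frac{n\hbar c}{2e}, \qquad n \in \mathbb{Z}.
\]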
Unfortunately, no other restriction on e followed from this line of reasoning, and Dirac missed his targets, the derivation of \(e^2/4\pi\hbar c\) and a subsequent determination of \(m_p/m_e\). But the latter was no longer needed: in 1931 he learned from Weyl that, owing to charge-conjugation symmetry, the holes in his "sea" theory necessarily carried the same mass as the electron. In the same year and in a single paper, he proclaimed the necessity of antielectrons (and also antiprotons) and the possibility of monopoles, and pondered the most efficient method of advance in theoretical physics. As in his quantum-theoretical work, he had first to work out the formalism in terms of abstract symbols denoting states and observables, and next to investigate the symbols' interpretation. This was, Dirac said, like Eddington's "principle of identification," according to which the interpretation of the fundamental tensors of general relativity came after their mathematical justification.
Dirac gave a full quantum-mechanical treatment of his monopoles in 1948 with the help of "nonphysical strings" allowing a Hamiltonian formulation. More recently, monopoles have been shown to be necessary in any non-Abelian gauge theory including electromagnetic interactions. But no experimental evidence for them has yet been found. On the other hand, the antielectron (or positron) was discovered by Carl Anderson and Patrick Blackett in the years 1932–1933, much earlier than foreseen by Dirac, although its concept faced the general prejudice against a charge-symmetric nature.
The Multitime Theory. After the discovery of the positron, most theorists agreed that the negative-energy difficulty was solved by the "sea" concept. Another fundamental difficulty, also rooted in one of Dirac's early works, his radiation theory, lasted much longer. In 1929, when working out their version of quantum electrodynamics, Heisenberg and Pauli discovered that the second order of approximation involved infinite terms, even when it was related to physical phenomena such as level shifts in atoms. The difficulty looked so serious that in the first edition of his Principles, Dirac omitted the quantization of the electromagnetic field and presented only the light-quantum configuration-space approach.
In 1932 Dirac tried to start a new revolution by giving up (for electrodynamics) the most basic requirement of his quantum-mechanical work: the Hamiltonian structure of dynamical equations. Imitating Heisenberg's revolutionary breakthrough, he declared that the new theory should eliminate unobservable things like the electromagnetic fields during the interaction process, and focus on their asymptotic values before and after the interaction. The electromagnetic field, he said, was nothing but a means of observation, and therefore should not be submitted to Hamiltonian treatment. On these lines he derived a set of equations that apparently were quite new; in fact, as Léon Rosenfeld soon pointed out, the new theory differed from that of Heisenberg and Pauli only in the use of the interaction representation (for which the quantum fields evolve as free fields) and of a multitime configuration space for electrons (instead of Jordan's quantized waves). Nonetheless, once it had been improved with the help of Vladimir A. Fock and Boris Podolsky, Dirac's formulation had the great advantage of being explicitly covariant, a feature particularly attractive to the Japanese quantum-field theorists Hideki Yukawa and Sin-Itiro Tomonaga.
The Large-Number Hypothesis. The infinities were still there. The discovery of the positron in 1932 gave some hope that the deformations of Dirac's "sea," the "vacuum polarization," would cure them. But such was not the case (although Wendell Furry and Victor Weisskopf made the infinities "smaller"), and Dirac himself judged the "sea" theory ugly. In 1936, depressed by this state of affairs, he hastily concluded from some experimental results of Robert S. Shankland that the energy principle should be given up in relativistic quantum theory. Needing some diversion, he turned to cosmological speculation following Eddington, who believed in a grand unification of atomic physics and cosmology. Dirac also knew Edward A. Milne, the other famous Cambridge cosmologist, who had been his supervisor for a term in 1925, and he had made friends with the American astronomer Howard P. Robertson, who believed in the expansion of the universe, during a short stay at Göttingen in 1927.
Like Eddington, Dirac focused on dimensionless numbers built from the fundamental constants of both atomic and cosmic phenomena; and he observed that there was a cluster of these numbers around \(10^{39}\), including the age of the universe in atomic time units and the ratio of electric forces to gravitational ones inside atoms. In 1937 he proposed the "large-number hypothesis," according to which numbers in the same cluster should be simply related. Consequently, the gravitation constant had to vary in time, as in Milne's cosmology and contrary to general relativity.
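For orientation (present-day values, given only to fix the order of magnitude; the figures are illustrative rather than Dirac's own), the ratio of the electric to the gravitational attraction between a proton and an electron is
\[
\frac{e^2/4\pi\varepsilon_0}{G\,m_p m_e}
\approx \frac{2.3\times10^{-28}\ \mathrm{N\,m^2}}{(6.7\times10^{-11})(1.7\times10^{-27})(9.1\times10^{-31})\ \mathrm{N\,m^2}}
\approx 2\times10^{39},
\]
while the age of the universe expressed in the atomic time unit \(e^2/4\pi\varepsilon_0 m_e c^3 \approx 10^{-23}\,\mathrm{s}\) is an enormous number of broadly the same order.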
Milne believed in an "extended principle of relativity," which stipulated that the universe should look the same from wherever it is observed, and completed it by the stricture that the cosmological theory should not include any constant having dimensions. To elaborate his own cosmology further, Dirac provisionally adopted Milne's first principle (he rejected it later, in 1939) but replaced the second hypothesis—which conflicted with Eddington's idea that atomic constants should play a role in cosmology—with his large-number hypothesis. As a result the spiral nebulas (then the furthest objects known, from whose behavior Edwin Hubble had deduced his recession law) had to recede in time according to \(t^{1/3}\), and the curvature of three-dimensional space had to be zero. Relativity did not enter this reasoning; Dirac expected it to play only a subsidiary role in cosmology, since Hubble's law provided a natural speed at any point of space—and therefore a natural time axis. To reconcile this position with his admiration for Einstein's theory of gravitation, Dirac introduced two different metrics for atomic and cosmic phenomena. Only the second one was ruled by Einstein's theory; the first one varied in time according to the large-number hypothesis.
Cosmology was not just a hobby for Dirac. Rather, as he explained in 1939, it embodied his notion of progress in physics—an ever-increasing mathematization of the world. In the old mechanistic conception, the equations of motion were mathematical but the initial conditions were given by observation. In the new cosmology the state preceding the initial explosion (posited by Georges Lemaître) was so simple that any complexity in nature pertained to the mathematical evolution. In this context Dirac even expressed the hope that the history of the universe would be only a history of the properties of the numbers from 1 to \(10^{39}\). From the 1970's to the end of his life he often came back to his cosmological ideas. His large-number hypothesis has been seriously considered by several astrophysicists in spite of its speculative character.
Classical Point Electron, Indefinite Metrics. The rest of Dirac's work, from the 1930's on, centered on quantum electrodynamics. Dirac remained true to the research method that he had developed in his early work. He never reached his ultimate aim, a mathematically clean theory, but left interesting by-products of his quest. All the creators of quantum mechanics attempted to deal with the disease of infinite self-energy. One possibility they discussed was a revision of the correspondence basis, the classical theory of electrodynamics, which already involved either ambiguities (dependence on the structure of a finite electron) or infinite self-energy (for point electrons). In 1938 Dirac created a finite theory of point electrons by a convenient "reinterpretation" of the Maxwell-Lorentz equations that canceled the infinite self-mass. In spite of its formal beauty, this theory involved unphysical "runaway" solutions (spontaneously accelerating electrons) that could be eliminated (at the classical level) only at the price of making superluminal signals possible. Not fully conscious of the latter difficulty, Dirac brought his equations to Hamiltonian form and quantized them. Unfortunately, only half the divergent integrals of quantum electrodynamics were cured by this procedure. To take care of the other half, Dirac imagined in 1942 a nonpositive (he called it "indefinite") metric in Hilbert space that allowed a new natural representation of the field commutation rules but implied negative probabilities difficult to interpret physically. Pauli admired the new formalism but criticized Dirac's artificial interpretation of it, which involved a "hypothetical world" initially (before collisions occur in the real world) empty of photons and filled up with positrons (to dry out the sea).
In 1946 Dirac realized that his new equations allowed a finite nonperturbative solution; in addition, they could be connected with the regular formalism (with only positive probabilities) by a change of representation, that is, a unitary transformation in Hilbert space. Although not able to explicate this transformation (which presumably would reintroduce infinities), Dirac concluded in 1946 that the difficulties of quantum electrodynamics were purely mathematical. During the next year other theorists realized that the difficulties were connected instead with a proper definition of physical parameters like charge and mass. Nonetheless, the indefinite metric proved to be indispensable in quantum field theory for another reason: a covariant quantization of Maxwell's field requires the introduction of (unobservable) states of negative probability. In the 1960's several theorists, including Heisenberg, also developed Dirac's idea of a finite quantum electrodynamics with indefinite metrics.
Relativistic Ether, Strings. Developed by other physicists in 1947, renormalization, a way to absorb infinities in a proper redefinition of mass and charge, allowed very successful calculations of higher-order corrections to atomic and electrodynamical processes. From this resulted the best numerical agreement ever encountered between a fundamental theory and experiment. Always more concerned with internal beauty than with the experimental verdict, Dirac called it a "fluke" and kept searching for a closed quantum electrodynamics purged of infinities at every stage of calculation. His point of view quickly became heterodox as more and more theorists thought that quantum electrodynamics did not have to exist by itself, but only as a part of a more general theory encompassing other types of interactions. As if to stress his originality, Dirac did not show any interest in the growing but messy field of nuclear and particle physics.
Some of Dirac's late attempts at a new quantum electrodynamics brought fundamentally new ideas. For instance, in 1951 he resurrected the ether, arguing that quantum theory allowed a Lorentz-invariant notion of ether for which all drift speeds at a given point of space-time are equiprobable, in analogy with the S states of the hydrogen atom, which are invariant under rotation although the underlying classical model is not. The idea had come to him after the proposal of a new electrodynamics for which the potential is restricted by \(A_\mu A^\mu = k^2\), which suggests a natural ether velocity \(v_\mu = k^{-1}A_\mu\), even in the absence of matter.
In 1955 Dirac proposed strings as the basic representation of quantum electrodynamics, a photon corresponding to a closed string and an electron corresponding to the extremity of an open string. Originally suggested by a manifestly gauge-invariant formulation of quantum electrodynamics in which the electron is explicitly dragging an electromagnetic field with it, this picture “made inconceivable the things we do not want to have,” for instance, a physically meaningless “bare” electron.
The Lagrangian in Quantum Mechanics. None of the above-mentioned attempts questioned the basic frame of quantum mechanics that Dirac had established in his younger years. But all through his scientific career he looked for alternative or more general formulations of quantum mechanics that might be more suitable for relativistic applications. Some of the products of this kind of exploration proved to be of essential importance. For instance, in 1933, exploiting a relation discovered by Jordan between quantum canonical transformations and the corresponding classical generating functions, he found that the transformation \((q_{t+T}/q_t)\) from q taken at time t to q taken at time t + T "corresponded" to \(\exp\bigl(\tfrac{i}{\hbar}\int_t^{t+T} L\,dt\bigr)\), where L denotes the Lagrangian evaluated along the classical motion q(t) between \(q_t\) and \(q_{t+T}\). In the same paper he introduced the "generalized transformation functions," substituting the covariant notion of spacelike surfaces of measurement for the usual hyperplanes t = constant in four-dimensional space. The remark about the Lagrangian, generalized by Dirac himself in 1945 to provide the amplitude of probability of a trajectory, inspired Richard Feynman in his discovery of the "Feynman integrals," now the most efficient method of quantization. The "generalized transformation function" was adopted by the Japanese school to suggest, in combination with Dirac's multitime theory, Tomonaga's manifestly covariant formulation of quantum electrodynamics (1943).
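In modern notation the Lagrangian correspondence noted above reads
\[
\left(q_{t+T}\,\middle|\,q_t\right) \;\sim\; \exp\!\left(\frac{i}{\hbar}\int_t^{t+T} L\bigl(q,\dot q\bigr)\,dt\right),
\]
with the action evaluated along the classical path joining the endpoints; Feynman's path integral later turned this proportionality into an exact expression by summing such exponentials over all paths between \(q_t\) and \(q_{t+T}\).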
The Role of Mathematics. Dirac believed in a "mathematical quality of nature." In the ideal physical theory, the whole of the description of the universe would have its mathematical counterpart. Conversely, he claimed around 1924, at one of Baker's tea parties, that any really interesting mathematical theory should find an application in the physical world. After sufficient progress, the field of mathematics would be purified and reduced to applied mathematics, that is, theoretical physics. The foundation of this belief in an asymptotic convergence of mathematics and physics is not easy to trace in Dirac's writings, since he generally avoided philosophical discussion. What could be said mathematically was clear enough to him, and he did not require, as most philosopher-physicists would, a recourse to common language to improve understanding. When circumstances compelled him to make epistemological statements—for instance, in the foreword to his Principles—he simply borrowed them from physicist-philosophers who were "right by definition": Bohr and Eddington. From both these masters he took the rejection of mental pictures in space-time of the old physics. From Eddington he had the idea of a "nonpicturable substratum" and the recognition, through the development and justification of transformation theories, of "the part played by the observer in himself introducing the regularity that appears in his observations, and the lack of arbitrariness in the ways of nature."
It is doubtful that Dirac regarded these statements as really meaningful. When expressing his personal feelings on the role of mathematics, his leitmotiv was the idea of "mathematical beauty." For him the main reason for the successful appearance of groups of transformations in modern theories was their mathematical beauty, something no more subject to definition than beauty in art, but obvious to the connoisseur. In this perspective the mathematical quality of nature could be just the expression of its beauty. More significantly, Dirac's requirement of beauty materialized into a methodology: one had first to select the most beautiful mathematics and then, following Eddington's "principle of identification," try to connect it to the physical world.
To implement the first stage of this methodology, a more definite notion of beauty is needed. Dirac constantly refers to the museum of his early beautiful mathematical experiences. First comes the magic of projective geometry, exemplifying the power to find surprising relations between picturable mathematical objects through simple, invisible manipulations. Then follows general relativity with the appearance of symmetry transformations, and tensor calculus perceived as a symphony of symbols. At the moment of its introduction, beauty excludes rigor. Exact mathematical meaning comes after a heuristic symbolic stage, as in the introduction of the δ-function or of the q-numbers. It is less difficult, according to Dirac, to find beautiful mathematics than to interpret it in physical terms. Here is perhaps the most creative part of his work, invoking subtle analogies and correspondence with older bits of theories.
On the whole, Dirac’s method sounds highly a priori, but he occasionally insisted on the necessity of a proper balance between inductive and deductive methods. A more detailed analysis would also show that where he was the most successful, he always remained securely tied to the empirically solid parts of existing theories.
I. Original Works. A list of Dirac's publications is in the biography by Dalitz and Peierls (see below). His main works are "The Fundamental Equations of Quantum Mechanics," in Proceedings of the Royal Society of London, A109 (1925), 642–653; "On the Theory of Quantum Mechanics," ibid., A112 (1926), 661–677; "The Physical Interpretation of the Quantum Dynamics," ibid., A113 (1927), 621–641; "The Quantum Theory of Emission and Absorption of Radiation," ibid., A114 (1927), 243–265; "The Quantum Theory of the Electron," ibid., A117 (1928), 610–624; The Principles of Quantum Mechanics (Oxford, 1930); "A Theory of Electrons and Protons," in Proceedings of the Royal Society of London, A126 (1930), 360–365; "Quantized Singularities in the Electromagnetic Field," ibid., A133 (1931), 60–72; and "The Cosmological Constants," in Nature, 139 (1937), 323. Nontechnical writings include "The Relation Between Mathematics and Physics," in Royal Society of Edinburgh, Proceedings, 59 (1939), 122–129; "The Evolution of the Physicist's Picture of Nature," in Scientific American, 208, no. 5 (1963), 45–53; The Development of Quantum Theory (New York, 1971); and "Recollections of an Exciting Era," in Charles Weiner, ed., History of Twentieth-Century Physics (New York, 1977), 109–146.
Some of Dirac’s papers have been deposited at the Churchill College Archive, Cambridge. Photocopies of manuscripts and letters, and an interview by Thomas S. Kuhn, are available in the Archive for the History of Quantum Physics (Berkeley, Copenhagen, London, New York, Rome).
II. Secondary Literature. Joan Bromberg, "The Concept of Particle Creation Before and After Quantum Mechanics," in Historical Studies in the Physical Sciences, 7 (1976), 161–183, and "Dirac's Quantum Electrodynamics and the Wave-Particle Equivalence," in Charles Weiner, ed., History of Twentieth-Century Physics (New York, 1977), 147–157; Hendrik Casimir, "Paul Dirac, 1902–1984," in Naturwissenschaftliche Rundschau, 38 (1985), 219–223; R. H. Dalitz and Sir Rudolf Peierls, "Paul Adrien Maurice Dirac," in Biographical Memoirs of Fellows of the Royal Society, 32 (1986), 137–185; Olivier Darrigol, "La genèse du concept de champ quantique," in Annales de physique, 9 (1984), 433–501, and "The Origins of Quantized Matter Waves," in Historical Studies in the Physical and Biological Sciences, 16, no. 2 (1986), 197–253; Michelangelo de Maria and Francesco La Teana, "Schrödinger's and Dirac's Unorthodoxy in Quantum Mechanics," in Fundamenta scientiae, 3 (1982), 129–148; Norwood R. Hanson, The Concept of the Positron (Cambridge, 1963); Max Jammer, The Conceptual Development of Quantum Mechanics (New York, 1966); Helge Kragh, "The Genesis of Dirac's Relativistic Theory of Electrons," in Archive for the History of Exact Sciences, 24 (1981), 31–67, "The Concept of the Monopole," in Studies in History and Philosophy of Science, 12 (1981), 141–172, "Cosmo-physics in the Thirties: Towards a History of Dirac's Cosmology," in Historical Studies in the Physical Sciences, 13, no. 1 (1982), 60–108, and, as editor, Methodology and Philosophy of Science in Paul Dirac's Physics, University of Roskilde text no. 27 (Roskilde, 1979); B. N. Kursunoglu and E. P. Wigner, eds., Reminiscences About a Great Physicist (Cambridge, 1987); Jagdish Mehra and Helmut Rechenberg, The Historical Development of Quantum Theory, IV, The Fundamental Equations of Quantum Mechanics (New York, 1982); Donald F. Moyer, "Origins of Dirac's Electron, 1925–1928," in American Journal of Physics, 49 (1981), 944–949, "Evaluation of Dirac's Electron," ibid., 1055–1062, and "Vindication of Dirac's Electron," ibid., 1120–1135; Abdus Salam and Eugene P. Wigner, eds., Aspects of Quantum Theory (Cambridge, 1972); and J. G. Taylor, ed., Tribute to Paul Dirac (Bristol, 1987).
Olivier Darrigol
Paul Adrien Maurice Dirac
The English physicist Paul Adrien Maurice Dirac (1902-1984) formulated a most general type of quantum mechanics and a relativistic wave equation for the electron which led to the prediction of positive electrons, the first known forms of antimatter.
Paul Adrien Maurice Dirac was born on Aug. 8, 1902, at Monk Royal in Bristol, England, the son of Charles Adrien Ladislas Dirac and Florence Hannah Holten Dirac. Paul received his secondary education at the old Merchant Venturers' College and, at the age of 16, entered Bristol University. He graduated 3 years later in electrical engineering. Unable to find employment, he studied mathematics for 2 years before moving to Cambridge as a research student and recipient of an 1851 Exhibition scholarship award. His student years (1923-1926) at Cambridge saw the emergence of the mathematical formulation of modern atomic physics in the hands of Louis de Broglie, Werner Heisenberg, Erwin Schrödinger, and Max Born. It was therefore natural that Dirac's attention should turn to a cultivation of mathematics most directly concerned with atomic physics.
Negative Kinetic Energy
Dirac's first remarkable contribution along these lines came before he earned his doctorate in 1926. In his paper "The Fundamental Equations of Quantum Mechanics" (1925), Dirac decided to extricate the fundamental point in Heisenberg's now famous paper. Before Heisenberg, computation of energy levels of optical and x-ray spectra consisted in a somewhat empirical extension of rules provided by Niels Bohr's theory of the atom. Heisenberg succeeded in grouping terms connected with energy levels in columns forming large squares and also indicated the marvelously simple ways in which any desired energy level could be readily calculated. Dirac found that what Heisenberg really wanted to achieve consisted in a most general type of operation on a "quantum variable" x which was done by "taking the difference of its Heisenberg products with some other quantum variable."
At that time neither Heisenberg nor Dirac had realized that the "Heisenberg products" corresponded to operations in matrix calculus, a fact which was meanwhile being proved by Born and Pascual Jordan in Göttingen. They showed that the noncommutative multiplication of the "Heisenberg quantities" could be summed up in the formula \((p \times q) - (q \times p) = h/2\pi i\), where h is Planck's constant and p and q some canonically conjugate variables. Independently of them, Dirac also obtained the same formula, but through a more fundamental approach to the problem. Dirac's crucial insight consisted in finding that a very simple operation formed the basis of the formula in question. What had to be done was to calculate the value of the classical Poisson bracket [p, q] for p and q and multiply it by a modified form of Planck's constant.
That such a procedure yielded the proper values to be assigned to the difference of p X q and q X p was only one aspect of the success. The procedure also provided an outstanding justification of the principle of correspondence, tying into one logical whole the classical and modern aspects of physics. Dirac once remarked that the moment of that insight represented perhaps the most enthralling experience in his life.
But the most startling result of Dirac's equation for the electron was the recognition of the possibility of negative kinetic energy. In other words, his equations implied for the electron an entirely novel type of motion whereby energy had to be put into the electron in order to bring it to rest. The novelty was both conceptual and experimental and received a remarkably quick elucidation.
The experimental clarification came when C. D. Anderson, doing cosmic-ray research in R. A. Millikan's laboratory in Pasadena, Calif., obtained on Aug. 2, 1932, the photograph of an electron path, the curvature of which could be accounted for only if the electron had a positive charge. The positively charged electron, or positron, was, however, still unconnected with the negative energy states implied in Dirac's theory of the electron. The work needed in this respect was largely done by Dirac, though not without some promptings from others. A most lucid summary of the results was given by Dirac in the lecture which he delivered on Dec. 12, 1933, in Stockholm, when he received the Nobel Prize in physics jointly with Schrödinger.
World of Antimatter
The most startling consequences of Dirac's theory of the electron consisted in the opening up of the world of antimatter. Clearly, if negative electrons had their counterparts in positrons, it was natural to assume that protons had their counterparts as well. Here Dirac argued on the basis of the perfect symmetry that according to him had to prevail in nature. As a matter of fact, it was a lack of symmetry in Schrödinger's equation for the electron that Dirac tried to remedy by giving it a form satisfactory from the viewpoint of relativity.
All this should forcefully indicate that Dirac was a thinker of most powerful penetration who reached the most tangible conclusions from carrying to their logical extremes some utterly abstract principles and postulates. Thus by postulating the identity of all electrons, he was able to show that they had to obey one specific statistics. This fact in turn provided the long-sought clue for the particular features of the conduction of electricity in metals, a problem with which late classical physics and early quantum theory grappled in vain. This attainment of Dirac paralleled a similar, though less fundamental, work by Enrico Fermi, so that the statistics is now known as the Fermi-Dirac statistics.
This contribution of Dirac came during a marvelously creative period in his life, from 1925 to 1930. Its crowning conclusion was the publication of his Principles of Quantum Mechanics, a work still unsurpassed for its logical compactness and boldness. The latter quality is clearly motivated by Dirac's unlimited faith in the mathematical structuring of nature. The book is indeed a monument to his confidence that future developments will provide the exact physical counterparts that some of his mathematical symbols still lack.
A telling measure of Dirac's main achievements in physics was the recognition that greeted his work immediately. In 1932 he was elected a fellow of the Royal Society and given the most prestigious post in British science, the Lucasian chair of mathematics at Cambridge. He received the Royal Society's Royal Medal in 1939 and its Copley Medal in 1952. He was a member of many academies, held numerous honorary degrees, and was a guest lecturer in universities all over the world. He married Margaret Wigner, sister of Nobel laureate Eugene P. Wigner, in 1937.
The second half of Dirac's working life was occupied mainly with cosmology and the subject of "large numbers," or numbers with cosmic significance. In 1972, he accepted a post as professor of physics at Florida State University, and he continued there until his death in Tallahassee on October 20, 1984.
Further Reading
Humorous details on Dirac's life can be found in George Gamow, Biography of Physics (1961), together with a not too technical discussion of Dirac's theory of holes. See also Niels H. de V. Heathcote, Nobel Prize Winners in Physics, 1901-1950 (1954). For a rigorous account of Dirac's role in quantum mechanics, the standard work is Max Jammer, The Conceptual Development of Quantum Mechanics (1966). Background works which discuss Dirac include James Jeans, Physics and Philosophy (1942), and Barbara Lovett Cline, The Questioners: Physicists and the Quantum Theory (1965).
Additional Sources
Dirac, Paul, The Principles of Quantum Mechanics, Clarendon Press, 1930.
Dirac, Paul, Spinors in Hilbert Space, University of Miami Center for Theoretical Studies, 1974.
Dirac, Paul, General Theory of Relativity, Wiley, 1975.
Kursunoglu, Behram N., and Eugene P. Wigner, eds., Reminiscences About a Great Physicist: Paul Adrien Maurice Dirac, Cambridge University Press, 1987.
Dirac, Paul Adrien Maurice
Dirac, Paul Adrien Maurice (1902–84) English physicist. He made valuable contributions to the development of quantum theory. In 1926–27 Dirac introduced a general formulation of quantum equations that combined Schrödinger's use of differential calculus with Heisenberg's use of matrices. In 1928 he applied Einstein's theory of relativity to quantum mechanics in order to describe the spin of an electron. The resultant equation predicted the existence of antimatter. Dirac shared the 1933 Nobel Prize in physics with Schrödinger.
dad53d267955c555 | Sunday, June 28, 2015
Thinking outside the quantum box
Doctor: Don't tell me you're lost too.
Shardovan: No, but as you guessed, Doctor, we people of Castrovalva are too much part of this thing you call the occlusion.
Doctor: But you do see it, the spatial anomaly.
Shardovan: With my eyes, no — but, in my philosophy.
— Doctor Who, Castrovalva.
I've made no particular secret, on this blog, that I'm looking (in an adventuresome sort of way) for alternatives to quantum theory. So far, though, I've mostly gone about it rather indirectly, fishing around the edges of the theory for possible angles of attack without ever engaging the theory on its home turf. In this post I'm going to shave things just a bit closer — fishing still, but doing so within line-of-sight of the NO FISHING sign. I'm also going to explain why I'm being so indirect, which bears on what sort of fish I think most likely here.
To remind, in previous posts I've mentioned two reasons for looking for an alternative to quantum theory. Both reasons are indirect, considering quantum theory in the larger context of other theories of physics. First, I reasoned that when a succession of theories are getting successively more complicated, this suggests some wrong assumption may be shared by all of them (here). Later I observed that quantum physics and relativity are philosophically disparate from each other (here), a disparity that has been an important motivator for TOE (Theory of Everything) physicists for decades.
The earlier post looked at a few very minor bits of math, just enough to derive Bell's Inequality, but my goal was only to point out that a certain broad strategy could, in a sense, sidestep the nondeterminism and nonlocality of quantum theory. I made no pretense of assembling a full-blown replacement for standard quantum theory based on the strategy (though some researchers are attempting to do so, I believe, under the banner of the transactional interpretation). In the later post I was even less concrete, with no equations at all.
The quantum meme
How to fish
Why to fish
Hygiene again
The structure of quantum math
The structure of reality
The quantum meme
Why fish for alternatives away from the heart of the quantum math? Aside, that is, from the fact that any answers to be found in the heart of the math already have, presumably, plenty of eyeballs looking there for them. If the answer is to be found there after all, there's no lasting harm to the field in someone looking elsewhere; indeed, those who looked elsewhere can cheerfully write off their investment knowing they played their part in covering the bases — if it was at least reasonable to cover those bases. But going into that investigation, one wants to choose an elsewhere that's a plausible place to look.
Supposing quantum theory can be successfully challenged, I suggest it's quite plausible the successful challenge might not be found by direct assault (even though eventual confrontation would presumably occur, if it were really successful). Consider Thomas Kuhn's account of how science progresses. In normal science, researchers work within a paradigm, focusing their energies on problems within the paradigm's framework and thereby making, hopefully, rapid progress on those problems because they're not distracting themselves with broader questions. Eventually, he says, this focused investigation within the paradigm highlights shortcomings of the paradigm so they become impossible to ignore, researchers have a crisis of confidence in the paradigm, and after a period of distress to those within the field, a new paradigm emerges, through the process he calls a scientific revolution. I've advocated a biological interpretation of this, in which sciences are a variety of memetic organisms, and scientific revolution is the organisms' reproductive process. But if this is so, then scientific paradigms are being selected by Darwinian evolution. What are they being selected for?
Well, the success of science hinges on paradigms being selected for how effectively they allow us to understand reality. Science is a force to be reckoned with because our paradigms have evolved to be very good at helping us understand reality. That's why the scientific species has evolved mechanisms that promote empirical testing: in the long run, if you promote empirical testing and pass that trait on to your descendants, your descendants will be more effective, and therefore thrive. So far so good.
In theory, one could imagine that eventually a paradigm would come along so consistent with physical reality, and with such explanatory power, that it would never break down and need replacing. In theory. However, there's another scenario where a paradigm could get very difficult to break down. Suppose a paradigm offers the only available way to reason about a class of situations; and within that class of situations are some "chinks in the armor", that is, some considerations whose study could lead to a breakdown of the paradigm; but the only way to apply the paradigm is to frame things in a way that prevents the practitioner from thinking of the chinks-in-the-armor. The paradigm would thus protect itself from empirical attack, not by being more explanatory, but by selectively preventing empirical questions from being asked.
What characteristics might we expect such a paradigm to have, and would they be heritable? Advanced math that appears unavoidable would seem a likely part of such a complex. If learning the subject requires indoctrination in the advanced math, then whatever that math is doing to limit your thinking will be reliably done to everyone in the field; and if any replacement paradigm can only be developed by someone who's undergone the indoctrination, that will favor passing on the trait to descendant paradigms. General relativity and quantum theory both seem to exhibit some degree of this characteristic. But while advanced math may be an enabler, it might not be enough in itself. A more directly effective measure, likely to be enabled by a suitable base of advanced math, might be to make it explicitly impossible to ask any question without first framing the question in the form prescribed by the paradigm — as quantum theory does.
This suggests to me that the mathematical details of quantum theory may be a sort of tarpit, that pulls you in and prevents you from leaving. I'm therefore trying to look at things from lots of different perspectives in the general area without ever getting quite so close as to be pulled in. Eventually I'll have to move further and further in; but the more outside ideas I've tied lines to before then, the better I'll be able to pull myself out again.
How to fish
What I'm hoping to get out of this fishing expedition is new ideas, new ways of thinking about the problem. That's ideas, plural. It's not likely the first new idea one comes up with will be the key to unlocking all the mysteries of the universe. It's not even likely that just one new idea would ever do it. One might need a lot of new ideas, many of which wouldn't actually be part of a solution — but the whole collection of them, including all the ones not finally used, helps to get a sense of the overall landscape of possibilities, which may help in turning up yet more new ideas inspired from earlier ones, and indeed may make it easier to recognize when one actually does strike on some combination of ideas that produce a useful theory.
Hence my remark, in an aside in an earlier post, that I'm okay with absurd as long as it's different and shakes up my thinking.
Case in point. In the early 1500s, there was this highly arrogant and abrasive iconoclastic fellow who styled himself Philippus Aureolus Theophrastus Bombastus von Hohenheim; ostensibly our word "bombastic" comes from his name. He rejected the prevailing medical paradigm of his day, which was based on ancient texts, and asserted his superiority to the then-highly-respected ancient physician Celsus by calling himself "Paracelsus", which is the name you've probably heard of him under. He also shook up alchemical theory; but I mention him here for his medical ideas. Having rejected the prevailing paradigm, he was rather in the market for alternatives. He advocated observing nature, an idea that really began to take off after he shook things up. He advocated keeping wounds clean instead of applying cow dung to them, which seems a good idea. He proposed that disease is caused by some external agent getting into the body, rather than by an imbalance of humours, which sounds rather clever of him. But I'm particularly interested that he also, grasping for alternatives to the prevailing paradigm, borrowed from folk medicine the principle of like affects like. Admittedly, you couldn't do much worse than some of the prevailing practices of the day. But I'm fascinated by his latching on to like-effects-like, because it demonstrates how bits of replicative material may be pulled in from almost anywhere when trying to form a new paradigm. Having seen that, it figured later into my ideas on memetic organisms.
It also, along the way, flags out the existence of a really radically different way of picturing the structure of reality. Like-affects-like is a wildly different way of thinking, and therefore ought to be a great limbering-up exercise.
In fact, like-affects-like is, I gather, the principle underlying the anthropological phenomenon of magic — sympathetic magic, it's called. I somewhat recall an anthropologist expounding at length (alas, I wish I could remember where) that anthropologically this can be understood as the principle underlying all magic. So I got to thinking, what sort of mathematical framework might one use for this sort of thing? I haven't resolved a specific answer for the math framework, yet; but I've tried to at least set my thoughts in order.
What I'm interested in here is the mathematical and thus scientific utility of the like-affects-like principle, not its manifestation in the anthropological phenomenon of magic (as Richard Cavendish observed, "The religious impulse is to worship, the scientific to explain, the magical to dominate and command"). Yet the term "like affects like" is both awkward and vague; so I use the term sympathy for discussing it from a mathematical or scientific perspective.
How might a rigorous model of this work, structurally? Taking a stab at it, one might have objects, each capable of taking on characteristics with a potentially complex structure, and patterns which can arise in the characteristics of the objects. Interactions between the objects occur when the objects share a pattern. The characteristics of objects might be dispensed with entirely, retaining only the patterns, provided one specifies the structure of the range of possible patterns (perhaps a lattice of patterns?). There may be a notion of degrees of similarity of patterns, giving rise to varying degrees of interaction. This raises the question of whether one ought to treat similar patterns as sharing some sort of higher-level pattern and themselves interacting sympathetically. More radically, one might ask whether an object is merely an intersection of patterns, in which case one might aspire to — in some sense — dispense with the objects entirely, and have only a sort of web of patterns. Evidently, the whole thing hinges on figuring out what patterns are and how they relate to each other, then setting up interactions on that basis.
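A toy sketch may make the structure concrete (this is purely my own illustration, with hypothetical names, not a proposal for the actual mathematical framework): patterns as sets of features, objects as bundles of patterns, and interaction strength growing with the degree to which two objects carry similar patterns.

```python
from dataclasses import dataclass, field
from itertools import product

# A "pattern" is modeled, crudely, as a frozen set of features.
Pattern = frozenset

@dataclass
class Obj:
    """An object is treated as nothing but the patterns arising in its characteristics."""
    name: str
    patterns: set = field(default_factory=set)

def similarity(p, q) -> float:
    """Degree of likeness between two patterns (Jaccard overlap here,
    though any graded notion of 'sharing a pattern' would serve)."""
    if not p and not q:
        return 0.0
    return len(p & q) / len(p | q)

def interaction(a: Obj, b: Obj) -> float:
    """Sympathetic interaction: objects affect one another to the extent
    that they carry similar patterns."""
    return max(
        (similarity(p, q) for p, q in product(a.patterns, b.patterns)),
        default=0.0,
    )

# Two objects sharing part of a pattern interact partially; unrelated ones not at all.
doll = Obj("wax doll", {Pattern({"human-shaped", "contains hair of X"})})
victim = Obj("X", {Pattern({"human-shaped", "is X", "contains hair of X"})})
stone = Obj("stone", {Pattern({"mineral"})})
print(interaction(doll, victim))  # 0.666...
print(interaction(doll, stone))   # 0.0
```

Nothing here really depends on the objects themselves; one could drop Obj and keep only a web of patterns with pairwise similarities, which is the more radical option mentioned above.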
I distinguish between three types of sympathy:
• Pseudo-sympathy (type 0). The phenomenon can be understood without recourse to the sympathetic principle, but it may be convenient to use sympathy as a way of modeling it.
• Weak sympathy (type 1). The phenomenon may in theory arise from a non-sympathetic reality, but in practice there's no way to understand it without recourse to sympathy.
• Strong sympathy (type 2). The phenomenon cannot, even in theory, arise from a non-sympathetic reality.
All of which gives, at least, a lower bound on how far outside the box one might think. One doesn't have to apply the sympathetic principle in a theory, in order to benefit from the reminder to keep one's thinking limber.
(It is, btw, entirely possible to imagine a metric space of patterns, in which degree of similarity between patterns becomes distance between patterns, and one slides back into a geometrical model after all. To whatever extent the merit of the sympathetic model is in its different way of thinking, to that extent one ought to avoid setting up a metric space of patterns, as such.)
Why to fish
Asking questions is, broadly speaking, good. A line comes to mind from James Gleick's biography of Feynman (quoted favorably by Freeman Dyson): "He believed in the primacy of doubt, not as a blemish upon our ability to know but as the essence of knowing." Nevertheless, one does have to pick and choose which questions are worth spending most effort on; as mentioned above, the narrow focus of normal scientific research enables its often-rapid progress. I've been grounding my questions about quantum mechanics in observations about the character of the theory in relation to other theories of physics.
By contrast, one could choose to ground one's questions in reasoning about what sort of features reality can plausibly have. Einstein did this when maintaining that the quantum theory was an incomplete theory of the physical world — that it was missing some piece of reality. An example he cited is the Schrödinger's cat thought-experiment: Until observed, a quantum system can exist in a superposition of states. So, set up an experiment in which a quantum event is magnified into a macroscopic event — through a detector, the outcome of the quantum event causes a device to either kill or not kill a cat. Put the whole experimental apparatus, including the cat, in a box and close it so the outcome cannot be observed. Until you open the box, the cat is in a superposition of states, both alive and dead. Einstein reasoned that since the quantum theory alone would lead to this conclusion, there must be something more to reality that would disallow this superposition of cat.
The trouble with using this sort of reasoning to justify a line of research is, all it takes to undermine the justification is to say there's no reason reality can't be that strange.
Hence my preference for motivations based on the character of the theory, rather than the plausibility of the reality it depicts. My reasoning is still subjective — which is fine, since I'm motivating asking a question, not accepting an answer — but at least the reasoning is then not based on intuition about the nature of reality. Intuition specifically about physical reality could be right, of course, but has gotten a bad reputation — as part of the necessary process by which the quantum paradigm has secured its ecological niche — so it's better in this case to base intuition on some other criterion.
Hygiene again
To make sure I'm fully girded for battle — this is rough stuff, one can't be too well armed for it — I want to revisit some ideas I collected in earlier blog posts, and squeeze just a bit more out of them than I did before.
My previous thought relating explicitly to Theories of Everything was that, drawing an analogy with vau-calculi, spacetime geometry should perhaps be viewed not as a playing field on which all action occurs, but rather as a hygiene condition on the interactions that make up the universe. This analogy can be refined further. The role of variables in vau-calculi is coordinating causal connections between distant parts of the term. There are four kinds of variables, but unboundedly many actual variables of each kind; and α-renaming keeps these actual variables from bleeding into each other. A particular variable, though we may think of it as a very simple thing — a syntactic atom, in fact — is perhaps better understood as a distributed, complex-structured entity woven throughout the fabric of a branch of the term's syntax tree, humming with the dynamically maintained hygiene condition that keeps it separate from other variables. It may impinge on a large part of the α-renaming infrastructure, but most of its complex distributed structure is separate from the hygiene condition. The information content of the term is largely made up of these complex, distributed entities, with various local syntactic details decorating the syntax tree and regulating the rewriting actions that shape the evolution of the term. Various rewriting actions cause propagation across one (or perhaps more than one) of these distributed entities — and it doesn't actually matter how many rewriting steps are involved in this propagation, as for example even the substitution operations could be handled by gradually distributing information across a branch of the syntax tree via some sort of "sinking" structure, mirror to the binding structures that "rise" through the tree.
Projecting some of this, cautiously, through the analogy to physics, we find ourselves envisioning a structure of reality in which spacetime is a hygiene condition on interwoven, sprawling complex entities that impinge on spacetime but are not "inside" it; whose distinctness from each other is maintained by the hygiene condition; and whose evolution we expect to describe by actions in a dimension orthogonal to spacetime. The last part of which is interestingly suggestive of my other previous post on physics, where I noted, with mathematical details sufficient to make the point, that while quantum physics is evidently nondeterministic and nonlocal as judged relative to the time dimension, one can recover determinism and locality relative to an orthogonal dimension of "meta-time" across which spacetime evolves.
One might well ask why this hygiene condition in physics should take the form of a spacetime geometry that, at least at an intermediate scale, approximates a Euclidean geometry of three space and one time dimension. I have a thought on this, drawing from another of my irons in the fire; enough, perhaps, to move thinking forward on the question. This 3+1 dimension structure is apparently that of quaternions. And quaternions are, so at least I suspect (I've been working on a blog post exploring this point), the essence of rotation. So perhaps we should think of our hygiene condition as some sort of rotational constraint, and the structure of spacetime follows from that.
I also touched on Theories of Everything in a recent post while exploring the notion that nature is neither discrete nor continuous but something between (here). If there is a balance going on between discrete and continuous facets of physical worldview, apparently the introduction of discrete elementary particles is not, in itself, enough discreteness to counterbalance the continuous feature provided by the wave functions of these particles, and the additional feature of wave-function collapse or the like is needed to even things out. One might ask whether the additional discreteness associated with wave-function collapse could be obviated by backing off somewhat on the continuous side. The uncertainty principle already suggests that the classical view of particles in continuous spacetime — which underlies the continuous wave function (more about that below) — is an over-specification; the need for additional balancing discreteness might be another consequence of the same over-specification.
Interestingly, variables in λ-like calculi are also over-specified: that's why there's a need for α-renaming in the first place, because the particular name chosen for a variable is arbitrary as long as it maintains its identity relative to other variables in the term. And α-renaming is the hygiene device analogized to geometry in physics. Raising the prospect that to eliminate this over-specification might also eliminate the analogy, or make it much harder to pin down. There is, of course, Curry's combinatorial calculus which has no variables at all; personally I find Church's variable-using approach easier to read. Tracing that through the analogy, one might conjecture the possibility of constructing a Theory of Everything that didn't need the awkward additional discreteness, by eliminating the distributed entities whose separateness from each other is maintained by the geometrical hygiene condition, thus eliminating the geometry itself in the process. Following the analogy, one would expect this alternative description of physical reality to be harder to understand than conventional physics. Frankly I have no trouble believing that a physics without geometry would be harder to understand.
The idea that quantum theory as a model of reality might suffer from having had too much put into it, does offer a curious counterpoint to Einstein's suggestion that quantum theory is missing some essential piece of reality.
The structure of quantum math
The structure of the math of quantum theory is actually pretty simple... if you stand back far enough. Start with a physical system. This is a small piece of reality that we are choosing to study. Classically, it's a finite set of elementary things described by a set of parameters. Hamilton (yes, that's the same guy who discovered quaternions) proposed to describe the whole behavior of such a system by a single function, since called a Hamiltonian function, which acts on the parameters describing the instantaneous state of the system together with parameters describing the abstract momentum of each state parameter (essentially, how the parameters change with respect to time). So the Hamiltonian is basically an embodiment of the whole classical dynamics of the system, treated as a lump rather than being broken into separate descriptions of the individual parts of the system. Since quantum theory doesn't "do" separate parts, instead expecting everything to affect everything else, it figures the Hamiltonian approach would be particularly compatible with the quantum worldview. Nevertheless, in the classical case it's still possible to consider the parts separately. For a system with a bunch of parts, the number of parameters to the Hamiltonian will be quite large (typically, at least six times the number of parts — three coordinates for position and three for momentum of each part).
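For readers who want the formula behind the prose, the standard classical setup (nothing here is specific to this post) is: for a system of n particles with positions q and momenta p,
\[
H(q,p) \;=\; \sum_{k=1}^{n} \frac{|\mathbf{p}_k|^2}{2 m_k} + V(q),
\qquad
\dot{q}^i = \frac{\partial H}{\partial p_i}, \quad \dot{p}_i = -\frac{\partial H}{\partial q^i},
\]
so the single function H, evaluated on the 6n position and momentum parameters, fixes the entire classical evolution.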
Now, the quantum state of the system is described by a vector over a complex Hilbert space of, typically, infinite dimension. Wait, what? Yes, that's an infinite number of complex numbers. In fact, it might be an uncountably infinite number of complex numbers. Before you completely freak out over this, it's only fair to point out that if you have a real-valued field over three-dimensional space, that's an uncountably infinite number of real numbers (the number of locations in three-space being uncountably infinite). Still, the very fact that you're putting this thing in a Hilbert space, which is to say you're not asking for any particular kind of simple structure relating the different quantities, such as a three-dimensional Euclidean continuum, is kind of alarming. Rather than a smooth geometric structure, this is a deliberately disorganized mess, and honestly I don't think it's unfair to wish there were some more coherent reality "underneath" that gives rise to this infinite structure. Indeed, one might suspect this is a major motive for wanting a hidden variable theory — not wishing for determinism, or wishing for locality, but just wishing for a simpler model of what's going on. David Bohm's hidden variable theory, although it did show one could recover determinism with actual classical particles "underneath", did so without simplifying the mathematics — the mathematical structure of the quantum state was still there, just given a makeover as a potential field. In my earlier account of this bit of history, I noted that Einstein, seeing Bohm's theory, remarked, "This is not at all what I had in mind." I implied that Einstein didn't like Bohm's theory because it was nonlocal; but one might also object that Bohm's theory doesn't offer a simpler underlying reality, rather a more complicated one.
The elements of the vector over Hilbert space are observable classical states of the system; so this vector is indexed by, essentially, the sets of possible inputs to the Hamiltonian. One can see how, step by step, we've ended up with a staggering level of complexity in our description, which we cope with by (ironically) not looking at it. By which I mean, we represent this vast amorphous expanse of information by a single letter (such as ψ), to be manipulated as if it were a single entity using operations that perform some regimented, impersonal operation on all its components that doesn't in general require it to have any overall shape. I don't by any means deride such treatments, which recover some order out of the chaos; but it's certainly not reassuring to realize how much lack of structure is hidden beneath such neat-looking formulae as the Schrödinger equation. And the amorphism beneath the elegant equations also makes it hard to imagine an alternative when looking at the specifics of the math (as suspected based on biological assessment of the evolution of physics).
The quantum situation gets its structure, and its dynamics, from the Hamiltonian, that single creature embodying the whole of the rules of classical behavior for the system. The Schrödinger equation (or whatever alternative plays its role) governs the evolution of the quantum state vector over time, and contains within it a differential operator based on the classical Hamiltonian function.
iℏ ∂Ψ/∂t = Ĥ Ψ .
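For reference, and not something spelled out in the post itself: for a single particle of mass m in a potential V, the operator Ĥ is built from the classical Hamiltonian by the substitution p → −iℏ∇, so the equation above, written out in full, reads
\[
i\hbar\,\frac{\partial \Psi}{\partial t} \;=\; \Big(-\frac{\hbar^2}{2m}\,\nabla^2 + V(q)\Big)\,\Psi .
\]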
One really wants to stop and admire this equation. It's a linear partial differential equation, which is wonderful; nonlinearity is what gives rise to chaos in the technical sense, and one would certainly rather deal with a linear system. Unfortunately, the equation only describes the evolution of the system so long as it remains a purely quantum system; the moment you open the box to see whether the cat is dead, this wave function collapses into observation of one of the classical states indexing the quantum state vector, with (to paint in broad strokes) the amplitudes of the complex numbers in the vector determining the probability distribution of observed classical states.
It also satisfies James Clerk Maxwell's General Maxim of Physical Science, which says (as recounted by Ludwik Silberstein) that when we take the derivatives of our system with respect to time, we should end up with expressions that do not themselves explicitly involve time. When this is so, the system is "complete", or, "undisturbed". (The idea here is that if the rules governing the system change over time, it's because the system is being affected by some other factor that is varying over time.)
The equation is, indeed, highly seductive. Although I'm frankly on guard against it, yet here I am, being drawn into making remarks on its properties. Back to the question of structure. This equation effectively segregates the mathematical description of the system into a classical part that drives the dynamics (the Hamiltonian), and a quantum part that smears everything together (the quantum state vector). The wave function Ψ, described by the equation, is the adapter used to plug these two disparate elements together. The moment you start contemplating the equation, this manner of segregating the description starts to seem inevitable. So, having observed these basic elements of the quantum math, let us step back again before we get stuck.
The key structural feature of the quantum description, in contrast to classical physics, is that the parts can't be considered separately. This classical separability produced the sense of simplicity that, I speculated above, could be an ulterior motive for hidden variable theories. The term for this is superposition of states, i.e., a quantum state that could collapse into any of multiple classical states, and therefore must contain all of those classical states in its description.
A different view of this is offered by so-called quantum logic. The idea here (notably embraced by physicist David Finkelstein, who I've mentioned in an earlier post because he was lead author of some papers in the 1960s on quaternion quantum theory) is that quantum theory is a logic of propositions about the physical world, differing fundamentally from classical propositional logic because of the existence of superposition as a propositional principle. There's a counterargument that this isn't really a "logic", because it doesn't describe reasoning as such, just the behavior of classical observations when applied as a filter to quantum systems; and indeed one can see that something of the sort is happening in the Schrödinger equation, above — but that would be pulling us back into the detailed math. Quantum logic, whatever it doesn't apply to, does apply to observational propositions under the regime of quantum mechanics, while remaining gratifyingly abstracted from the detailed quantum math.
Formally, in classical logic we have the distributive law
P and (Q or R) = (P and Q) or (P and R) ;
but in quantum logic, (Q or R) is superpositional in nature, saying that we can eliminate options that are neither, yet allowing more than the union of situations where one holds and situations where the other holds; and this causes the distributive law to fail. If we know P, and we know that either Q or R (but we may be fundamentally unable to determine which), this is not the same as knowing that either both P and Q, or both P and R. We aren't allowed to refactor our proposition so as to treat Q separately from R, without changing the nature of our knowledge.
[note: I've fixed the distributive law, above, which I botched and didn't even notice till, thankfully, a reader pointed it out to me. Doh!]
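To make the failure of distributivity concrete, here is the standard textbook illustration with a spin-1/2 particle (my example, not drawn from the post): let P be "spin up along z", and let Q and R be "spin up along x" and "spin down along x". Then
\[
P \wedge (Q \vee R) \;=\; P \wedge \mathbf{1} \;=\; P,
\qquad
(P \wedge Q) \vee (P \wedge R) \;=\; \mathbf{0} \vee \mathbf{0} \;=\; \mathbf{0},
\]
since Q ∨ R spans the whole spin space (every spin state is a superposition of the two x-states), while no state has definite spin along both z and x, so each conjunct on the right-hand side is the impossible proposition.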
One can see in this broadly the reason why, when we shift from classical physics to quantum physics, we lose our ability to consider the underlying system as made up of elementary things. In considering each classical elementary thing, we summed up the influences on that thing from each of the other elementary things, and this sum was a small tidy set of parameters describing that one thing alone. The essence of quantum logic is that we can no longer refactor the system in order to take this sum; the one elementary thing we want to consider now has a unique relationship with each of the other elementary things in the system.
Put that way, it seems that the one elementary thing we want to consider would actually have a close personal relationship with each other elementary thing in the universe. A very large Rolodex indeed. One might object that most of those elementary things in the universe are not part of the system we are considering — but what if that's what we're doing wrong? Sometimes, a whole can be naturally decomposable into parts in one way, but when you try to decompose it into parts in a different way you end up with a complicated mess because all of your "parts" are interacting with each other. I suggested, back in my first blog post on physics, that there might be some wrong assumption shared by both classical and quantum physics; well, the idea that the universe is made up of elementary particles (or quanta, whatever you prefer to call them) is something shared by both theories. The quantum math (Schrödinger equation again, above) has this classical decomposition built into its structure, pushing us to perceive the subsequent quantum weirdness as intrinsic to reality, or perhaps intrinsic to our observation of reality — but what if it's rather intrinsic to that particular way of slicing off a piece of the universe for consideration?
The quantum folks have been insisting for years that quantum reality seems strange only because we're imposing our intuitions from the macroscopic world onto the quantum-scale world where it doesn't apply. Okay... Our notion that the universe is made up of individual things is certainly based on our macroscopic experience. What if it breaks down sooner than we thought — what if, instead of pushing the idea of individual things down to a smaller and smaller scale until they sizzle apart into a complex Hilbert space, we should instead have concluded that individual things are something of an illusion even at macroscopic scales?
The structure of reality
One likely objection is that no matter how you split up reality, you'd still have to observe it classically and the usual machinery of quantum mechanics would apply just the same. There are at least a couple of ways — two come to mind atm — for some differently shaped 'slice' of reality to elude the quantum machinery.
• The alternative slice might not be something directly observable.
Here an extreme example comes in handy (as hoped). Recall the sympathetic hypothesis, above. A pattern would not be subject to direct observation, any more than a Platonic ideal like "table" or "triangle" would be. (Actually, it seems possible a pattern would be a Platonic ideal.)
This is also reminiscent of the analogy with vau-calculus. I noted above that much of the substance of a calculus term is made up of variables, where by a variable I meant the entire dynamically interacting web delineated by a variable binding construct and all its matching variable instances. A variable in this sense isn't, so to speak, observable; one can observe a particular instance of a variable, but a variable instance is just an atom, and not particularly interesting.
• The alternative slice might be something quantum math can't practically cope with. Quantum math is very difficult to apply in practice; some simple systems can be solved, but others are intractable. (It's fashionable in some circles to assume more powerful computers will solve all math problems. I'm reminded of a quote attributed to Eugene Wigner, commenting on a large quantum calculation: "It is nice to know that the computer understands the problem. But I would like to understand it, too.") It's not inconceivable that phenomena deviating from quantum predictions are "hiding in plain sight". My own instinct is that if this were so, they probably wouldn't be just on the edge of what we can cope with mathematically, but well outside that perimeter.
This raises the possibility that quantum mechanics might be an idealized approximation, holding asymptotically in a degenerate case — in somewhat the same way that Newtonian mechanics holds approximately for macroscopic problems that don't involve very high velocities.
We have several reasons, by this point, to suspect that whatever it is we're contemplating adding to our model of reality, it's nonlocal (that is, nonlocal relative to the time dimension, as is quantum theory). On one hand, bluntly, classical physics has had its chance and not worked out; we're already conjecturing that insisting on a classical approach is what got us into the hole we're trying to get out of. On the other hand, under the analogy we're exploring with vau-calculus, we've already noted that most of the term syntax is occupied by distributed variables — which are, in a deep sense, fundamentally nonlocal. The idea of spacetime as a hygiene condition rather than a base medium seems, on the face of it, to call for some sort of nonlocality; in fact, saying reality has a substantial component that doesn't follow the contours of spacetime is evidently equivalent to saying it's nonlocal. Put that way, saying that reality can be usefully sliced in a way that defies the division into elementary particles/things is also another way of saying it's nonlocal, since when we speak of dividing reality into elementary "things", we mean, things partitioned away from each other by spacetime. So what we have here is several different views of the same sort of conjectured property of reality. Keeping in mind, multiple views of a single structure is a common and fruitful phenomenon in mathematics.
I'm inclined to doubt this nonlocality would be of the sort already present in quantum theory. Quantum nonlocality might be somehow a degenerate case of a more general principle; but, again bluntly, quantum theory too has had its chance. Moreover, it seems we may be looking for something that operates on macroscopic scales, and quantum nonlocality (entanglement) tends to break down (decohere) at these scales. This suggests the prospect of some form of robust nonlocality, in contrast to the more fragile quantum effects.
So, at this point I've got in my toolkit of ideas (not including sympathy, which seems atm quite beyond the pale, limited to the admittedly useful role of devil's advocate):
• a physical structure substantially not contained within spacetime.
• space emergent as a hygiene condition, perhaps rotation-related.
• robust nonlocality, with quantum nonlocality perhaps as an asymptotic degenerate case.
• some non-spacetime dimension over which one can recover abstract determinism/locality.
• decomposition of reality into coherent "finite slices" in some way other than into elementary things in spacetime.
• slices may be either non-observable or out of practical quantum scope.
• the structural role of the space hygiene condition may be to keep slices distinct from each other.
• conceivably an alternative decomposition of reality may allow some over-specified elements in classical descriptions to be dropped entirely from the theory, at unknown price to descriptive clarity.
I can't make up my mind if this is appallingly vague, or consolidating nicely. Perhaps both. At any rate, the next phase of this operation would seem likely to shift further along the scale toward identifying concrete structures that meet the broad criteria. In that regard, it is probably worth remarking that current paradigm physics already decomposes reality into nonlocal slices (though not in the sense suggested here): the types of elementary particles. The slices aren't in the spirit of the "finite" condition, as there are only (atm) seventeen of them for the whole of reality; and they may, perhaps, be too closely tied to spacetime geometry — but they are, in themselves, certainly nonlocal. |
90b50f50768c794b | Nicholas Saunders
Divine Action and Modern Science
Saunders, Nicholas, Divine Action and Modern Science, Cambridge University Press, 2002, 234pp, $22.00 (pbk), ISBN 0521524164.
Reviewed by Thomas Tracy , Bates College
The last twenty years have seen a remarkable renewal of interest in the relation of religion and science. One particularly difficult tangle of issues has to do with the idea, deeply rooted in the theistic traditions, that God acts in the world. What is the relation between theological depictions of the world as the scene of divine action and scientific descriptions of the world as an intelligible structure of natural law? Can God be understood to act entirely in and through the regular structures of nature or does a robust account of divine action also require the affirmation that God acts to redirect the course of events in the world, bringing about effects that would not have occurred had God not so acted? If we say the latter, are we committed to the claim that God at least sometimes performs miracles, in the familiar (if truncated) modern sense of an event caused by God that “violates” the laws of nature?
Saunders begins by noting the Biblical roots and theological prominence of the idea of divine action in the world. For the writers of the Hebrew Bible there was no notion of nature as an autonomous system and God as an external agent; rather the world around us is an expression of God’s vital activity and purposes. This Biblical talk of divine action poses problems for modern interpreters, however, as was evident in the embarrassing predicament of the Biblical Theology movement of the 1950’s, which proclaimed that God is made known through mighty acts in history, and yet which was unable to give a satisfactory account of what God has done. The difficulty for modern theologians centers on the idea of particular, or special, divine action. Special divine action is often contrasted to God’s general action of creating and sustaining the universe as a whole. If we say that God’s purposes are realized entirely through the natural order that God creates, then there need be no conflict with what the sciences tell us about the law-governed processes constituting that natural order. Special divine action, however, involves bringing about “genuine physical effects that would not have occurred had God not chosen to act” (p. 21). This appears to require that God intervene within the natural order to turn events in a direction they would not otherwise have gone.
Modern theologians have notoriously grown wary of miracles, understood as divine acts that contravene the laws of nature. There are many reasons for this: some are distinctively theological (e.g., concerns about God’s consistency in creation), but others reflect responses to the methods and findings of the natural sciences, and especially to modern conceptions of the integrity of natural law. Saunders gives particular attention to the latter, surveying a number of philosophical analyses of the concept of a “law of nature,” and focusing especially on “necessitarian” accounts, which are presupposed in debates about determinism. He borrows from William James a working definition of determinism according to which it claims (quoting James) that “those parts of the universe already laid down absolutely appoint and decree what the other parts shall be” (Saunders, p. 85). It appears that in a deterministic universe special divine action would have to take the form of a miraculous intervention in the order of nature. If, however, the structures of nature are indeterministic, it may be possible for God to bring about particular effects in the world without contravening natural law precisely because those laws do not in every case fully specify each succeeding state. This would be a form of non-interventionist special divine action.
There currently are two leading options for developing a position of this kind. Each relies upon the interpretation of a contemporary scientific theory, quantum mechanics in one case and chaos theory in the other. A critique of these proposals is the heart of Saunders’s book, and he opens these chapters with an historical survey of the theological uses of quantum mechanics. The best known early proponent of this approach is William Pollard in Chance and Providence (London: Farber and Farber, 1958), but a number of thinkers have explored variants of this idea in the last decade (see, e.g., the essays collected in Robert John Russell, Philip Clayton, Kirk Wegter-McNelly, John Polkinghorne, eds., Quantum Mechanics: Scientific Perspectives on Divine Action [Vatican Observatory Publications and Center for Theology and the Natural Sciences, 2001]). Saunders argues that this general approach faces a number of important objections, and he is surely right about this. His critique is marred, however, by understating the extent to which most of these objections have been recognized and considered in recent discussions by the authors he cites. He contends, for example, that these accounts “all claim ’quantum events’ as a locus of SDA [special divine action] and yet do not explain how this might be the case, or even what they take to be an ’event’ in quantum mechanics” (p. 129). In fact, most of the authors he considers make it clear that they are talking about so-called “measurement events” in which (according to a widely held version of the Copenhagen interpretation of quantum theory) the wave function describing the probabilistic properties of a quantum entity (e.g., an electron) collapses non-deterministically to a single value for the “measured” (i.e., irreversibly registered) property.
Nonetheless, Saunders’s analysis of the issues is perceptive and helpful, not least because it is informed by a detailed grasp of the relevant science. He points out that quantum systems evolve deterministically according to the Schrödinger equation, and that the only point of possible indeterminism is in probabilistic state reduction, as we just noted. Further, we can conclude that the unpredictability of the outcome of this event reflects ontological indeterminism, rather than merely epistemic uncertainty, only if we adopt a particular interpretive approach to quantum theory. This interpretation takes the probabilistic character of the quantum formalism to reflect a superposition of properties in the quantum system, and it holds that the collapse of this superposition has necessary but not sufficient conditions in the history of the system and its environment. Although views of this kind dominate current discussion, there are well-developed deterministic alternatives, (for example, the pilot wave hypothesis developed by David Bohm, and some forms of the “many worlds” interpretation).
Saunders argues that even if we adopt a version of the “orthodox” indeterministic interpretation, there is little prospect of successfully developing an account of special divine action at the quantum level. He identifies four possible ways in which, given this interpretation of quantum mechanics, God could act upon a quantum system. 1) God might alter the wavefunction between measurements; 2) God might make a measurement on a system; 3) God might alter the probabilities for realizing particular outcomes; 4) God might determine the outcome of measurement. Saunders considers and rejects each of these alternatives, though it is clear that only the fourth is relevant to the project of conceiving of special divine action in a way that does not contravene the order of nature. His formulation of this fourth option is curious. “The final approach to quantum SDA in the ’orthodox’ interpretation of quantum measurement is the assertion that God simply ’ignores’ the probabilities predicted by the orthodox measurement theory and controls the outcomes of particular measurements” (p. 154). A theologian interested in non-interventionist special divine action will not say that God ignores the probability distributions predicted by quantum theory. Rather, the thesis would be that God might act in the world by determining quantum events within the ordinary probability patterns, which do, after all, permit wide variation in particular outcomes from instance to instance. If some of these quantum events were located within natural structures that amplify them in such a way that they have significant consequences on the macroscopic level, then God could affect the larger course of events without contravening any statistical or deterministic laws of nature.
Clearly, a proposal of this sort is highly speculative and intimately tied to some of the most unsettled and unsettling puzzles in the interpretation of quantum theory. The question about the amplification of quantum events, for example, is crucial; if indeterministic quantum chance is entirely subsumed within higher level deterministic regularities, then it will be of no use to the theologian looking for a means of non-interventionist special divine action. Saunders does not press this point, however. Rather, he argues that if God were to determine quantum events, then the probability patterns “either are a deception in that they have no relationship with physical reality whatsoever, or they are a representation of the chance of God acting in the same way on a subsequent occasion. Both of these conclusions are unsatisfactory…” (p. 155). The first half of this dilemma is easily dispelled. If God chooses to determine events that the causal structures of nature leave undetermined, there is no divine deception involved; quantum systems really do display these stochastic regularities, and the fact that God establishes them through direct divine action no more undercuts their standing as physical fact than if they were determined by a (non-local) hidden variable (as in Bohm’s theory). Saunders argues for the unacceptability of the second alternative in this dilemma by contending that if we treat quantum-measurement probabilities as reflecting regularities in divine action, then this commits us to a “regularitarian,” or neo-Humean, conception of natural laws. This is incompatible with the idea that a divine act could violate a law of nature, and therefore the very distinction between interventionist and non-interventionist divine action collapses. Each of the links in this argument is problematic; it is not clear how the theological view in question entails a general commitment to a regularitarian conception of natural law or why such a view would make it impossible to speak of violations of natural law (understood, following Hume in his essay on miracles, as events that fall outside well-evidenced patterns of constant conjunction). Nor is it clear why, if the argument were successful, a theologian concerned with special divine action should be troubled by this result, since the worry about law-violating interventions would then disappear. This is not to deny that there are a number of important problems facing the suggestion that God might act to determine some or all undetermined transitions at the quantum level; as we have seen, there are uncertainties of interpretation in quantum mechanics, fundamental puzzles associated with measurement and wavefunction collapse in contemporary varieties of the Copenhagen interpretation, and open questions about the amplification of quantum events. (There are also, of course, important theological misgivings that can be raised about this mode of divine action.) Saunders skillfully analyzes the scientific issues, but his discussion does not justify the conclusion that “non-interventionist quantum SDA is not theoretically possible” (p. 172).
A second approach to non-interventionist special divine action appeals to chaos theory, and is especially associated with the work of John Polkinghorne. Saunders’s discussion of mathematical chaos theory is technically sophisticated and illuminating. He gives particular attention to Edward Lorenz’s attempt to model the behavior of the atmosphere, which led to the discovery that his mathematical model displayed an extraordinarily sensitive dependence on the precise specification of initial conditions. Vanishingly small differences in initial conditions result in dramatically different states of the system on very short time scales. As a result, even though the non-linear mathematical equations describing the system develop deterministically, its future behavior is unpredictable for any finite intelligence. We can, however, recognize an overall pattern in the ensemble of possible paths of development (the “phase space”) marked out by a chaotic system. In dissipative systems, in which energy is lost over time, trajectories through the phase space will tend to contract and converge, generating a pattern known as an attractor. In so-called “strange attractors” the converging paths fold in on themselves so tightly that there is, at the limit, no energy difference between them, though they do not cross or join.
Does chaos theory provide a set of concepts useful to the theologian interested in non-interventionist special divine action in the world? The immediate answer would appear to be that it does not. The unpredictability of chaotic systems is generated out of a smoothly deterministic mathematics, and so does not provide the causal openness that special divine action appears to require. Precisely this point has often been made in response to John Polkinghorne’s appeals to chaos theory in his accounts of divine action. Saunders argues that this criticism of Polkinghorne misses the broader metaphysical thesis at work in Polkinghorne’s proposal, though Saunders’s own quite careful and effective criticism of Polkinghorne’s position also relies on noting the deterministic character of chaos theory. Polkinghorne holds that the surprising emergence of unpredictability in the midst of classical determinism provides the motivation for a metaphysical conjecture: namely, that the deterministic character of our understanding of chaotic systems is an artifact of our theory making, with its simplification and abstraction, and not a feature of the structures in the world that we are attempting to describe. Those structures are more supple, flexible, and sensitively interrelated than our theory can yet capture. Saunders goes on to show, however, that Polkinghorne’s account of divine action relies on aspects of chaos theory that arise precisely by virtue of its deterministic mathematics. “The only reason that sensitive dependence and strange attractors exist is precisely because the mathematics of chaos theory are deterministic” (p. 192). Polkinghorne suggests that God acts not by “tweaking” the initial conditions of chaotic systems but rather through a non-energetic input of “active information” that selects among nearby paths through the phase space of the system. Saunders replies that “active information input relies again on the determinism of mathematical chaos to produce the required fractal structure in attractors, the required infinite limit of that structure, and the corresponding region in which energy differences between alternative possible trajectories tends to zero” (p. 194). The dependence of Polkinghorne’s account upon properties of chaotic systems that arise only within a deterministic mathematics crucially undercuts his metaphysical conjecture that the actual structures modeled by this theory are indeterministic.
At the end of the book, Saunders turns briefly to the views of Arthur Peacocke, whose approach provides an interesting contrast to those considered so far. He too thinks it important to affirm that God acts to affect the ongoing course of events in the world. But he denies that God does so by exploiting causal incompleteness in the structure of natural processes; indeed, he regards any divine action inserted among natural causes (whether it interrupts a natural causal chain or occurs at a point of natural indeterminism) as an “intervention.” Instead, Peacocke contends that God acts by means of “whole-part,” or “top-down” constraints upon the world-as-a-whole. Nature, he notes, is organized as a hierarchy of increasingly complex systems in which lower level structures support and are incorporated within higher levels of organization. Peacocke, as a panentheist, holds that this entire structure is incorporated within the being of God, though God is not simply identical with the world. God, then, can be thought of as the highest level system-of-all-systems that embraces all the structures of nature, and God can be understood to act not as a “triggering cause” at lower levels of the system but rather as a “structuring cause” at the highest level. That is, God affects the operation of the world system in the way higher level organizational properties of a whole constrain the operation of the parts.
Saunders finds Peacocke’s position to be “the most promising current theory of SDA” (p. 213), though he acknowledges that it operates at a high level of abstraction. Following on the heels of Saunders’s detailed critical analysis of theological appeals to quantum mechanics and chaos theory, his brief discussion of Peacocke is something of an anti-climax. Peacocke’s proposal is subtle and appealing, but there clearly are critical questions that need to be raised about it. It is not apparent, for example, how Peacocke’s God could bring about particular changes in the course of events by acting as a structural constraint on the system as a whole. If God is to affect the structure of the system, then God must modify the relation of its parts, and this requires that God act upon the parts in a way that will show up in their causal history; it appears that divine action among natural causes cannot be avoided after all.
Saunders’s discussion is rich in helpful detail. His command of the relevant science allows him to fill in crucial background information and clear away misunderstandings. The result is a valuable volume that contributes significantly to focusing and deepening the engagement of theology with the natural sciences. |
c88b5fb189617a5c | . – Quantum Mechanics
Text under revision. Not yet approved by academic staff.
To understand the conceptual crisis and fundamental experiments that led to the
formulation of quantum mechanics. To understand its axiomatic bases. To learn to
solve nonrelativistic quantum mechanical problems using perturbative and nonperturbative methods.
1 - The crisis of classical physics: Photoelectric effect, specific heat of solids, black
body, atomic spectra, De Broglie hypothesis and Bohr model.
2 - Schrödinger equation: Wave–particle duality. Statistical interpretation.
Stationary state equation. Norm conservation. Current density. Free and bound
states. Position, momentum and energy observables. Properties of operators
associated with observables.
3 - Uncertainty principle: Compatible and incompatible observables. Minimum uncertainty wave packet and its relationship with the uncertainty principle. Thought experiments.
4 – Solvable models: The free particle. Spectrum, improper eigenfunctions and
comparison with the classical case. Piecewise constant potential, potential barrier.
Reflection and transmission coefficient. Potential step. Resonance scattering. The
harmonic oscillator: eigenvalues and eigenfunctions. Creation and annihilation
operators. Coherent states: properties and classical limit. Two-body problem:
classical motion. Kepler problem. Angular equation (spherical harmonics). Radial
equation. Coulomb potential (hydrogen atom). Bound states.
5- The Stern–Gerlach experiment and spin: matrix representation of spin operators.
Commutation rules.
6 - The physical foundations and formal rudiments of quantum mechanics: The
general principles of the theory: observables and operators. States and
representations. Dirac notation. Sets of compatible observables and maximum
information on the state of a system. Position and momentum operators. The
translation operator. Discrete and continuous spectra. The time evolution operator.
Schrödinger and Heisenberg representations. Ehrenfest theorem. Conserved
quantities. Correlation amplitude and time-energy uncertainty relation.
7 - Perturbation theory: time-independent for a discrete spectrum (degenerate and
non-degenerate). Time-dependent. Interaction picture. Dyson expansion. Two-state problem. Rabi oscillations. Fermi's golden rule. Time-periodic potential.
Variational method.
8 - General theory of angular momentum: eigenfunctions and eigenvalues.
Spherical harmonics, properties. Comparison with the classical case. Addition of
angular momenta. Clebsch-Gordan coefficients.
9 – Identical particles: exchange operator. Properties. Completely symmetric wave
function and completely anti-symmetric wave function. Slater determinant. Spin-statistics connection. Bosons and fermions. Pauli exclusion principle.
1. L.D. LANDAU - L. LIFSHITZ, Quantum Mechanics, Dover New York, 2000.
2. C. COHEN-TANNOUDJI - B. DIU - F. LALOE, Quantum Mechanics, Vol. I, II, Wiley and Sons,
Paris, 2005.
3. P. CALDIROLA - R. CIRELLI - G.M. PROSPERI, Introduzione alla fisica teorica, Utet.
4. J. SAKURAI, Meccanica quantistica moderna, Zanichelli, Bologna, 1996.
Lectures (66 hours) and exercises (30 hours).
Written and oral examination.
Further information can be found on the lecturer's webpage
http://www2.unicatt.it/unicattolica/docenti/index.html or on the Faculty notice board. |
d60a6d3899bb26c4 | Quantum mechanics and de Broglie's concept
Quantum mechanics is a probabilistic theory that has been developed in an abstract phase space at the scale of atomic sizes, $\sim 10^{-10}$ m. The formalism of quantum mechanics is based on the Schrödinger and Dirac equations. Quantum mechanics can predict and calculate stable energy levels for quantum systems such as atoms or free electrons, including in the presence of applied electric and magnetic fields.
In 1952, after the publication of David Bohm's papers [1], which revived de Broglie's 1927 idea of the particle guided by a pilot wave, Louis de Broglie returned to his earlier consideration of quantum mechanics, and over the following three decades he worked on a double solution theory [2] for the Schrödinger equation. De Broglie strongly believed that the prevailing quantum mechanical formalism should be replaced by a more fundamental theory. In such a theory Schrödinger's wave function $\psi$ would have a direct physical meaning, instead of Max Born's interpretation in which the squared modulus $|\psi (\vec r)|^2$ gives the probability density for finding the particle at the point $\vec r$. Many scientists, including high-level scholars, turned their backs on Louis de Broglie, considering him crazy. Nevertheless, some researchers still tried to follow up on de Broglie's and Bohm's ideas, developing approaches based on so-called hidden variables. Among the interesting new results obtained by de Broglie in the 1960s are the works [3] in which he argued that the motion of a particle should be accompanied by a variation in its mass.
Conventional quantum mechanics is not free of conceptual difficulties; they are discussed by de Broglie [4] (see also the comments by G. Lochak in the same book). Some principal difficulties of quantum mechanics, such as long-range action, are discussed in papers [5,6,7].
Thus the de Broglie relations $E = h \nu$ and $\lambda = h/p$, which allow the Schrödinger wave equation to be derived [4], de Broglie's concept of a particle guided by a real wave spreading in a sub-quantum medium, and his idea of a particle whose mass varies as it moves, together suggest a submicroscopic mechanics for quantum particles. In addition to de Broglie's ideas, submicroscopic mechanics includes: 1) notions and peculiarities of solid state physics, and 2) a rigorous mathematical background for the structure of our ordinary physical space in which all quantum mechanical phenomena occur.
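As a reminder of how the textbook derivation cited in [4] runs in its simplest, non-relativistic free-particle form (this sketch is standard material, added here only for context), consider a plane wave built from the de Broglie relations:
\[
\psi(x,t) = e^{\,i(kx-\omega t)} = e^{\,i(px-Et)/\hbar},
\qquad E = h\nu = \hbar\omega, \quad p = \frac{h}{\lambda} = \hbar k,
\]
\[
i\hbar\,\frac{\partial\psi}{\partial t} = E\,\psi, \qquad
-\,i\hbar\,\frac{\partial\psi}{\partial x} = p\,\psi,
\qquad E = \frac{p^2}{2m}
\;\Longrightarrow\;
i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\,\frac{\partial^2\psi}{\partial x^2},
\]
which is the free Schrödinger equation; adding a potential term $V(x)\psi$ gives the general one-dimensional form.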
[1] D. Bohm, A suggested interpretation of the quantum theory in terms of "hidden" variables. I, Physical Review 85, 166-179 (1952); A suggested interpretation of the quantum theory in terms of "hidden" variables. II, Physical Review 85, 180-193 (1952).
[2] L. de Broglie, Interpretation of quantum mechanics by the double solution theory, Annales de la Fondation Louis de Broglie 12, no. 4, 399- 421 (1987).
[3] L. de Broglie, Sur la dynamique du corps à masse propre variable et la formule de transformation relativiste de la chaleur, Comptes Rendus 264 B (16), 1173-1175 (1967); On the basis of wave mechanics, Comptes Rendus 277 B, no. 3, 71-73 (1973).
[4] L. de Broglie, Les incertitudes d'Heisenberg et l'interpretation probabiliste de la mechanique ondulatoire (Gauthier-Villars, Bordas, Paris, 1982), ch. 2, sect. 4. Russian translation: Соотношения неопределенностей Гейзенберга и вероятностная интерпретация волновой механики /Heisenberg’s uncertainty relations and the probabilistic interpretation of wave mechanics/ (Mir, Moscow, 1986), pp. 50-52.
[5] V. Krasnoholovets, On the way to submicroscopic description of nature, Indian Journal of Theoretical Physics 49, no. 2, pp. 81-95 (2001) (also arXiv.org e-print archive http://arXiv.org/abs/quant-ph/9908042).
[6] V. Krasnoholovets, Can quantum mechanics be cleared from conceptual difficulties?, http://arXiv.org/abs/quant-ph/0210050.
[7] V. Krasnoholovets, On the origin of conceptual difficulties of quantum mechanics, in Developments in Quantum Physics, eds. F. Columbus and V. Krasnoholovets (Nova Science Publishers Inc., New York, 2004), pp. 85-109 (also http://arXiv.org/abs/physics/0412152).
|
ceb3457916f013f4 | How to Solve the Particle in a Box Problem
In quantum mechanics, the particle in a box is a conceptually simple problem in position space that illustrates the quantum nature of particles by only allowing discrete values of energy. In this problem, we start from the Schrödinger equation, find the energy eigenvalues, and proceed to impose normalization conditions to derive the eigenfunctions associated with those energy levels.
1. Begin with the time-independent Schrödinger equation, Ĥψ = Eψ. The Schrödinger equation is one of the fundamental equations in quantum mechanics that describes how quantum states evolve in time. The time-independent equation is an eigenvalue equation, and thus, only certain eigenvalues of energy exist as solutions.
2. Substitute the Hamiltonian of a free particle into the Schrödinger equation.
• In the one-dimensional particle in a box scenario, the Hamiltonian is given by Ĥ = p̂²/2m + V(x). This is familiar from classical mechanics as the sum of the kinetic and potential energies, but in quantum mechanics, we assume that position and momentum are operators.
• In position space, the momentum operator is given by p̂ = −iℏ d/dx, so the kinetic term becomes −(ℏ²/2m) d²/dx².
• Meanwhile, we let V(x) = 0 inside the box (0 ≤ x ≤ L) and V(x) = ∞ everywhere else. Because V = 0 in the region that we are interested in, we may now write this equation as a linear differential equation with constant coefficients: −(ℏ²/2m) d²ψ/dx² = Eψ.
• Rearranging terms and defining a constant k² = 2mE/ℏ², we arrive at the following equation: d²ψ/dx² + k²ψ = 0.
3. Solve the above equation. This equation is familiar from classical mechanics as the equation describing simple harmonic motion.
• The theory of differential equations tells us that the general solution to the above equation is of the form ψ(x) = A sin(kx) + B cos(kx), where A and B are arbitrary complex constants and L is the width of the box. We are choosing coordinates such that one end of the box lies at x = 0 for simplicity of calculations.
• Of course, the solution is valid only up to an overall phase, which does change with time, but phase changes do not affect any of our observables, including energy. Therefore, for our purposes, we will write the wavefunction as only varying with position, hence the usage of the time-independent Schrödinger equation.
4. Impose boundary conditions. Remember that V = ∞ everywhere outside the box, so the wavefunction must vanish at the ends: ψ(0) = 0 and ψ(L) = 0.
• Substituting the general solution gives ψ(0) = B = 0 and ψ(L) = A sin(kL) + B cos(kL) = 0. This is a system of linear equations, so we may write this system in matrix form:
  [ 0        1       ] [A]   [0]
  [ sin(kL)  cos(kL) ] [B] = [0]
5. Take the determinant of the matrix and evaluate. In order for the above homogeneous equation to have nontrivial solutions, the determinant must vanish. This is a standard result from linear algebra. If you are not familiar with this background, you may treat this as a theorem.
• The determinant is 0·cos(kL) − 1·sin(kL) = −sin(kL), so the condition is sin(kL) = 0. The sine function is 0 only when its argument is an integer multiple of π, so kL = nπ with n = 1, 2, 3, …
• Recall that k² = 2mE/ℏ². We may then solve for E: E_n = n²π²ℏ²/(2mL²).
• These are the energy eigenvalues of the particle in a box. Because n is an integer, the energy of this system can only take on discrete values. This is a chiefly quantum mechanical phenomenon, quite unlike classical mechanics, where a particle can take on continuous values for its energy.
• The energy of the particle can only take on positive values, even at rest. The ground-state energy E_1 = π²ℏ²/(2mL²) is called the zero-point energy of the particle. The energy corresponding to n = 0 is not allowed because this physically represents that no particle is in the box. Because the energies increase quadratically, higher energy levels are spread out more than lower energy levels.
• We will now proceed to derive the energy eigenfunctions.
6. Write out the wavefunction with the unknown constant. We know from the constraint of the wavefunction at x = 0 that B = 0 (see the first equation in step 4). Therefore, the wavefunction will only contain one term from the general solution of the differential equation. Below, we substitute k = nπ/L: ψ(x) = A sin(nπx/L).
7. Normalize the wavefunction. Normalizing will determine the constant A and will ensure that the probability of finding the particle in the box is 1. Since n can only be an integer, it is convenient to set n = 1 here, as the only purpose of substituting a value is to obtain an expression for A. It is helpful to know the integral ∫₀^L sin²(πx/L) dx = L/2 when normalizing. Then 1 = ∫₀^L |ψ(x)|² dx = |A|²·L/2, so A = √(2/L).
8. Arrive at the wavefunction, ψ_n(x) = √(2/L) sin(nπx/L). This is the description of a particle inside a box, surrounded by infinite potential energy walls. While A can take on a negative value, the result would simply negate the wavefunction and result in a phase change, not an entirely different state. We can clearly see why only discrete energies are allowed here, because the box only allows those wavefunctions with nodes at x = 0 and x = L.
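As a quick numerical cross-check of these formulas, here is a minimal Python sketch (not part of the original article; the electron and the 1 nm box width are my own illustrative choices) that evaluates the first few energy levels and verifies the normalization.

```python
import numpy as np

hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e  = 9.1093837015e-31   # electron mass, kg
q_e  = 1.602176634e-19    # elementary charge, used to convert J to eV
L    = 1e-9               # box width, m (illustrative choice: 1 nm)

def energy(n):
    """Energy eigenvalue E_n = n^2 pi^2 hbar^2 / (2 m L^2), in joules."""
    return (n * np.pi * hbar / L) ** 2 / (2 * m_e)

def psi(n, x):
    """Normalized eigenfunction psi_n(x) = sqrt(2/L) * sin(n pi x / L)."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

for n in (1, 2, 3):
    print(f"E_{n} = {energy(n) / q_e:.3f} eV")   # roughly 0.376, 1.504, 3.385 eV

# Crude numerical check that |psi_1|^2 integrates to 1 over the box.
x = np.linspace(0.0, L, 10_001)
dx = x[1] - x[0]
print("norm of psi_1 ~", np.sum(psi(1, x) ** 2) * dx)
```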
• When normalizing, substituting any appropriate integer for \(n\) and performing the resulting u-substitution will always return the correct answer for \(A,\) since the change in the derivative is compensated by the change in the boundary. Verify this by setting \(n = 2\) or any other positive integer and normalizing again.
I’ve just uploaded to the arXiv my paper “A global compact attractor for high-dimensional defocusing non-linear Schrödinger equations with potential“, submitted to Dynamics of PDE. This paper continues some earlier work of mine in an attempt to understand the soliton resolution conjecture for various nonlinear dispersive equations, and in particular, nonlinear Schrödinger equations (NLS). This conjecture (which I also discussed in my third Simons lecture) asserts, roughly speaking, that any reasonable (e.g. bounded energy) solution to such equations eventually resolves into a superposition of a radiation component (which behaves like a solution to the linear Schrödinger equation) plus a finite number of “nonlinear bound states” or “solitons”. This conjecture is known in many perturbative cases (when the solution is close to a special solution, such as the vacuum state or a ground state) as well as in defocusing cases (in which no non-trivial bound states or solitons exist), but is still almost completely open in non-perturbative situations (in which the solution is large and not close to a special solution) which contain at least one bound state. In my earlier papers, I was able to show that for certain NLS models in sufficiently high dimension, one could at least say that such solutions resolved into a radiation term plus a finite number of “weakly bound” states whose evolution was essentially almost periodic (or almost periodic modulo translation symmetries). These bound states also enjoyed various additional decay and regularity properties. As a consequence of this, in five and higher dimensions (and for reasonable nonlinearities), and assuming spherical symmetry, I showed that there was a (local) compact attractor \(K_E\) for the flow: any solution with energy bounded by some given level \(E\) would eventually decouple into a radiation term, plus a state which converged to this compact attractor \(K_E\). In that result, I did not rule out the possibility that this attractor depended on the energy \(E\). Indeed, it is conceivable for many models that there exist nonlinear bound states of arbitrarily high energy, which would mean that \(K_E\) must increase in size as \(E\) increases to accommodate these states. (I discuss these results in a recent talk of mine.)
In my new paper, following a suggestion of Michael Weinstein, I consider the NLS equation
\[ i u_t + \Delta u = |u|^{p-1} u + V u \]
where \(u: \mathbb{R} \times \mathbb{R}^d \to \mathbb{C}\) is the solution, and \(V \in C^\infty_0(\mathbb{R}^d)\) is a smooth compactly supported real potential. We make the standard assumption \(1 + \frac{4}{d} < p < 1 + \frac{4}{d-2}\) (which asserts that the nonlinearity is mass-supercritical and energy-subcritical). In the absence of this potential (i.e. when \(V=0\)), this is the defocusing nonlinear Schrödinger equation, which is known to have no bound states, and in fact it is known in this case that all finite energy solutions eventually scatter into a radiation state (which asymptotically resembles a solution to the linear Schrödinger equation). However, once one adds a potential (particularly one which is large and negative), both linear bound states (solutions to the linear eigenstate equation \((-\Delta + V) Q = -E Q\)) and nonlinear bound states (solutions to the nonlinear eigenstate equation \((-\Delta+V)Q = -EQ - |Q|^{p-1} Q\)) can appear. Thus in this case the soliton resolution conjecture predicts that solutions should resolve into a scattering state (which behaves as if the potential were not present), plus a finite number of (nonlinear) bound states. There is a fair amount of work towards this conjecture for this model in perturbative cases (when the energy is small), but the case of large energy solutions is still open.
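As a very loose illustration of the dynamics (and emphatically not the setting of the paper, which works in dimension \(d \geq 11\) with spherical symmetry), here is a one-dimensional split-step Fourier sketch of \(i u_t + \Delta u = |u|^{p-1} u + V u\); the grid, time step, exponent, Gaussian well standing in for the compactly supported \(V\), and initial data are all illustrative assumptions.

```python
import numpy as np

# 1D toy version of  i u_t + u_xx = |u|^(p-1) u + V u,  evolved by Strang splitting.
N, box = 1024, 40.0
x = np.linspace(-box / 2, box / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=box / N)   # angular wavenumbers
dt, steps = 1e-3, 5000
p = 6.0                              # in 1D the mass-supercritical range is p > 1 + 4/d = 5
V = -5.0 * np.exp(-x**2)             # smooth attractive well (stand-in for the compactly supported V)
u = 2.0 * np.exp(-x**2) + 0j         # illustrative initial data

for _ in range(steps):
    # Half step of  i u_t = (|u|^(p-1) + V) u : a pure phase rotation, since |u| is unchanged
    u *= np.exp(-1j * (np.abs(u)**(p - 1) + V) * dt / 2)
    # Full step of the free evolution  i u_t + u_xx = 0,  applied in Fourier space
    u = np.fft.ifft(np.exp(-1j * k**2 * dt) * np.fft.fft(u))
    # Second half step of the nonlinear + potential part
    u *= np.exp(-1j * (np.abs(u)**(p - 1) + V) * dt / 2)

mass = np.sum(np.abs(u)**2) * (box / N)
print(f"L^2 mass after evolution: {mass:.4f}")   # conserved up to floating-point error
```

Each substep here is either a pointwise phase rotation or a unitary Fourier multiplier, so the \(L^2\) mass is preserved by construction; this is only a toy sketch of the equation, not of the paper's attractor argument.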
In my new paper, I consider the large energy case, assuming spherical symmetry. For technical reasons, I also need to assume very high dimension \(d \geq 11\). The main result is the existence of a global compact attractor \(K\): every finite energy solution, no matter how large, eventually resolves into a scattering state and a state which converges to \(K\). In particular, since \(K\) is bounded, all but a bounded amount of energy will be radiated off to infinity. Another corollary of this result is that the space of all nonlinear bound states for this model is compact. Intuitively, the point is that when the solution gets very large, the defocusing nonlinearity dominates any attractive aspects of the potential \(V\), and so the solution will disperse in this case; thus one expects the bound states to be bounded. The spherical symmetry assumption also restricts the bound states to lie near the origin, thus yielding the compactness. (It is also conceivable that the localised nature of \(V\) also restricts bound states to lie near the origin, even without the help of spherical symmetry, but I was not able to establish this rigorously.)