The Life of Psi
Philosophical Musings on the Foundations of Physics and Chemistry

Schrödinger's Cat and the Measurement Problem

Quantum mechanics is weird. Very weird. According to quantum theory, particles can be at two positions at the same time. Other particles tend to behave like waves, whereas waves, in turn, sometimes act like particles. Still other particles suddenly pop into existence, out of "nothingness" — an actual creatio ex nihilo. Not surprisingly, then, even some of the greatest scientists, such as Niels Bohr and Richard Feynman, claimed that they did not understand quantum theory. Einstein believed that some of the implications of quantum mechanics were so strange and counter-intuitive that he fiercely rejected them till the end of his life. And Schrödinger, after devoting most of his life to the development of quantum mechanics, finally admitted: "I don't like it, and I'm sorry I ever had anything to do with it."

"Everyone who is not shocked by quantum theory has not understood it." — Niels Bohr

But in the 100 years since then, most scientists have learned to live with the mind-bending, deeply disturbing claims of quantum mechanics. We somehow grew accustomed to the thought that the quantum world (or the world of the very small) is just hugely different from our familiar world (the world of the normally sized).

There are other problems with quantum theory though, which aren't simply a result of the theory's intrinsic weirdness. These problems aren't a consequence of our inability to picture the goings-on of the quantum world. They are as real as any other scientific problem, and in desperate need of a solution. One of the most crucial problems of quantum mechanics is known as the measurement problem, and it will form the subject matter of this post.

The measurement problem, in essence, boils down to the inevitable clash between the two dynamical laws of quantum mechanics: the linear, deterministic evolution described by Schrödinger's equation, and the non-linear, indeterministic (probabilistic) evolution described by the collapse postulate (see further). There's probably no better way to illustrate this problem than by looking at Erwin Schrödinger's tasteless thought experiment of the cat in a box. But before delving into the details of Schrödinger's Gedankenexperiment, I first have to say something about radioactive atoms and their quantum behaviour.

Quantum superpositions

Radioactive atoms are, by their very nature, highly unstable, and they tend to disintegrate (decay) into smaller, more stable fragments. Bi-212, for example, a radioactive isotope of bismuth, has a half-life of 60 minutes (60.55 minutes to be exact), which means that approximately half of the Bi-212 atoms in a sample will have decayed after a period of 60 minutes.

Now, imagine observing a single radioactive Bi-212 atom which started out in its undecayed form. After an hour, there will be a 50% chance that the atom has decayed, and another 50% chance that it is still intact. Let us denote the decayed and the undecayed form by the kets \left|decayed\right\rangle and \left|undecayed\right\rangle. In that case, the evolution of the Bi-212 atom during that one hour period can be represented as follows:

\begin{equation}\left|undecayed\right\rangle\;\longrightarrow\;\frac{1}{\sqrt{2}}\left(\left|undecayed\right\rangle+\left|decayed\right\rangle\right).\label{A}\end{equation}

(Don't mind the \frac{1}{\sqrt{2}} in front of the brackets. I'll come back to this in a moment.) The above evolution is governed by the Schrödinger equation. One of the equation's most important properties is that it is perfectly deterministic. That is, given the initial state of the atom at time t=0 (i.e. \left|undecayed\right\rangle), one can deterministically predict (that is, with absolute certainty) what its final state will be at time t=60 (i.e. \frac{1}{\sqrt{2}}\left(\left|undecayed\right\rangle+\left|decayed\right\rangle\right)). This final state is also called a quantum superposition, as the atom is in a superposition of being both decayed and undecayed at the same time.
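To make those 50% figures concrete, here is a small Python sketch (my own quick illustration, not part of the original post; the helper name and the printed time grid are just choices) of the exponential decay law behind them:

```python
# A minimal sketch of single-atom decay statistics: the survival probability
# follows P_survive(t) = 2**(-t / t_half). Bi-212's half-life is taken as
# 60.55 minutes, as quoted above; everything else here is illustrative.

def survival_probability(t_minutes: float, t_half: float = 60.55) -> float:
    """Probability that a single Bi-212 atom is still undecayed after t_minutes."""
    return 2.0 ** (-t_minutes / t_half)

if __name__ == "__main__":
    for t in (0.0, 30.0, 60.55, 121.1):
        p = survival_probability(t)
        print(f"t = {t:7.2f} min   P(undecayed) = {p:.3f}   P(decayed) = {1 - p:.3f}")
    # After exactly one half-life (60.55 min) both outcomes are equally
    # likely: 0.5 / 0.5 -- the 50/50 split used in the thought experiment.
```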
Schrödinger's cat

Now, let's return to Schrödinger's devilish experiment. Imagine a cat in a box, along with a single Bi-212 atom, a Geiger counter, a release mechanism with hammer, and a vial of prussic acid (hydrogen cyanide, HCN). The box is sealed and left untouched for exactly one hour. The idea is as simple as it is cruel: as long as the atom remains intact, nothing happens, and the cat lives. But as soon as the Bi-212 atom decays, the Geiger counter will detect the radioactive radiation and set the release mechanism in motion, causing the hammer to fall and shatter the vial to pieces. The hydrogen cyanide gas is released, and the cat dies a gruesome death.

Now, after one hour, the state of the atom has evolved to the right hand side of Eq. \eqref{A} — a superposition of being both decayed and undecayed. As a consequence, the Geiger counter, too, will find itself in a superposition of having both detected and not detected the decay, which in turn leaves the hammer in a superposition of having fallen and not fallen. This causes the vial to be in a superposition of being smashed to pieces and still intact, which finally leads to the strange and quite absurd result of the cat being in a superposition of being both dead and alive at the same time!

This is clearly nonsense! After all, every sensible human being knows that when one opens the box, only one of two possibilities will occur:

1. The atom has not yet decayed, and the cat is still alive;
2. The atom did decay, and the cat is dead.

After one hour, both possibilities will occur with an equal chance of 50%. But, in any case, one will always see the cat either alive or dead, never both alive and dead. This, in essence, is the measurement problem of quantum mechanics. Somehow, quantum mechanics seems to force us into believing that cats can find themselves in zombie superpositions of being simultaneously dead and alive, which is in flat contradiction with our common sense. So what is going wrong here?

Before we continue, here's how Schrödinger first coined his paradox in 1935:

"A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat); in a Geiger counter, there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer that shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The \psi-function [i.e. state] of the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts."

Schrödinger devised this hypothetical experiment to illustrate some of the flaws and limits of the Copenhagen interpretation of quantum mechanics (see further).

Quantum entanglement

If we want to gain a deeper insight into the measurement problem, we will need some formalism. Now, don't click away!
The only concepts we need are those of quantum superposition (defined above) and quantum entanglement, both of which lie at the basis of the paradoxical outcome described above. As the Geiger counter interacts with the Bi-212 atom, its state gets entangled with the state of the atom. This is how it works within the mathematical formalism of quantum mechanics.

Let the ket \left|ready\right\rangle represent the initial, ready-to-measure, state of the Geiger counter. The moment it detects radiation, an audible "click" is produced, and the \left|ready\right\rangle state evolves to the ket \left|``click"\right\rangle. Alternatively, if no radiation is measured, no sound is produced, and the \left|ready\right\rangle state evolves to the ket \left|no\;``click"\right\rangle. The measurement process of a decaying Bi-212 atom by the Geiger counter can then be written as:

\begin{equation}\left|decayed\right\rangle\left|ready\right\rangle\;\longrightarrow\;\left|decayed\right\rangle\left|``click"\right\rangle,\label{B}\end{equation}

and similarly:

\begin{equation}\left|undecayed\right\rangle\left|ready\right\rangle\;\longrightarrow\;\left|undecayed\right\rangle\left|no\;``click"\right\rangle.\label{C}\end{equation}

Now consider what would happen if the Geiger counter were to interact with the superposed state on the RHS of Eq. \eqref{A}. The pre-measurement state is as follows:

\frac{1}{\sqrt{2}}\left(\left|undecayed\right\rangle+\left|decayed\right\rangle\right)\left|ready\right\rangle.

Working out the brackets, we obtain:

\frac{1}{\sqrt{2}}\left|undecayed\right\rangle\left|ready\right\rangle+\frac{1}{\sqrt{2}}\left|decayed\right\rangle\left|ready\right\rangle.

Due to the linearity of the Schrödinger equation, each of these two terms will evolve according to the above equations \eqref{B} and \eqref{C}, and the resulting, final, state becomes:

\frac{1}{\sqrt{2}}\left|undecayed\right\rangle\left|no\;``click"\right\rangle+\frac{1}{\sqrt{2}}\left|decayed\right\rangle\left|``click"\right\rangle.

The first term represents the situation where the Bi-212 atom did not decay and the Geiger counter did not record anything, whereas the second term represents the situation in which the Bi-212 atom did decay and the Geiger counter produced an audible click. In a similar vein, the release mechanism, the hammer, the vial, and the cat will all become entangled with the atom and the Geiger counter. The initial state at t=0 could then be written as:

\left|undecayed\right\rangle_{atom}\left|ready\right\rangle_{Geiger}\left|not\;fallen\right\rangle_{hammer}\left|intact\right\rangle_{vial}\left|alive\right\rangle_{cat}.

Following Eq. \eqref{A}, the final state at t=60 becomes:

\begin{equation}\frac{1}{\sqrt{2}}\left|undecayed\right\rangle_{atom}\left|no\;``click"\right\rangle_{Geiger}\left|not\;fallen\right\rangle_{hammer}\left|intact\right\rangle_{vial}\left|alive\right\rangle_{cat}\\
+\frac{1}{\sqrt{2}}\left|decayed\right\rangle_{atom}\left|``click"\right\rangle_{Geiger}\left|fallen\right\rangle_{hammer}\left|shattered\right\rangle_{vial}\left|dead\right\rangle_{cat}.\label{D}\end{equation}

The collapse postulate

Once again, we end up with a non-classical superposition of two classical states — one where the cat is still alive, and the other where the cat has died. Since one never observes such superposed states, Dirac and von Neumann introduced the idea that the quantum superposition in Eq. \eqref{D} collapses onto one of the two classical terms whenever an observation is made (that is, whenever one opens the box). This, in short, is the collapse postulate — an essential ingredient of the (orthodox) Copenhagen interpretation of quantum mechanics.

Quantum collapses happen with a probability \mathfrak{P} given by the square of the coefficient (or probability amplitude) that precedes the term onto which the state collapses. Since \left|\frac{1}{\sqrt{2}}\right|^2=\frac{1}{2}, each term will occur with equal probability \mathfrak{P}=\frac{1}{2}, and this is exactly what we predicted above for the two possible determinate outcomes.

Notice that the collapse postulate is non-linear and indeterministic (that is, probabilistic). The typically random behaviour of quantum events thus finds its origin in this second dynamical law of quantum theory. As long as no observation is made, the system's state evolves in a deterministic fashion according to Schrödinger's equation (the first dynamical law). It is the act of observation which forces the system to take up one of the two possible classical outcomes.
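If you want to see the collapse statistics in action, here is a minimal simulation sketch (again just my own illustration; the outcome labels are mine): sampling the two terms of Eq. \eqref{D} with the Born probabilities \left|\frac{1}{\sqrt{2}}\right|^2=\frac{1}{2} gives a 50/50 split of dead and alive cats.

```python
# A minimal sketch of the collapse postulate's statistics: the superposed
# state collapses onto each branch with probability |amplitude|**2 = 1/2.
import random

amplitudes = {"alive": 1 / 2**0.5, "dead": 1 / 2**0.5}   # coefficients of Eq. (D)
probs = {outcome: abs(a) ** 2 for outcome, a in amplitudes.items()}

counts = {"alive": 0, "dead": 0}
random.seed(0)
for _ in range(10_000):
    outcome = random.choices(list(probs), weights=list(probs.values()))[0]
    counts[outcome] += 1

print(counts)   # roughly {'alive': 5000, 'dead': 5000}, as the Born rule predicts
```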
The measurement problem

At first sight, the above "solution" in terms of quantum collapses seems to correctly account for our determinate observations. But it also raises many deep questions, which have haunted scientists and philosophers alike for the last century:

1. First of all, there is the peculiar role of the conscious observer in quantum mechanics. After all, it seems that superpositions can exist only as long as we do not observe them. A cat can be in a superposition of being dead and alive, as long as we don't look. But as Einstein said in a letter to Schrödinger: "Nobody really doubts that the presence or absence of the cat is something independent of the act of observation." In a similar vein, Einstein once asked: "Is the moon really there when we don't observe it?" Although the Copernican revolution distanced itself from any belief in a geocentric, human-centered universe, it seems the quantum revolution has placed humans back at the center of everything.

"I like to think that the moon is there even if I am not looking at it." — Albert Einstein

2. This naturally brings us to the second question: what counts as an observation or measurement? Or, to put it differently, when exactly does a quantum superposition collapse? Is the measurement by the Geiger counter enough to collapse the superposition of the Bi-212 atom in Eq. \eqref{A}? Or does the collapse of the wavefunction only occur when the cat becomes entangled with the entire system as in Eq. \eqref{D}? Perhaps quantum collapses only occur at the level of the human observer? In that case, one could ask whether the state collapses as soon as the first photon reaches our retina, or only at the point where we become consciously aware of the state of the cat.

3. A final and closely related problem concerns the microscopic-to-macroscopic transition, or the transition from the quantum to the classical. Once again, it is not clear where the boundary lies.

Interpreting the quantum world

Over the years, many different interpretations of quantum mechanics have been proposed in an attempt to answer some of the above-mentioned questions. Besides 1. the Copenhagen interpretation, there are 2. the hidden variable interpretations of de Broglie and Bohm, 3. the dynamical collapse interpretations, such as the GRW interpretation of Ghirardi, Rimini and Weber, and 4. the no-collapse interpretations, which reject von Neumann's collapse postulate, such as Everett's many-worlds interpretation or the many-minds interpretation.

One of the craziest interpretations is definitely the many-worlds interpretation, which posits that upon opening the box, the universe splits into two worlds, corresponding to the two terms in Eq. \eqref{D}. In the first world, the Bi-212 atom remained intact and the cat lives; in the second world, the atom decayed and the cat is dead.

In this interpretation, everything that is physically possible also happens. But every particular outcome happens in a different world (or branch), which necessarily leads to an infinitude of worlds, or parallel universes. However, although all branches are equally real, they cannot interfere with each other. So an observer in one world cannot know what is happening in the other worlds. But these are stories for another post!
Indeed, one of the aims of this blog is to pass all of the above-mentioned interpretations in review in future posts, along with a careful and critical examination of their strengths and weaknesses. So stay tuned! Please do not hesitate to comment on the above post by leaving a reply below. Feel free also to subscribe to The Life of Psi by entering your email in the top-right corner, in order to receive notifications of new posts by email.

Pieter Thyssen

Whereas his left brain was trained as a theoretical scientist, his right brain prefers the piano. At work, Pieter builds time machines (on paper) and loves to dabble in the history and philosophy of science. He often gets stuck in another dimension, contemplating time travel and parallel universes, or thinking about ways to save Schrödinger's cat (maybe). He explores the world on foot, and takes life one cup of (Arabica) coffee at a time. Follow him on Twitter @PieterThyssen. You can reach Pieter via email.

8 comments for "Schrödinger's Cat and the Measurement Problem"

1. Dave Tett, September 12, 2013 at 2:03 pm
Great stuff Pieter, clear, concise and some brilliant illustrations!

• September 12, 2013 at 4:51 pm
Thanks Dave! I'm really happy you're the first person to comment on my blog. By the way, I'm already preparing my second post, which will be about nothing less than ... Mr. Bertlmann's socks 😉 PS: By subscribing to The Life of Psi, you will be automatically notified about future posts by email.

2. September 16, 2013 at 8:00 am
Nice, Pieter! :-)

• Lukas Vandersteene, December 16, 2013 at 4:54 pm
Hmmm ... in Latin I would say: "nihil est tam absurdum vel ridiculum ut philosophus quidam non dixerit"; free translation: nothing is so absurd or ridiculous that no philosopher has claimed it. Or in the original: "Nihil tam absurde dici potest quod non dicatur ab aliquo philosophorum" (Cicero, De Divinatione 2.58).

3. Tori, May 6, 2015 at 5:59 pm
This is absolutely fantastic: this whole article explains things in a tangible way and I am quite fond of your writing style. I am getting a Schrodinger's cat tattoo: the dead cat with its corresponding equation on my left hip and the cat that is alive, along with its corresponding equation, on my left hip. I am very fond of this concept but you took it above and beyond and I shall be continually fascinated by everything physics. Thank you for taking the time to write this!

4. Kofi Agyemang, December 6, 2015 at 1:31 am
Hi, I am replying to this article two years later, but I was wondering if you could explain to me quantum teleportation and why it's not an example of "spooky action at a distance". Thanks
Psychology Wiki

Schrödinger equation

In the standard interpretation of quantum mechanics, the quantum state, also called a wavefunction or state vector, is the most complete description that can be given to a physical system. Solutions to Schrödinger's equation describe not only atomic and subatomic systems, atoms and electrons, but also macroscopic systems, possibly even the whole universe. The equation is named after Erwin Schrödinger, who constructed it in 1926.

The Schrödinger equation

The Schrödinger equation takes several different forms, depending on the physical situation. This section presents the equation for the general case and for the simple case encountered in many textbooks.

For a general quantum system:

$i\hbar\frac{\partial}{\partial t}\Psi = \hat{H}\Psi.$

For a single particle in three dimensions:

$i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{r},t) = -\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},t) + V(\mathbf{r})\,\Psi(\mathbf{r},t),$

where
• $\mathbf{r}$ is the particle's position in three-dimensional space,
• $\Psi(\mathbf{r},t)$ is the wavefunction, which is the amplitude for the particle to have a given position r at any given time t,
• $m$ is the mass of the particle,
• $V(\mathbf{r})$ is the time-independent external potential energy of the particle at each position r (see Self-action in a system of elementary particles),
• $\nabla^2$ is the Laplace operator.

Historical background and development

Einstein interpreted Planck's quanta as photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, a mysterious wave-particle duality. Since energy and momentum are related in the same way as frequency and wavenumber in relativity, it followed that the momentum of a photon is proportional to its wavenumber. De Broglie hypothesized that this is true for all particles, for electrons as well as photons: that the energy and momentum of an electron are the frequency and wavenumber of a wave. Assuming that the waves travel roughly along classical paths, he showed that they form standing waves only for certain discrete frequencies, discrete energy levels which reproduced the old quantum condition.

Following up on these ideas, Schrödinger decided to find a proper wave equation for the electron. He was guided by Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system--- the trajectories of light rays become sharp tracks which obey an analog of the principle of least action. Hamilton believed that mechanics was the zero-wavelength limit of wave propagation, but did not formulate an equation for those waves. This is what Schrödinger did, and a modern version of his reasoning is reproduced in the next section. The equation he found is (in natural units):

$i\frac{\partial}{\partial t}\psi = -\frac{1}{2m}\nabla^2\psi + V(\mathbf{r})\,\psi.$

Using this equation, Schrödinger computed the spectral lines of hydrogen by treating a hydrogen atom's single negatively charged electron as a wave, $\psi$, moving in a potential well, $V$, created by the positively charged proton. This computation reproduced the energy levels of the Bohr model. But this was not enough, since Sommerfeld had already seemingly correctly reproduced relativistic corrections.
Schrödinger used the relativistic energy-momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (natural units):

$\left(E + \frac{e^2}{r}\right)^2\psi(\mathbf{r}) = -\nabla^2\psi(\mathbf{r}) + m^2\,\psi(\mathbf{r}).$

He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself in an isolated mountain cabin with a lover. While there, Schrödinger decided that the earlier nonrelativistic calculations were novel enough to publish, and decided to leave the problem of relativistic corrections for the future. He put together his wave equation and the spectral analysis of hydrogen in a paper in 1926. The paper was enthusiastically endorsed by Einstein, who saw the matter-waves as the visualizable antidote to what he considered to be the overly formal matrix mechanics.

The Schrödinger equation tells you the behaviour of $\psi$, but does not say what $\psi$ is. Schrödinger tried unsuccessfully, in his fourth paper, to interpret it as a charge density. In 1926 Max Born, just a few days after Schrödinger's fourth and final paper was published, successfully interpreted $\psi$ as a probability amplitude. Schrödinger, though, always opposed a statistical or probabilistic approach, with its associated discontinuities; like Einstein, who believed that quantum mechanics was a statistical approximation to an underlying deterministic theory, Schrödinger was never reconciled to the Copenhagen interpretation.

Derivation

The derivation rests on three observations.

(1) The total energy E of a particle is

$E = \frac{p^2}{2m} + V(\mathbf{r}).$

This is the classical expression for a particle with mass m, where the total energy E is the sum of the kinetic energy, $\frac{p^2}{2m}$, and the potential energy V. The momentum of the particle is p, or mass times velocity. The potential energy is assumed to vary with position, and possibly time as well.

(2) Einstein's light-quanta hypothesis of 1905, which asserts that the energy E of a photon is proportional to the frequency f of the corresponding electromagnetic wave:

$E = hf = \hbar\omega,$

where the frequency f of the quanta of radiation (photons) is related to the energy by Planck's constant h, and $\omega = 2\pi f$ is the angular frequency of the wave.

(3) The de Broglie hypothesis of 1924, which states that any particle can be associated with a wave, represented mathematically by a wavefunction Ψ, and that the momentum p of the particle is related to the wavelength λ of the associated wave by

$p = \frac{h}{\lambda} = \hbar k,$

where $\lambda$ is the wavelength and $k = 2\pi/\lambda$ is the wavenumber of the wave. Expressing p and k as vectors, we have $\mathbf{p} = \hbar\mathbf{k}$.

Schrödinger's great insight, late in 1925, was to express the phase of a plane wave as a complex phase factor:

$\Psi(\mathbf{r},t) = A\,e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)},$

and to realize that, since

$\frac{\partial\Psi}{\partial t} = -i\omega\,\Psi,$

we find

$E\,\Psi = \hbar\omega\,\Psi = i\hbar\frac{\partial\Psi}{\partial t},$

and similarly, since

$\nabla^2\Psi = -k^2\,\Psi,$

we find

$p^2\,\Psi = \hbar^2 k^2\,\Psi = -\hbar^2\nabla^2\Psi.$

So that, again for a plane wave, he obtained the energy and momentum as differential operators. And by inserting these expressions for the energy and momentum into the classical formula we started with, we get Schrödinger's famed equation for a single particle in the 3-dimensional case in the presence of a potential V:

$i\hbar\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi + V(\mathbf{r})\,\Psi.$

The particle is described by a wave; in natural units, the frequency is the energy E of the particle, while the momentum p is the wavenumber k. Because of special relativity, these are not two separate assumptions. The total energy is the same function of momentum and position as in classical mechanics:

$E = T(p) + V(x),$

where the first term T(p) is the kinetic energy and the second term V(x) is the potential energy. Schrödinger required that a wave packet at position x with wavenumber k will move along the trajectory determined by Newton's laws in the limit that the wavelength is small.
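The substitutions above are easy to check symbolically. The following short Python sketch is purely illustrative (it uses the sympy library and a one-dimensional plane wave): it verifies that the plane wave with the dispersion relation $\omega = \hbar k^2/2m$ satisfies the free Schrödinger equation.

```python
# Symbolic check that psi = exp(i(k x - w t)) with w = hbar k**2 / (2 m)
# solves the free one-dimensional Schrödinger equation.
import sympy as sp

x, t = sp.symbols("x t", real=True)
k, m, hbar = sp.symbols("k m hbar", positive=True)
w = hbar * k**2 / (2 * m)                       # E = hbar*w = (hbar*k)**2 / (2m)
psi = sp.exp(sp.I * (k * x - w * t))

lhs = sp.I * hbar * sp.diff(psi, t)             # i*hbar d(psi)/dt
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)   # -(hbar^2 / 2m) d^2(psi)/dx^2
print(sp.simplify(lhs - rhs))                   # prints 0: the equation holds
```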
Consider first the case without a potential, V=0. A plane wave with the right energy/frequency relationship obeys the free Schrödinger equation

$i\hbar\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi,$

and by adding together plane waves, you can make an arbitrary wave. When there is no potential, a wavepacket should travel in a straight line at the classical velocity. The velocity v of a wavepacket is

$v = \frac{\partial\omega}{\partial k} = \frac{\hbar k}{m} = \frac{p}{m},$

which is the momentum over the mass, as it should be. This is one of Hamilton's equations from mechanics,

$v = \frac{\partial H}{\partial p},$

after identifying the energy and momentum of a wavepacket with the frequency and wavenumber.

To include a potential energy, consider that as a particle moves, the energy is conserved, so that for a wavepacket with approximate wavenumber k at approximate position x the quantity

$\hbar\omega = \frac{\hbar^2 k^2}{2m} + V(x)$

must be constant. The frequency doesn't change as a wave moves, but the wavenumber does. So where there is a potential energy, it must add in the same way:

$i\hbar\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi + V(x)\,\Psi.$

This is the time-dependent Schrödinger equation. It is the equation for the energy in classical mechanics, turned into a differential equation by substituting

$E \to i\hbar\frac{\partial}{\partial t}, \qquad p \to -i\hbar\nabla.$

Schrödinger studied the standing-wave solutions, since these were the energy levels. Standing waves have a complicated dependence on space, but vary in time in a simple way:

$\Psi(x,t) = \psi(x)\,e^{-iEt/\hbar};$

substituting, the time-dependent equation becomes the standing-wave equation

$E\,\psi(x) = -\frac{\hbar^2}{2m}\nabla^2\psi(x) + V(x)\,\psi(x),$

which is the original time-independent Schrödinger equation.

In a potential gradient, the k-vector of a short-wavelength wave must vary from point to point, to keep the total energy constant. Sheets perpendicular to the k-vector are the wavefronts, and they gradually change direction, because the wavelength is not everywhere the same. A wavepacket follows the shifting wavefronts with the classical velocity, with the acceleration equal to the force divided by the mass. An easy modern way to verify that Newton's second law holds for wavepackets is to take the Fourier transform of the time-dependent Schrödinger equation. For an arbitrary polynomial potential, this is called the Schrödinger equation in the momentum representation:

$i\hbar\frac{\partial}{\partial t}\tilde{\Psi}(k,t) = \frac{\hbar^2 k^2}{2m}\,\tilde{\Psi}(k,t) + V\!\left(i\frac{\partial}{\partial k}\right)\tilde{\Psi}(k,t).$

The group-velocity relation for the Fourier-transformed wave-packet gives the second of Hamilton's equations.

Versions

There are several equations which go by Schrödinger's name. The time-dependent equation is

$i\hbar\frac{\partial\Psi}{\partial t} = H\Psi,$

where H is a linear operator acting on the wavefunction $\Psi$. H takes as input one $\Psi$ and produces another in a linear way, a function-space version of a matrix multiplying a vector. For the specific case of a single particle in one dimension moving under the influence of a potential V,

$i\hbar\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\Psi}{\partial x^2} + V(x)\,\Psi,$

and the operator H can be read off: it is a combination of the operator which takes the second derivative and the operator which pointwise multiplies by V(x). When acting on $\Psi$ it reproduces the right-hand side. For a particle in three dimensions, the only difference is more derivatives:

$i\hbar\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi + V(\mathbf{r})\,\Psi,$

and for N particles, the difference is that the wavefunction lives in 3N-dimensional configuration space, the space of all possible particle positions. This last equation is in a very high dimension, so that the solutions are not easy to visualize. The time-independent equation is the equation for the standing waves, the eigenvalue equation for H.
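The eigenvalue form lends itself to a direct numerical illustration. The sketch below is illustrative only: it discretises $H = -\frac{1}{2}\frac{d^2}{dx^2} + V(x)$ (natural units $\hbar = m = 1$) with a second-difference approximation on an arbitrarily chosen grid; for the harmonic potential $V = x^2/2$ the lowest eigenvalues should approach $\frac{1}{2}, \frac{3}{2}, \frac{5}{2}, \ldots$

```python
# Illustrative finite-difference sketch of the time-independent equation as an
# eigenvalue problem: build H on a grid and diagonalise (hbar = m = 1).
import numpy as np

n, L = 1000, 20.0                         # grid points and box size (arbitrary)
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# -1/2 d^2/dx^2 via second differences: diagonal 1/dx^2, off-diagonal -1/(2 dx^2)
main = np.full(n, 1.0 / dx**2) + 0.5 * x**2     # kinetic diagonal + V(x) = x^2/2
off = np.full(n - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)
print(energies[:4])                       # approximately [0.5, 1.5, 2.5, 3.5]
```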
In abstract form, for a general quantum system, it is written

$H\psi = E\,\psi.$

For a particle in one dimension,

$E\,\psi(x) = -\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2} + V(x)\,\psi(x).$

But there is a further restriction--- the solution must not grow at infinity, so that it has a finite L^2-norm:

$\int |\psi(x)|^2\,dx < \infty.$

For example, when there is no potential, the equation reads

$E\,\psi = -\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2},$

which has oscillatory solutions for E>0 (the C's are arbitrary constants),

$\psi(x) = C_1\,e^{ikx} + C_2\,e^{-ikx}, \qquad k = \frac{\sqrt{2mE}}{\hbar},$

and exponential solutions for E<0,

$\psi(x) = C_1\,e^{\kappa x} + C_2\,e^{-\kappa x}, \qquad \kappa = \frac{\sqrt{-2mE}}{\hbar}.$

For a constant potential V, the solution is oscillatory for E>V and exponential for E<V, corresponding to energies which are allowed or disallowed in classical mechanics. Oscillatory solutions have a classically allowed energy and correspond to actual classical motions, while the exponential solutions have a disallowed energy and describe a small amount of quantum bleeding into the classically disallowed region, due to quantum tunneling. If the potential V grows at infinity, the motion is classically confined to a finite region, which means that in quantum mechanics every solution becomes an exponential far enough away. The condition that the exponential is decreasing restricts the energy levels to a discrete set, called the allowed energies.

A solution of the time-independent equation is called an energy eigenstate with energy E:

$H\psi_E = E\,\psi_E.$

To find the time dependence of the state, consider starting the time-dependent equation with the initial condition $\psi_E(x)$. The time derivative at t=0 is everywhere proportional to the value:

$i\hbar\frac{\partial\Psi}{\partial t}\Big|_{t=0} = H\psi_E = E\,\psi_E.$

So at first the whole function just gets rescaled, and it thus maintains the property that its time derivative is proportional to itself, the equation being linear. So for all times,

$\Psi(x,t) = A(t)\,\psi_E(x),$

so that the solution of the time-dependent equation with this initial condition is

$\Psi(x,t) = \psi_E(x)\,e^{-iEt/\hbar}.$

This is a restatement of the fact that solutions of the time-independent equation are the standing-wave solutions of the time-dependent equation. They only get multiplied by a phase as time goes by, and otherwise are unchanged. Since $|\Psi|^2$ is time-independent, they are called stationary states.

The nonlinear Schrödinger equation

$i\,\partial_t\psi = -\frac{1}{2}\,\partial_x^2\psi + \kappa\,|\psi|^2\,\psi$

is the partial differential equation for the complex field ψ. This equation arises from the Hamiltonian

$H = \int dx\left[\frac{1}{2}\,|\partial_x\psi|^2 + \frac{\kappa}{2}\,|\psi|^4\right],$

with the Poisson brackets

$\{\psi(x),\,\psi^*(y)\} = i\,\delta(x-y).$

It must be noted that this is a classical field equation. Unlike its linear counterpart, it never describes the time evolution of a quantum state.

Properties

The Schrödinger equation describes the time evolution of a quantum state, and must determine the future value from the present value. A classical field equation can be second order in time derivatives, since the classical state can include the time derivative of the field. But a quantum state is a full description of a system, so the Schrödinger equation is always first order in time.

The Schrödinger equation is linear in the wavefunction: if $\Psi_1$ and $\Psi_2$ are solutions to the time-dependent equation, then so is $a\Psi_1 + b\Psi_2$, where a and b are any complex numbers. In quantum mechanics, the time evolution of a quantum state is always linear, for fundamental reasons. Although there are nonlinear versions of the Schrödinger equation, these are not equations which describe the evolution of a quantum state, but classical field equations like Maxwell's equations or the Klein–Gordon equation.

The Schrödinger equation itself can be thought of as the equation of motion for a classical field rather than for a wavefunction, and taking this point of view, it describes a coherent wave of nonrelativistic matter: a wave of a Bose condensate or a superfluid with a large indefinite number of particles and a definite phase and amplitude.
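The phase-only time dependence of stationary states described above is easy to see concretely. The following toy sketch is illustrative (the eigenvalue and the two-component vector are arbitrary choices, natural units):

```python
# Illustrative sketch: under psi(t) = psi(0) * exp(-i E t), the probability
# density |psi|^2 never changes, even though the phase rotates.
import numpy as np

E = 1.5                               # some eigenvalue of H (arbitrary)
psi0 = np.array([0.6, 0.8j])          # a normalised eigenvector in a toy basis
for t in (0.0, 1.0, 2.0):
    psi_t = psi0 * np.exp(-1j * E * t)
    print(t, np.abs(psi_t) ** 2)      # always [0.36, 0.64]: a stationary state
```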
The time-independent equation is also linear, but in this case linearity has a slightly different meaning. If two wavefunctions $\psi_1$ and $\psi_2$ are solutions to the time-independent equation with the same energy E, then any linear combination of the two is a solution with energy E. Two different solutions with the same energy are called degenerate.

In an arbitrary potential, there is one obvious degeneracy: if a wavefunction $\psi$ solves the time-independent equation, so does its complex conjugate $\psi^*$. By taking linear combinations, the real and imaginary parts of $\psi$ are each solutions. So restricting attention to real-valued wavefunctions does not affect the time-independent eigenvalue problem.

In the time-dependent equation, complex-conjugate waves move in opposite directions. Given a solution to the time-dependent equation $\Psi(x,t)$, the replacement

$\Psi(x,t) \to \Psi^*(x,-t)$

produces another solution, and is the extension of the complex-conjugation symmetry to the time-dependent case. The symmetry of complex conjugation is called time-reversal.

The Schrödinger equation is unitary, which means that the total norm of the wavefunction, the sum of the squares of the value at all points,

$\int |\Psi(x)|^2\,dx,$

has zero time derivative. The derivative of $\Psi^*$ is given by the complex-conjugate equation

$-i\hbar\frac{\partial}{\partial t}\Psi^*(x) = \int H^\dagger(x,y)\,\Psi^*(y)\,dy,$

where the operator $H^\dagger$ is defined as the continuous analog of the Hermitian conjugate,

$H^\dagger(x,y) = H^*(y,x).$

For a discrete basis, this just means that the matrix elements of the linear operator H obey

$H^\dagger_{ij} = H^*_{ji}.$

The derivative of the inner product is

$\frac{d}{dt}\int \Psi^*\Psi\,dx = \frac{i}{\hbar}\int\!\!\int \Psi^*(x)\left(H^\dagger - H\right)\!(x,y)\,\Psi(y)\,dx\,dy,$

and is proportional to the imaginary part of H. If H has no imaginary part, if it is self-adjoint, then the probability is conserved. This is true not just for the Schrödinger equation as written, but for the Schrödinger equation with nonlocal hopping,

$i\hbar\frac{\partial}{\partial t}\Psi(x) = \int H(x,y)\,\Psi(y)\,dy,$

so long as

$H(x,y) = H^*(y,x);$

the particular choice

$H(x,y) = -\frac{\hbar^2}{2m}\,\delta''(x-y) + V(x)\,\delta(x-y)$

reproduces the local hopping in the ordinary Schrödinger equation. On a discrete lattice approximation to a continuous space, with lattice spacing $\epsilon$, H(x,y) has a simple form:

$H(x,y) = -\frac{\hbar^2}{2m\,\epsilon^2}$

whenever x and y are nearest neighbors. On the diagonal,

$H(x,x) = \frac{n\,\hbar^2}{2m\,\epsilon^2} + V(x),$

where n is the number of nearest neighbors.

If the potential is bounded from below, the eigenfunctions of the Schrödinger equation have energy which is also bounded from below. This can be seen most easily by using the variational principle, as follows. (See also below.) For any linear operator A bounded from below, the eigenvector with the smallest eigenvalue is the vector that minimizes the quantity

$\langle\psi|A|\psi\rangle$

over all $\psi$ which are normalized:

$\langle\psi|\psi\rangle = 1.$

For the Schrödinger Hamiltonian bounded from below, the smallest eigenvalue is called the ground state energy. That energy is the minimum value of

$\int\left(\frac{\hbar^2}{2m}\,|\nabla\psi|^2 + V(x)\,|\psi(x)|^2\right)dx$

(we used an integration by parts). The right-hand side is never smaller than the smallest value of $V(x)$; in particular, the ground state energy is positive when $V(x)$ is everywhere positive.

For potentials that are bounded below and are not infinite in such a way that they divide space into regions which are inaccessible by quantum tunneling, there is a ground state which minimizes the integral above. The lowest-energy wavefunction is real and nondegenerate and has the same sign everywhere. To prove this, let the ground state wavefunction be $\psi$. The real and imaginary parts are separately ground states, so it is no loss of generality to assume that $\psi$ is real. Suppose now, for contradiction, that $\psi$ changes sign. Define $\eta$ to be the absolute value of $\psi$,

$\eta = |\psi|.$

The potential and kinetic energy integrals for $\eta$ are equal to those for $\psi$, except that $\eta$ has a kink wherever $\psi$ changes sign.
The integrated-by-parts expression for the kinetic energy is the sum of the squared magnitude of the gradient, and it is always possible to round out the kink in such a way that the gradient gets smaller at every point, so that the kinetic energy is reduced. This also proves that the ground state is nondegenerate. If there were two ground states $\psi_1$ and $\psi_2$, not proportional to each other and both everywhere nonnegative, then a linear combination of the two is still a ground state, but it can be made to have a sign change.

For one-dimensional potentials, every eigenstate is nondegenerate, because the number of sign changes is equal to the level number. Already in two dimensions it is easy to get a degeneracy--- for example, if a particle is moving in a separable potential, V(x,y) = U(x) + W(y), then the energy levels are the sums of the energies of the one-dimensional problems. It is easy to see that by adjusting the overall scale of U and W, the levels can be made to collide. For standard examples, the three-dimensional harmonic oscillator and the central potential, the degeneracies are a consequence of symmetry.

The probability density of a particle is $\rho = |\Psi|^2$. The probability flux is defined as

$\mathbf{j} = \frac{\hbar}{2mi}\left(\Psi^*\nabla\Psi - \Psi\nabla\Psi^*\right).$

The probability flux satisfies the continuity equation

$\frac{\partial\rho}{\partial t} + \nabla\cdot\mathbf{j} = 0,$

where $\rho$ is the probability density, measured in units of (probability)/(volume). This equation is the mathematical equivalent of the probability conservation law. For a plane wave,

$\Psi = A\,e^{i(kx - \omega t)}, \qquad \rho = |A|^2, \qquad j = |A|^2\,\frac{\hbar k}{m}.$

So that not only is the probability of finding the particle the same everywhere, but the probability flux is as expected from an object moving at the classical velocity $\hbar k/m$. The reason that the Schrödinger equation admits a probability flux is that all the hopping is local and forward in time.

There are many linear operators which act on the wavefunction, and each one defines a Heisenberg matrix when the energy eigenstates are discrete. For a single particle, the operator which takes the derivative of the wavefunction in a certain direction,

$p = -i\hbar\frac{\partial}{\partial x},$

is called the momentum operator. Multiplying operators is just like multiplying matrices: the product of A and B acting on $\psi$ is A acting on the output of B acting on $\psi$. An eigenstate of p obeys the equation

$-i\hbar\frac{\partial}{\partial x}\psi = \hbar k\,\psi$

for a number k, and for a normalizable wavefunction this restricts k to be real; the momentum eigenstate is a wave with frequency k. The position operator x multiplies each value of the wavefunction at the position x by x:

$(x\,\psi)(x) = x\,\psi(x).$

So that, in order to be an eigenstate of x, a wavefunction must be entirely concentrated at one point:

$\psi(x) = \delta(x - x_0).$

In terms of p, the Hamiltonian is

$H = \frac{p^2}{2m} + V(x).$

It is easy to verify that p acting on x acting on psi gives two terms,

$p\,x\,\psi = -i\hbar\,x\frac{\partial\psi}{\partial x} - i\hbar\,\psi,$

while x acting on p acting on psi reproduces only the first term,

$x\,p\,\psi = -i\hbar\,x\frac{\partial\psi}{\partial x},$

so that the difference of the two is not zero,

$(p\,x - x\,p)\,\psi = -i\hbar\,\psi,$

or in terms of operators:

$x\,p - p\,x = i\hbar.$

Since the time derivative of a state is

$\frac{\partial\Psi}{\partial t} = -\frac{i}{\hbar}\,H\Psi,$

while the complex conjugate is

$\frac{\partial\Psi^*}{\partial t} = +\frac{i}{\hbar}\,(H\Psi)^*,$

the time derivative of a matrix element obeys the Heisenberg equation of motion. This establishes the equivalence of the Schrödinger and Heisenberg formalisms, ignoring the mathematical fine points of the limiting procedure for continuous space.

The Schrödinger equation satisfies the correspondence principle. In the limit of small-wavelength wavepackets, it reproduces Newton's laws. This is easy to see from the equivalence to matrix mechanics.
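The canonical commutator $x\,p - p\,x = i\hbar$ can also be seen in a finite-dimensional approximation. The sketch below is illustrative only: it builds x and p as matrices on an arbitrarily chosen grid ($\hbar = 1$, central differences) and checks the commutator on a smooth test function, away from the grid boundaries.

```python
# Illustrative grid sketch of [x, p] = i (hbar = 1): x acts by pointwise
# multiplication, p = -i d/dx acts by a central difference.
import numpy as np

n, L = 400, 10.0
xs = np.linspace(-L / 2, L / 2, n)
dx = xs[1] - xs[0]

X = np.diag(xs)                                            # position operator
P = -1j * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dx)

comm = X @ P - P @ X
psi = np.exp(-xs**2)                                       # smooth test function
lhs = comm @ psi
# Interior points satisfy [x, p] psi ~ i * psi up to O(dx^2) discretisation error:
print(np.allclose(lhs[1:-1], 1j * psi[1:-1], atol=1e-2))   # True
```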
All operators in Heisenberg's formalism obey the quantum analog of Hamilton's equations:

$\frac{dA}{dt} = \frac{i}{\hbar}\,[H, A].$

So that, in particular, the equations of motion for the X and P operators are

$\frac{dX}{dt} = \frac{P}{m}, \qquad \frac{dP}{dt} = -V'(X).$

In the Schrödinger picture, the interpretation of this equation is that it gives the time rate of change of the matrix element between two states when the states change with time. Taking the expectation value in any state shows that Newton's laws hold not only on average, but exactly, for these quantities.

The Schrödinger equation does not take into account relativistic effects; as a wave equation, it is invariant under a Galilean transformation, but not under a Lorentz transformation. But in order to include relativity, the physical picture must be altered in a radical way. The Klein–Gordon equation uses the relativistic mass-energy relation (natural units),

$E^2 = p^2 + m^2,$

to produce the differential equation

$\frac{\partial^2\psi}{\partial t^2} = \nabla^2\psi - m^2\,\psi,$

which is relativistically invariant, but second order in time, and so cannot be an equation for the quantum state. This equation also has the property that there are solutions with both positive and negative frequency; a plane-wave solution obeys

$\omega^2 = k^2 + m^2,$

which has two solutions, one with positive frequency, the other with negative frequency. This is a disaster for quantum mechanics, because it means that the energy is unbounded below.

A more sophisticated attempt to solve this problem uses a first-order wave equation, the Dirac equation, but again there are negative-energy solutions. In order to solve this problem, it is essential to go to a multiparticle picture, and to consider the wave equations as equations of motion for a quantum field, not for a wavefunction. The reason is that relativity is incompatible with a single-particle picture. A relativistic particle cannot be localized to a small region without the particle number becoming indefinite. When a particle is localized in a box of length L, the momentum is uncertain by an amount roughly proportional to h/L, by the uncertainty principle. This leads to an energy uncertainty of hc/L, when |p| is large enough that the mass of the particle can be neglected. This uncertainty in energy is equal to the mass-energy of the particle when

$L = \frac{h}{mc},$

and this length is called the Compton wavelength. Below this length, it is impossible to localize a particle and be sure that it stays a single particle, since the energy uncertainty is large enough to produce more particles from the vacuum by the same mechanism that localizes the original particle.

But there is another approach to relativistic quantum mechanics which does allow you to follow single-particle paths, and it was discovered within the path-integral formulation. If the integration paths in the path integral include paths which move both backwards and forwards in time as a function of their own proper time, it is possible to construct a purely positive-frequency wavefunction for a relativistic particle. This construction is appealing, because the equation of motion for the wavefunction is exactly the relativistic wave equation, but with a nonlocal constraint that separates the positive- and negative-frequency solutions. The positive-frequency solutions travel forward in time, the negative-frequency solutions travel backwards in time. In this way, they both analytically continue to a statistical field correlation function, which is also represented by a sum over paths. But in real space, they are the probability amplitudes for a particle to travel between two points, and can be used to generate the interaction of particles in a point-splitting and joining framework.
The relativistic-particle point of view is due to Richard Feynman. Feynman's method also constructs the theory of quantized fields, but from a particle point of view. In this theory, the equations of motion for the field can be interpreted as the equations of motion for a wavefunction only with caution--- the wavefunction is only defined globally, and in some way related to the particle's proper time. The notion of a localized particle is also delicate--- a localized particle in the relativistic-particle path integral corresponds to the state produced when a local field operator acts on the vacuum, and exactly which state is produced depends on the choice of field variables.

There are some general techniques for solving the equation, and in some special cases, special methods can be used. When the potential is zero, the Schrödinger equation is linear with constant coefficients:

$i\hbar\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi.$

The solution for any initial condition can be found by Fourier transforms. Because the coefficients are constant, an initial plane wave stays a plane wave; only the coefficient changes,

$\Psi(x,t) = A(t)\,e^{ikx},$

so that A is also oscillating in time,

$A(t) = A\,e^{-i\omega t},$

and the solution is

$\Psi(x,t) = A\,e^{i(kx - \omega t)},$

where $\omega = \frac{\hbar k^2}{2m}$, a restatement of de Broglie's relations. To find the general solution, write the initial condition as a sum of plane waves by taking its Fourier transform:

$\Psi_0(x) = \int \tilde{\Psi}_0(k)\,e^{ikx}\,\frac{dk}{2\pi}.$

The equation is linear, so each plane wave evolves independently:

$\Psi(x,t) = \int \tilde{\Psi}_0(k)\,e^{i\left(kx - \frac{\hbar k^2}{2m}t\right)}\,\frac{dk}{2\pi},$

which is the general solution. When complemented by an effective method for taking Fourier transforms, it becomes an efficient algorithm for finding the wavefunction at any future time--- Fourier transform the initial conditions, multiply by a phase, and transform back.

An easy and instructive example is the Gaussian wavepacket,

$\Psi_0(x) = e^{-\frac{x^2}{2a}},$

where a is a positive real number, the square of the width of the wavepacket. The total normalization of this wavefunction is

$\int |\Psi_0|^2\,dx = \sqrt{\pi a}.$

The Fourier transform is a Gaussian again, in terms of the wavenumber k,

$\tilde{\Psi}_0(k) = \sqrt{2\pi a}\;e^{-\frac{a k^2}{2}},$

with the physics convention which puts the factors of $2\pi$ in Fourier transforms in the k-measure. Each separate wave only phase-rotates in time, so that the time-dependent Fourier-transformed solution is

$\tilde{\Psi}(k,t) = \sqrt{2\pi a}\;e^{-\frac{a k^2}{2}}\,e^{-i\frac{\hbar k^2}{2m}t} = \sqrt{2\pi a}\;e^{-\left(a + \frac{i\hbar t}{m}\right)\frac{k^2}{2}}.$

The inverse Fourier transform is still a Gaussian, but the parameter a has become complex, and there is an overall normalization factor:

$\Psi(x,t) = \sqrt{\frac{a}{a + \frac{i\hbar t}{m}}}\;e^{-\frac{x^2}{2\left(a + \frac{i\hbar t}{m}\right)}}.$

The branch of the square root is determined by continuity in time--- it is the value which is nearest to the positive square root of a. It is convenient to rescale time to absorb m, replacing t/m by t (and setting $\hbar = 1$).

The integral of $\Psi$ over all space is invariant, because it is the inner product of $\Psi$ with the state of zero energy, which is a wave with infinite wavelength, a constant function of space. For any energy eigenstate $\eta(x)$, the inner product

$\langle\eta|\Psi\rangle = \int \eta^*(x)\,\Psi(x)\,dx$

only changes in time in a simple way: its phase rotates with a frequency determined by the energy of $\eta$. When $\eta$ has zero energy, like the infinite-wavelength wave, it doesn't change at all.

The sum of the absolute square of $\Psi$ is also invariant, which is a statement of the conservation of probability. Explicitly in one dimension,

$|\Psi(x,t)|^2 = \frac{a}{\sqrt{a^2 + t^2}}\;e^{-\frac{a\,x^2}{a^2 + t^2}},$

which gives the norm

$\int |\Psi|^2\,dx = \sqrt{\pi a},$

which has preserved its value, as it must. The width of the Gaussian is the interesting quantity, and it can be read off from the form of $|\Psi|^2$:

$\sqrt{\frac{a^2 + t^2}{a}}.$

The width eventually grows linearly in time, as $t/\sqrt{a}$. This is wave-packet spreading--- no matter how narrow the initial wavefunction, a Schrödinger wave eventually fills all of space. The linear growth is a reflection of the momentum uncertainty--- the wavepacket is confined to a narrow width $\sqrt{a}$, and so it has a momentum which is uncertain by the reciprocal amount $1/\sqrt{a}$, a spread in velocity of $1/m\sqrt{a}$, and therefore in the future position by $t/m\sqrt{a}$, where the factor of m has been restored by undoing the earlier rescaling of time.
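The Fourier recipe just described (transform, multiply by a phase, transform back) is easy to try numerically. The following sketch is purely illustrative: the grid size, box length, and times are arbitrary choices, with $\hbar = m = 1$ as in the rescaled formulas above.

```python
# Illustrative spectral evolution of a Gaussian wavepacket under the free
# Schrödinger equation (hbar = m = 1): FFT, multiply each mode by
# exp(-i k^2 t / 2), inverse FFT, and measure the growing width of |psi|^2.
import numpy as np

n, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # angular wavenumbers

a = 1.0                                          # initial width parameter
psi0 = np.exp(-x**2 / (2 * a))                   # Gaussian initial condition

for t in (0.0, 5.0, 10.0):
    psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k**2 * t / 2))
    rho = np.abs(psi_t) ** 2
    rho /= rho.sum()
    width = np.sqrt((rho * x**2).sum())          # r.m.s. width of |psi|^2
    print(f"t = {t:4.1f}   width = {width:.3f}")
# The width grows, asymptotically linearly in t, however narrow the start.
```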
Galilean boosts are transformations which look at the system from the point of view of an observer moving with a steady velocity −v. A boost must change the physical properties of a wavepacket in the same way as in classical mechanics,

$x' = x + vt, \qquad p' = p + mv,$

so that the phase factor of a free Schrödinger plane wave,

$e^{i(px - Et)/\hbar},$

is only different in the boosted coordinates by a phase which depends on x and t, but not on p. An arbitrary superposition of plane-wave solutions with different values of p is the same superposition of boosted plane waves, up to an overall x,t-dependent phase factor. So any solution to the free Schrödinger equation, $\Psi(x,t)$, can be boosted into other solutions (setting $m = \hbar = 1$):

$\Psi_v(x,t) = e^{i\left(vx - \frac{v^2 t}{2}\right)}\;\Psi(x - vt,\, t).$

Boosting a constant wavefunction produces a plane wave. More generally, boosting a plane wave,

$e^{i\left(kx - \frac{k^2 t}{2}\right)},$

produces a boosted wave,

$e^{i\left((k+v)x - \frac{(k+v)^2 t}{2}\right)}.$

Boosting the spreading Gaussian wavepacket,

$\Psi(x,t) = \sqrt{\frac{a}{a+it}}\;e^{-\frac{x^2}{2(a+it)}},$

produces the moving Gaussian,

$\Psi_v(x,t) = \sqrt{\frac{a}{a+it}}\;e^{-\frac{(x-vt)^2}{2(a+it)}}\;e^{i\left(vx - \frac{v^2 t}{2}\right)},$

which spreads in the same way.

The narrow-width limit of the Gaussian wavepacket solution is the propagator K. For other differential equations, this is sometimes called the Green's function, but in quantum mechanics it is traditional to reserve the name Green's function for the time Fourier transform of K. When a is the infinitesimal quantity $\epsilon$, the Gaussian initial condition, rescaled so that its integral is one,

$\Psi_0(x) = \frac{1}{\sqrt{2\pi\epsilon}}\;e^{-\frac{x^2}{2\epsilon}},$

becomes a delta function, so that its time evolution,

$K(x,t) = \frac{1}{\sqrt{2\pi(it+\epsilon)}}\;e^{-\frac{x^2}{2(it+\epsilon)}},$

gives the propagator. Note that a very narrow initial wavepacket instantly becomes infinitely wide, with a phase which is more rapidly oscillatory at large values of x. This might seem strange--- the solution goes from being concentrated at one point to being everywhere at all later times, but it is a reflection of the momentum uncertainty of a localized particle. Also note that the norm of the wavefunction is infinite, but this is also correct, since the square of a delta function is divergent in the same way.

The factor of $\epsilon$ is an infinitesimal quantity which is there to make sure that integrals over K are well defined. In the limit that $\epsilon$ becomes zero, K becomes purely oscillatory, and integrals of K are not absolutely convergent. In the remainder of this section, it will be set to zero, but in order for all the integrations over intermediate states to be well defined, the limit $\epsilon\to 0$ is only to be taken after the final state is calculated.

The propagator is the amplitude for reaching point x at time t, when starting at the origin, x=0. By translation invariance, the amplitude for reaching a point x when starting at point y is the same function, only translated:

$K(x,y;t) = K(x-y;t).$

In the limit when t is small, the propagator converges to a delta function,

$\lim_{t\to 0} K(x-y;t) = \delta(x-y),$

but only in the sense of distributions. The integral of this quantity multiplied by an arbitrary differentiable test function gives the value of the test function at zero. To see this, note that the integral over all space of K is equal to 1 at all times,

$\int K(x;t)\,dx = 1,$

since this integral is the inner product of K with the uniform wavefunction. But the phase factor in the exponent has a nonzero spatial derivative everywhere except at the origin, and so when the time is small there are fast phase cancellations at all but one point. This is rigorously true when the limit $\epsilon\to 0$ is taken after everything else.
So the propagation kernel is the future time evolution of a delta function, and it is continuous in a sense: it converges to the initial delta function at small times. If the initial wavefunction is an infinitely narrow spike at position $x_0$,

$\Psi_0(x) = \delta(x - x_0),$

it becomes the oscillatory wave

$\Psi(x,t) = K(x - x_0;\,t).$

Since every function can be written as a sum of narrow spikes,

$\Psi_0(x) = \int \Psi_0(y)\,\delta(x-y)\,dy,$

the time evolution of every function is determined by the propagation kernel:

$\Psi(x,t) = \int \Psi_0(y)\,K(x-y;\,t)\,dy.$

And this is an alternate way to express the general solution. The interpretation of this expression is that the amplitude for a particle to be found at point x at time t is the amplitude that it started at y, times the amplitude that it went from y to x, summed over all the possible starting points. In other words, it is a convolution of the kernel K with the initial condition. Since the amplitude to travel from x to y after a time can be considered in two steps, the propagator obeys the identity

$K(x-z;\,t+t') = \int K(x-y;\,t)\,K(y-z;\,t')\,dy,$

which can be interpreted as follows: the amplitude to travel from x to z in time t+t' is the sum of the amplitude to travel from x to y in time t, multiplied by the amplitude to travel from y to z in time t', summed over all possible intermediate states y. This is a property of an arbitrary quantum system, and by subdividing the time into many segments, it allows the time evolution to be expressed as a path integral.

The spreading of wavepackets in quantum mechanics is directly related to the spreading of probability densities in diffusion. For a particle which is random-walking, the probability density function at any point satisfies the diffusion equation

$\frac{\partial\rho}{\partial t} = \frac{1}{2}\nabla^2\rho,$

where the factor of 2, which can be removed by rescaling either time or space, is only for convenience. A solution of this equation is the spreading Gaussian,

$\rho(x,t) = \frac{1}{\sqrt{2\pi t}}\;e^{-\frac{x^2}{2t}},$

and since the integral of $\rho$ is constant, while the width is becoming narrow at small times, this function approaches a delta function at t=0,

$\lim_{t\to 0}\rho(x,t) = \delta(x),$

again only in the sense of distributions, so that

$\lim_{t\to 0}\int \rho(x,t)\,f(x)\,dx = f(0)$

for any smooth test function f. The spreading Gaussian is the propagation kernel for the diffusion equation, and it obeys the convolution identity

$K(t)\ast K(t') = K(t+t'),$

which allows diffusion to be expressed as a path integral. The propagator is the exponential of an operator H,

$K(t) = e^{-tH},$

which is the infinitesimal diffusion operator,

$H = -\frac{\nabla^2}{2}.$

A matrix has two indices, which in continuous space makes it a function of x and x'. In this case, because of translation invariance, the matrix element K only depends on the difference of the positions, and a convenient abuse of notation is to refer to the operator, the matrix elements, and the function of the difference by the same name:

$K(x,x';\,t) = K(x-x';\,t).$

Translation invariance means that continuous matrix multiplication,

$C(x,x'') = \int A(x,x')\,B(x',x'')\,dx',$

is really convolution,

$C(\Delta) = \int A(\Delta - \Delta')\,B(\Delta')\,d\Delta'.$

The exponential can be defined over a range of t's which include complex values, so long as integrals over the propagation kernel stay convergent:

$K(z) = e^{-zH}.$

As long as the real part of z is positive, K is exponentially decreasing for large values of x, and integrals over K are absolutely convergent. The limit of this expression for z coming close to the pure imaginary axis is the Schrödinger propagator found above,

$K_{Schr}(t) = K(z = it + \epsilon),$

and this gives a more conceptual explanation for the time evolution of Gaussians. From the fundamental identity of exponentiation, or path integration,

$K(z)\,K(z') = K(z + z')$

holds for all complex z values where the integrals are absolutely convergent so that the operators are well defined.
So quantum evolution starting from a Gaussian, which is the diffusion kernel K,

$\Psi_0(x) = K(x;\,a) = \frac{1}{\sqrt{2\pi a}}\;e^{-\frac{x^2}{2a}},$

gives the time-evolved state

$\Psi(x,t) = K(x;\,a + it) = \frac{1}{\sqrt{2\pi(a+it)}}\;e^{-\frac{x^2}{2(a+it)}}.$

This explains the diffusive form of the Gaussian solutions.

The variational principle asserts that for any Hermitian matrix A, the eigenvector corresponding to the lowest eigenvalue minimizes the quantity

$\langle\psi|A|\psi\rangle$

on the unit sphere $\langle\psi|\psi\rangle = 1$. This follows by the method of Lagrange multipliers: at the minimum, the gradient of the function is parallel to the gradient of the constraint,

$A\psi = \lambda\,\psi,$

which is the eigenvalue condition, so that the extreme values of a quadratic form A are the eigenvalues of A, and the value of the function at the extreme values is just the corresponding eigenvalue:

$\langle\psi|A|\psi\rangle = \lambda.$

When the Hermitian matrix is the Hamiltonian, the minimum value is the lowest energy level. In the space of all wavefunctions, the unit sphere is the space of all normalized wavefunctions $\psi$; the ground state minimizes

$\int \psi^*\left(-\frac{1}{2}\nabla^2 + V(x)\right)\psi\,dx$

or, after an integration by parts,

$\int \left(\frac{1}{2}\,|\nabla\psi|^2 + V(x)\,|\psi|^2\right)dx.$

All the stationary points come in complex conjugate pairs, since the integrand is real. Since the stationary points are eigenvalues, any linear combination is a stationary point, and the real and imaginary parts are both stationary points.

For a particle in a positive-definite potential, the ground state wavefunction is real and positive, and has a dual interpretation as the probability density for a diffusion process. The analogy between diffusion and nonrelativistic quantum motion, originally discovered and exploited by Schrödinger, has led to many exact solutions. A positive-definite wavefunction,

$\psi(x) = e^{-W(x)},$

is a solution to the time-independent Schrödinger equation with m=1 and potential

$V(x) = \frac{1}{2}\left(|\nabla W|^2 - \nabla^2 W\right),$

with zero total energy. W is the logarithm of the ground state wavefunction. The second derivative term is higher order in $\hbar$, and ignoring it gives the semi-classical approximation.

The form of the ground state wavefunction is motivated by the observation that the ground state wavefunction is the Boltzmann probability for a different problem: the probability for finding a particle diffusing in space with the free-energy at different points given by W. If the diffusion obeys detailed balance and the diffusion constant is everywhere the same, the Fokker–Planck equation for this diffusion is the Schrödinger equation when the time parameter is allowed to be imaginary. This analytic continuation gives the eigenstates a dual interpretation--- either as the energy levels of a quantum system, or as the relaxation times for a stochastic equation.

W should grow at infinity, so that the wavefunction has a finite integral. The simplest analytic form is

$W(x) = \frac{\omega x^2}{2},$

with an arbitrary constant $\omega$, which gives the potential

$V(x) = \frac{\omega^2 x^2}{2} - \frac{\omega}{2}.$

This potential describes a harmonic oscillator, with the ground state wavefunction

$\psi(x) = e^{-\frac{\omega x^2}{2}}.$

The total energy is zero, but the potential is shifted by a constant. The ground state energy of the usual unshifted harmonic oscillator potential,

$V(x) = \frac{\omega^2 x^2}{2},$

is then the additive constant,

$E_0 = \frac{\omega}{2},$

which is the zero-point energy of the oscillator.

Another simple but useful form is obtained when W is proportional to the radial coordinate, $W = a|x|$. This is the ground state for two different potentials, depending on the dimension. In one dimension, the corresponding potential is singular at the origin, where it has some nonzero density,

$V(x) = \frac{a^2}{2} - a\,\delta(x),$

and, up to some rescaling of variables, this is the lowest energy state for a delta-function potential, with the bound state energy added on,
with the ground state energy

$E_0 = -\frac{a^2}{2}$

and the ground state wavefunction

$\psi(x) = e^{-a|x|}.$

In higher dimensions, the same form gives the potential (in three dimensions)

$V(r) = \frac{a^2}{2} - \frac{a}{r},$

which can be identified as the attractive Coulomb law, up to an additive constant which is the ground state energy. This is the superpotential that describes the lowest energy level of the hydrogen atom, once the mass is restored by dimensional analysis:

$\psi(r) = e^{-r/r_0},$

where $r_0$ is the Bohr radius, with energy

$E_0 = -\frac{\hbar^2}{2m r_0^2}.$

The ansatz $W = ar + b\ln r$ modifies the Coulomb potential to include a quadratic term proportional to $1/r^2$, which is useful for nonzero angular momentum.

In the mathematical formulation of quantum mechanics, a physical system is fully described by a vector in a complex Hilbert space, the collection of all possible normalizable wavefunctions. The wavefunction is just an alternate name for the vector of complex amplitudes, and only in the case of a single particle in the position representation is it a wave in the usual sense, a wave in space and time. For more complex systems, it is a wave in an enormous space of all possible worlds. Two nonzero vectors which are multiples of each other, two wavefunctions which are the same up to rescaling, represent the same physical state.

The wavefunction vector can be written in several ways:

1. As an abstract ket vector: $|\psi\rangle$.
2. As a list of complex numbers, the components relative to a discrete list of normalizable basis vectors $|n\rangle$: $\psi_n = \langle n|\psi\rangle$.
3. As a continuous superposition of non-normalizable basis vectors, like position states $|x\rangle$: $\psi(x) = \langle x|\psi\rangle$.

The divide between the continuous basis and the discrete basis can be bridged by limiting arguments. The two can be formally unified by thinking of each as a measure on the real number line.

In the most abstract notation, the Schrödinger equation is written

$i\hbar\frac{d}{dt}|\psi\rangle = H|\psi\rangle,$

which only says that the wavefunction evolves linearly in time, and names the linear operator which gives the time derivative: the Hamiltonian H. In terms of the discrete list of coefficients,

$i\hbar\frac{d}{dt}\psi_n = \sum_m H_{nm}\,\psi_m,$

which just reaffirms that time evolution is linear, since the Hamiltonian acts by matrix multiplication. In a continuous representation, the Hamiltonian is a linear operator, which acts by the continuous version of matrix multiplication:

$i\hbar\frac{\partial}{\partial t}\psi(x) = \int H(x,y)\,\psi(y)\,dy.$

Taking the complex conjugate:

$-i\hbar\frac{\partial}{\partial t}\psi^*(x) = \int H^*(x,y)\,\psi^*(y)\,dy.$

In order for the time evolution to be unitary, to preserve the inner products, the time derivative of the inner product must be zero,

$\frac{d}{dt}\langle\phi|\psi\rangle = 0,$

for arbitrary states, which requires that H is Hermitian. In a discrete representation this means that $H_{ij} = H^*_{ji}$. When H is continuous, it should be self-adjoint, which adds some technical requirements: H does not mix up normalizable states with states which violate boundary conditions or which are grossly unnormalizable.

The formal solution of the equation is the matrix exponential (natural units):

$|\psi(t)\rangle = e^{-iHt}\,|\psi(0)\rangle.$

For every time-independent Hamiltonian operator, H, there exists a set of quantum states, $|\psi_E\rangle$, known as energy eigenstates, and corresponding real numbers E satisfying the eigenvalue equation

$H|\psi_E\rangle = E\,|\psi_E\rangle.$

This is the time-independent Schrödinger equation. For the case of a single particle, the Hamiltonian is the following linear operator (natural units),

$H = -\frac{\nabla^2}{2m} + V(x),$

which is a self-adjoint operator when V is not too singular and does not grow too fast. Self-adjoint operators have the property that their eigenvalues are real in any basis, and their eigenvectors form a complete set, either discrete or continuous.
Expressed in a basis of eigenvectors of H, the Schrödinger equation becomes trivial:

i\hbar \frac{d}{dt} C_n = E_n C_n

which means that each energy eigenstate is only multiplied by a complex phase:

C_n(t) = e^{-i E_n t/\hbar}\, C_n(0)

This is what matrix exponentiation means: the time evolution acts to rotate the eigenfunctions of H. When H is expressed as a matrix for wavefunctions in a discrete energy basis:

H_{nm} = E_n \delta_{nm}

so that:

\left( e^{-iHt} \right)_{nm} = e^{-i E_n t}\, \delta_{nm}

The physical properties of the C's are extracted by acting with operators, matrices. By redefining the basis so that it rotates with time, the matrices become time dependent, which is the Heisenberg picture.

Galilean symmetry requires that H(p) is quadratic in p in both the classical and quantum Hamiltonian formalism. In order for Galilean boosts to produce a p-independent phase factor, px - Ht must have a very special form: translations in p need to be compensated by a shift in H. This is only true when H is quadratic. The infinitesimal generator of boosts in both the classical and quantum case is:

B = \sum_i m_i x_i - t \sum_i p_i

where the sum is over the different particles, and B, x, p are vectors. The Poisson bracket/commutator of B \cdot dv with x and p generates infinitesimal boosts, with v the infinitesimal boost velocity vector:

x_i \rightarrow x_i + t\, dv \qquad p_i \rightarrow p_i + m_i\, dv

Iterating these relations is simple, since they add a constant amount at each step. By iterating, the dv's incrementally sum up to the finite quantity v:

x_i \rightarrow x_i + t v \qquad p_i \rightarrow p_i + m_i v

B divided by the total mass is the current center of mass position minus the time times the center of mass velocity:

\frac{B}{M} = X_{\mathrm{cm}} - t V_{\mathrm{cm}}

In other words, B/M is the current guess for the position that the center of mass had at time zero. The statement that B doesn't change with time is the center of mass theorem. For a Galilean invariant system, the center of mass moves with a constant velocity, and the total kinetic energy is the sum of the center of mass kinetic energy and the kinetic energy measured relative to the center of mass.

Since B is explicitly time dependent, H does not commute with B; rather:

\frac{dB}{dt} = \frac{\partial B}{\partial t} + \{ B, H \} = -P + \{ B, H \} = 0

This gives the transformation law for H under infinitesimal boosts: the change in H under an infinitesimal boost is entirely given by the change of the center of mass kinetic energy, which is the dot product of the total momentum with the infinitesimal boost velocity:

H \rightarrow H + P \cdot dv

The two quantities (H, P) form a representation of the Galilean group with central charge M, where only H and P are classical functions on phase-space or quantum mechanical operators, while M is a parameter. The transformation law for infinitesimal v:

P' = P + M\, dv \qquad H' = H + P \cdot dv

can be iterated as before: P goes from P to P + Mv in infinitesimal increments of dv, while H changes at each step by an amount proportional to P, which changes linearly. The final value of H is then changed by the value of P halfway between the starting value and the ending value:

H' = H + \left( P + \frac{Mv}{2} \right) \cdot v = H + P \cdot v + \frac{M v^2}{2}

The factors proportional to the central charge M are the extra wavefunction phases.

Boosts give too much information in the single-particle case, since Galilean symmetry completely determines the motion of a single particle. Given a multi-particle time dependent solution:

\psi_t(x_1, x_2, \ldots, x_n)

with a potential that depends only on the relative positions of the particles, it can be used to generate the boosted solution:

\psi'_t(x_1, \ldots, x_n) = \psi_t(x_1 - vt, \ldots, x_n - vt)\; e^{i \left( M v \cdot X_{\mathrm{cm}} - \frac{M v^2}{2} t \right)}

For the standing wave problem, the motion of the center of mass just adds an overall phase. When solving for the energy levels of multiparticle systems, Galilean invariance allows the center of mass motion to be ignored.
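The zero-energy superpotential construction from the exact-solutions discussion above is also easy to verify symbolically. A minimal sketch (Python with SymPy; the symbol names and the simplify-based check are mine, and the radial form is checked on the x > 0 branch where the absolute value is trivial):

```python
import sympy as sp

x = sp.symbols('x', real=True)
omega, a = sp.symbols('omega a', positive=True)

def zero_energy_residual(W):
    """For psi = exp(-W), return -psi''/2 + V*psi with the superpotential
    form V = (W'^2 - W'')/2 (units with m = 1); it should simplify to 0."""
    psi = sp.exp(-W)
    V = (sp.diff(W, x)**2 - sp.diff(W, x, 2)) / 2
    return sp.simplify(-sp.diff(psi, x, 2) / 2 + V * psi)

print(zero_energy_residual(omega * x**2 / 2))  # harmonic oscillator: 0
print(zero_energy_residual(a * x))             # radial/delta form, x > 0: 0
```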
The term “from months to days” is a favorite slogan of mine and I have relied on it religiously for over two decades to illustrate the fundamental benefit of symbolic computation. Whether it’s the efficient development of complex physical models using MapleSim, or exploration of parametric design surface equations (my dissertation) using good old fashioned Maple V Release 2, the punch that symbolic computation provided was to automate the algebraic mechanics of equation development. Countless generations of grad students have developed equations for complex models, produced impressive results, and got to the finish line faster because of the algebraic boost of symbolic computation. I’m thrilled to see this fundamental dimension carry through to our latest generation of products anchored by MapleSim. The context is much more focused today and the stakes seem to be much higher, but when all is said and done, everything is about saving massive amounts of time and effort.

I have to thank my friend and colleague Dr. Stephen Watt (of Maple and Axiom fame) for posting a link on Facebook to a recent post in MIT’s Technology Review physics blog. It discusses the work of Christoph Koutschan and Doron Zeilberger as they analyze the efforts of 1950s physicist Chaim Pekeris of the Weizmann Institute of Science in Israel. Pekeris was one of the first to apply computing techniques to solve problems in quantum physics (e.g. the Schrödinger equation). Koutschan and Zeilberger are modern researchers in the field of computer algebra (intimately related to symbolic computation).

A highlight of the article was the anecdotes of Pekeris’ Herculean efforts to both procure and use a computer of the 1950s. I was particularly intrigued by the comment that it was John Von Neumann who had to twist Albert Einstein’s arm (both served on the technical committee at the Weizmann Institute) to approve the building of the Institute's first computer (sorry, no Best Buy in those days). The silly side of my brain, tucked neatly behind my right brain, immediately began musing about the possible dialog between these two giants of modern science ...

Von Neumann: I think we need a computer.
Einstein: What for? Computers are nothing but nuts and bolts. What we need to do is be smarter and more imaginative!
Von Neumann: You know Al, one day, you’ll be able to get recipes through your computer...
Einstein: ... I think we need a computer ...

In the end, of course, Pekeris did get his computer and did manage to break miraculous new scientific ground merging physics with the then-nascent field of computer science. Fast forward to today: Koutschan and Zeilberger’s work retraced Pekeris’ mathematical and computational steps and assessed what the effort would have been. The findings were unsurprising in some ways but surprising in others. First, as expected, in terms of sheer numerical processing power, your average desktop today is orders of magnitude more capable than their original WEIZAC machine. As the article states, “WEIZAC was an asynchronous computer operating on 40-bit words. Instructions consisted of 20-bits: an 8-bit instruction code and 12-bits for addressing. For a memory it had a magnetic drum that could store 1,024 words. Today you'd get more processing power out of a washing machine.” The net result was really from days to fractions of seconds.
The WEIZAC computer currently on display [image from Wikipedia]

The surprising part was on the equation development side. The first step in any exercise in modeling is, of course, the equation derivation. For this, Koutschan and Zeilberger successfully replicated Pekeris’ recipe and, using modern algebraic tools like Maple, managed to speed up this part of the task from the original 20 days to about 2 days – i.e., one order of magnitude. So one part of me cheers … my claim of “from months to days” has been validated by some pretty heavy-duty scientists. Going from 20 days to 2 is essentially the same single-order-of-magnitude speed-up as going from months to days. There is, however, the flip side of this coin … is this about the best we can do? Could there be a natural and fairly restrictive limit to a “Moore’s-type” law for algebraic manipulation for modeling? As the article comments, the issue is “wetware”. The human factor. As long as humans are included in the modeling loop, there will be a natural floor to how quickly a complex task can be done.

Being an eternal optimist, though, I ended up concluding that although there will always be wetware bottlenecks, you can still achieve a heck of a lot by continually automating more and more human volatility out of the workflow. In many ways, I believe this is what MapleSim is doing and indeed, what Maple has always done in a broader context: take care of more and more algebraic tedium and clutter to reduce sources of error and speed up subtasks one “20 to 2 days” at a time. With complex modern systems like cars and space vehicles, there are an awful lot of subsystems, and these time savings add up pretty quickly. I’ll take that extra 18 days any day.

The original article from the MIT Technology Review blog
Homepage of Doron Zeilberger
Homepage of Christoph Koutschan
Regge trajectories from the two-body, bound-state Thompson equation using a quark-confining interaction in momentum space

Solutions of two-body, bound-state equations have recently been developed for quark-antiquark bound-state pairs. These solutions use a confining potential in momentum space as input into three-dimensional reductions of the Bethe-Salpeter equation using special subtraction procedures. Regge trajectories are calculated for the Schrödinger equation which display the well-known unphysical behavior where lower mass trajectories overlap higher mass ones and also display nonlinearity. Both of these features contradict experiment. Regge trajectories obtained from the Thompson equation, which is relativistic in origin, avoid both of these problems.

Physical Review D, American Physical Society (APS)
Geometric phase

In classical and quantum mechanics, the geometric phase, Pancharatnam–Berry phase (named after S. Pancharatnam and Sir Michael Berry), Pancharatnam phase or most commonly Berry phase, is a phase difference acquired over the course of a cycle, when a system is subjected to cyclic adiabatic processes, which results from the geometrical properties of the parameter space of the Hamiltonian.[1] The phenomenon was first discovered in 1956,[2] and rediscovered in 1984.[3] It can be seen in the Aharonov–Bohm effect and in the conical intersection of potential energy surfaces. In the case of the Aharonov–Bohm effect, the adiabatic parameter is the magnetic field enclosed by two interference paths, and it is cyclic in the sense that these two paths form a loop. In the case of the conical intersection, the adiabatic parameters are the molecular coordinates. Apart from quantum mechanics, it arises in a variety of other wave systems, such as classical optics. As a rule of thumb, it can occur whenever there are at least two parameters characterizing a wave in the vicinity of some sort of singularity or hole in the topology; two parameters are required because either the set of nonsingular states will not be simply connected, or there will be nonzero holonomy.

Waves are characterized by amplitude and phase, and both may vary as a function of those parameters. The geometric phase occurs when both parameters are changed simultaneously but very slowly (adiabatically), and eventually brought back to the initial configuration. In quantum mechanics, this could involve rotations but also translations of particles, which are apparently undone at the end. One might expect that the waves in the system return to the initial state, as characterized by the amplitudes and phases (and accounting for the passage of time). However, if the parameter excursions correspond to a loop instead of a self-retracing back-and-forth variation, then it is possible that the initial and final states differ in their phases. This phase difference is the geometric phase, and its occurrence typically indicates that the system's parameter dependence is singular (its state is undefined) for some combination of parameters.

To measure the geometric phase in a wave system, an interference experiment is required. The Foucault pendulum is an example from classical mechanics that is sometimes used to illustrate the geometric phase. This mechanics analogue of the geometric phase is known as the Hannay angle.

Berry phase in quantum mechanics

In a quantum system at the n-th eigenstate, an adiabatic evolution of the Hamiltonian evolves the system such that it remains in the n-th eigenstate of the Hamiltonian, while also obtaining a phase factor. The phase obtained has a contribution from the state's time evolution and another from the variation of the eigenstate with the changing Hamiltonian. The second term corresponds to the Berry phase, and for non-cyclical variations of the Hamiltonian it can be made to vanish by a different choice of the phase associated with the eigenstates of the Hamiltonian at each point in the evolution. However, if the variation is cyclical, the Berry phase cannot be cancelled; it is invariant and becomes an observable property of the system. From the Schrödinger equation the Berry phase \gamma can be calculated to be:

\gamma[C] = i\oint_C \langle n,t| \left( \nabla_R |n,t\rangle \right)\, dR
where R parametrizes the cyclic adiabatic process. It follows a closed path C in the appropriate parameter space. A recent review of geometric phase effects on electronic properties was given by Xiao, Chang and Niu.[4] The geometric phase along the closed path C can also be calculated by integrating the Berry curvature over the surface enclosed by C.

Examples of geometric phases

The Foucault pendulum

One of the easiest examples is the Foucault pendulum. An easy explanation in terms of geometric phases is given by von Bergmann and von Bergmann:[5]

How does the pendulum precess when it is taken around a general path C? For transport along the equator, the pendulum will not precess. [...] Now if C is made up of geodesic segments, the precession will all come from the angles where the segments of the geodesics meet; the total precession is equal to the net deficit angle which in turn equals the solid angle enclosed by C modulo 2π. Finally, we can approximate any loop by a sequence of geodesic segments, so the most general result (on or off the surface of the sphere) is that the net precession is equal to the enclosed solid angle.

To put it in different words, there are no inertial forces that could make the pendulum precess, so the precession (relative to the direction of motion of the path along which the pendulum is carried) is entirely due to the turning of this path. Thus the orientation of the pendulum undergoes parallel transport. For the original Foucault pendulum, the path is a circle of latitude, and by the Gauss–Bonnet theorem, the phase shift is given by the enclosed solid angle.

Polarized light in an optical fiber

A second example is linearly polarized light entering a single-mode optical fiber. Suppose the fiber traces out some path in space and the light exits the fiber in the same direction as it entered. Then compare the initial and final polarizations. In the semiclassical approximation the fiber functions as a waveguide and the momentum of the light is at all times tangent to the fiber. The polarization can be thought of as an orientation perpendicular to the momentum. As the fiber traces out its path, the momentum vector of the light traces out a path on the sphere in momentum space. The path is closed since the initial and final directions of the light coincide, and the polarization is a vector tangent to the sphere. Going to momentum space is equivalent to taking the Gauss map. There are no forces that could make the polarization turn, just the constraint to remain tangent to the sphere. Thus the polarization undergoes parallel transport and the phase shift is given by the enclosed solid angle (times the spin, which in the case of light is 1).

Stochastic pump effect

A stochastic pump is a classical stochastic system that responds with nonzero, on average, currents to periodic changes of parameters. The stochastic pump effect can be interpreted in terms of a geometric phase in the evolution of the moment generating function of stochastic currents.[6]

Spin ½

The geometric phase can be evaluated exactly for a spin-½ particle in a magnetic field.[1]

Geometric phase defined on attractors

While Berry's formulation was originally defined for linear Hamiltonian systems, it was soon realized by Ning and Haken[7] that a similar geometric phase can be defined for entirely different systems, such as nonlinear dissipative systems that possess certain cyclic attractors.
They showed that such cyclic attractors exist in a class of nonlinear dissipative systems with certain symmetries.[8]

Exposure in molecular adiabatic potential surface intersections

There are several ways to compute the geometric phase in molecules within the Born-Oppenheimer framework. One way is through the "non-adiabatic coupling M\times M matrix" defined by

\tau_{ij}^{\mu} = \left\langle \psi_i \,|\, \partial^{\mu} \psi_j \right\rangle

where \psi_i is the adiabatic electronic wavefunction, depending on the nuclear parameters R_{\mu}. The nonadiabatic coupling can be used to define a loop integral, analogous to a Wilson loop (1974) in field theory, developed independently for the molecular framework by M. Baer (1975, 1980, 2000). Given a closed loop \Gamma, parameterized by R_{\mu}(t), where t \in [0,1] is a parameter and R_{\mu}(t+1) = R_{\mu}(t), the D-matrix is given by:

D[\Gamma] = \hat{P} e^{\oint_{\Gamma} \tau^{\mu} dR_{\mu}}

(here, \hat{P} is a path ordering symbol). It can be shown that once M is large enough (i.e. a sufficient number of electronic states is considered) this matrix is diagonal, with the diagonal elements equal to e^{i\beta_j}, where \beta_j are the geometric phases associated with the loop for the j-th adiabatic electronic state.

For time-reversal symmetrical electronic Hamiltonians the geometric phase reflects the number of conical intersections encircled by the loop. More accurately:

e^{i\beta_j} = \left( -1 \right)^{N_j}

where N_j is the number of conical intersections involving the adiabatic state \psi_j encircled by the loop \Gamma.

An alternative to the D-matrix approach would be a direct calculation of the Pancharatnam phase. This is especially useful if one is interested only in the geometric phases of a single adiabatic state. In this approach, one takes a number N+1 of points \left( n = 0, \ldots, N \right) along the loop R(t_n), with t_0 = 0 and t_N = 1, and then, using only the j-th adiabatic states \psi_j[R(t_n)], computes the Pancharatnam product of overlaps:

I_j(\Gamma, N) = \prod_{n=0}^{N-1} \left\langle \psi_j[R(t_n)] \,|\, \psi_j[R(t_{n+1})] \right\rangle

In the limit N \to \infty one has (see Ryb & Baer 2004 for explanation and some applications):

I_j(\Gamma, N) \to e^{i\beta_j}

Geometric phase and quantization of cyclotron motion

An electron subjected to a magnetic field B moves on a circular (cyclotron) orbit.[note 1] Classically, any cyclotron radius R_c is acceptable. Quantum-mechanically, only discrete energy levels (Landau levels) are allowed, and since R_c is related to the electron's energy, this corresponds to quantized values of R_c. The energy quantization condition obtained by solving Schrödinger's equation reads, for example,

E = (n+\alpha)\hbar\omega_c, \quad \alpha = 1/2

for free electrons (in vacuum) or

E = v\sqrt{2(n+\alpha) e B \hbar}, \quad \alpha = 0

for electrons in graphene, where n = 0, 1, 2, \ldots.[note 2] Although the derivation of these results is not difficult, there is an alternative way of deriving them which offers in some respects better physical insight into the Landau level quantization.
This alternative way is based on the semiclassical Bohr-Sommerfeld quantization condition

\hbar\oint d\mathbf{r}\cdot \mathbf{k} - e\oint d\mathbf{r}\cdot\mathbf{A} + \hbar\gamma = 2\pi\hbar(n+1/2)

which includes the geometric phase \gamma picked up by the electron while it executes its (real-space) motion along the closed loop of the cyclotron orbit.[9] For free electrons, \gamma = 0, while \gamma = \pi for electrons in graphene. It turns out that the geometric phase is directly linked to \alpha = 1/2 for free electrons and \alpha = 0 for electrons in graphene.

Notes

1. ^ For simplicity, we consider electrons confined to a plane, such as 2DEG, and a magnetic field perpendicular to the plane.
2. ^ \omega_c = eB/m is the cyclotron frequency (for free electrons) and v is the Fermi velocity (of electrons in graphene).

References

1. ^ a b Solem, J. C.; Biedenharn, L. C. (1993). "Understanding geometrical phases in quantum mechanics: An elementary example". Foundations of Physics 23 (2): 185–195.
2. ^ Pancharatnam, S. (1956). "Generalized Theory of Interference, and Its Applications. Part I. Coherent Pencils". Proc. Indian Acad. Sci. A 44: 247–262. doi:10.1007/BF03046050.
3. ^ Berry, M. V. (1984). "Quantal Phase Factors Accompanying Adiabatic Changes". Proceedings of the Royal Society A 392 (1802): 45–57. Bibcode:1984RSPSA.392...45B. doi:10.1098/rspa.1984.0023.
4. ^ Xiao, D.; Chang, M.-C.; Niu, Q. (2010). "Berry phase effects on electronic properties". Rev. Mod. Phys. 82: 1959.
5. ^ von Bergmann, Jens; von Bergmann, HsingChi (2007). "Foucault pendulum through basic geometry". Am. J. Phys. 75 (10): 888–892. Bibcode:2007AmJPh..75..888V. doi:10.1119/1.2757623.
6. ^ Sinitsyn, N. A.; Nemenman, I. (2007). "The Berry phase and the pump flux in stochastic chemical kinetics". Europhys. Lett. 77 (5): 58001. arXiv:q-bio/0612018. Bibcode:2007EL.....7758001S. doi:10.1209/0295-5075/77/58001.
7. ^ Ning, C. Z.; Haken, H. (1992). "Geometrical phase and amplitude accumulations in dissipative systems with cyclic attractors". Phys. Rev. Lett. 68 (14): 2109–2122. Bibcode:1992PhRvL..68.2109N. doi:10.1103/PhysRevLett.68.2109.
8. ^ Ning, C. Z.; Haken, H. (1992). "The geometric phase in nonlinear dissipative systems". Mod. Phys. Lett. B 6 (25): 1541–1568. Bibcode:1992MPLB....6.1541N. doi:10.1142/S0217984992001265.
9. ^ For a tutorial, see Jiamin Xue, "Berry phase and the unconventional quantum Hall effect in graphene" (2013).

Further reading

• Berry, Michael V. (1988). "The geometric phase". Scientific American 259 (6): 26–34.
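The Pancharatnam product of overlaps described in the molecular section above is also a practical numerical recipe. The following sketch (Python with NumPy; the spin-½ Hamiltonian H = B·σ/2 with the field swept around a cone is my choice of example, not drawn from the article) accumulates the overlaps of the instantaneous ground state around a closed loop; for spin-½ the magnitude of the resulting phase should equal half the solid angle swept by the field, up to sign and orientation conventions:

```python
import numpy as np

# Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def ground_state(theta, phi):
    """Lowest eigenvector of H = B.sigma/2 for a unit field B(theta, phi)."""
    B = [np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)]
    _, vecs = np.linalg.eigh((B[0] * sx + B[1] * sy + B[2] * sz) / 2)
    return vecs[:, 0]

theta, N = 0.4 * np.pi, 4000                     # cone angle, loop resolution
phis = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
states = [ground_state(theta, p) for p in phis]
states.append(states[0])          # close the loop with the very same vector

# Pancharatnam product of overlaps: the arbitrary phase of each eigenvector
# cancels pairwise around the closed loop, leaving the geometric phase.
prod = 1.0 + 0.0j
for bra, ket in zip(states[:-1], states[1:]):
    prod *= np.vdot(bra, ket)

berry = np.angle(prod)                           # beta in the text's notation
solid_angle = 2 * np.pi * (1 - np.cos(theta))
print(berry, solid_angle / 2)                    # equal magnitudes (mod 2*pi)
```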
Spin-½

For a mathematical treatment of spin-½, see Spinor.

In quantum mechanics, spin is an intrinsic property of all elementary particles. Fermions, the particles that constitute ordinary matter, have half-integer spin. All known elementary fermions have a spin of ½.[1][2][3]

[Figure: Heuristic depiction of spin angular momentum cones for a spin-½ particle.]

Particles having net spin ½ include the proton, neutron, electron, neutrino, and quarks. The dynamics of spin-½ objects cannot be accurately described using classical physics; they are among the simplest systems which require quantum mechanics to describe them. As such, the study of the behavior of spin-½ systems forms a central part of quantum mechanics. A spin-½ particle is characterized by an angular momentum quantum number for spin s of ½. In solutions of the Schrödinger equation, angular momentum is quantized according to this number, so that the total spin angular momentum is

S = \sqrt{\tfrac{1}{2}\left(\tfrac{1}{2}+1\right)}\; \hbar = \frac{\sqrt{3}}{2}\hbar.

However, the observed fine structure when the electron is observed along one axis, such as the z-axis, is quantized in terms of a magnetic quantum number, which can be viewed as a quantization of a vector component of this total angular momentum, which can have only the values of ±½ħ. Note that these values for angular momentum are functions only of the reduced Planck constant (the angular momentum of any photon), with no dependence on mass or charge.[4]

Stern–Gerlach experiment

The necessity of introducing half-integral spin goes back experimentally to the results of the Stern–Gerlach experiment. A beam of atoms is run through a strong heterogeneous magnetic field, which then splits into N parts depending on the intrinsic angular momentum of the atoms. It was found that for silver atoms, the beam was split in two; the ground state therefore could not be integral, because even if the intrinsic angular momentum of the atoms were as small as possible, 1, the beam would be split into 3 parts, corresponding to atoms with Lz = −1, 0, and +1. The conclusion was that silver atoms had net intrinsic angular momentum of ½.[1]

General properties

Spin-½ objects are all fermions (a fact explained by the spin-statistics theorem) and satisfy the Pauli exclusion principle. Spin-½ particles can have a permanent magnetic moment along the direction of their spin, and this magnetic moment gives rise to electromagnetic interactions that depend on the spin. One such effect that was important in the discovery of spin is the Zeeman effect, the splitting of a spectral line into several components in the presence of a static magnetic field.

Unlike in more complicated quantum mechanical systems, the spin of a spin-½ particle can be expressed as a linear combination of just two eigenstates, or eigenspinors. These are traditionally labeled spin up and spin down. Because of this, the quantum-mechanical spin operators can be represented as simple 2 × 2 matrices. These matrices are called the Pauli matrices.

Creation and annihilation operators can be constructed for spin-½ objects; these obey the same commutation relations as other angular momentum operators.

Connection to the uncertainty principle

One consequence of the generalized uncertainty principle is that the spin projection operators (which measure the spin along a given direction like x, y, or z) cannot be measured simultaneously.
Physically, this means that it is ill-defined what axis a particle is spinning about. A measurement of the z component of spin destroys any information about the x and y components that might previously have been obtained.

Complex phase

[Figure: A single point in space can spin continuously without becoming tangled. After a 360-degree rotation the spiral flips between clockwise and counterclockwise orientations; it returns to its original configuration only after spinning a full 720 degrees.]

When a spinor is rotated by 360 degrees (one full turn), it transforms to its negative, and then after a further rotation of 360 degrees it transforms back to its initial value again. This is because in quantum theory the state of a particle or system is represented by a complex probability amplitude (wavefunction) Ψ, and when the system is measured, the probability of finding the system in the state Ψ equals |Ψ|² = Ψ*Ψ, the square of the absolute value of the amplitude. Suppose a detector that can be rotated measures a particle in which the probabilities of detecting some state are affected by the rotation of the detector. When the system is rotated through 360 degrees, the observed output and physics are the same as initially, but the amplitudes are changed for a spin-½ particle by a factor of −1 or a phase shift of half of 360 degrees. When the probabilities are calculated, the −1 is squared, (−1)² = 1, so the predicted physics is the same as in the starting position. Also, in a spin-½ particle there are only two spin states and the amplitudes for both change by the same −1 factor, so the interference effects are identical, unlike the case for higher spins.

The complex probability amplitudes are something of a theoretical construct which cannot be directly observed. If the probability amplitudes rotated by the same amount as the detector, then they would have changed by a factor of −1 when the equipment was rotated by 180 degrees, which when squared would predict the same output as at the start, but experiments show this to be wrong. If the detector is rotated by 180 degrees, the result with spin-½ particles can differ from what it would be if not rotated; hence the factor of a half is necessary to make the predictions of the theory match the experiments.

Mathematical description

NRQM (non-relativistic quantum mechanics)

The quantum state of a spin-½ particle can be described by a two-component complex-valued vector called a spinor. Observable states of the particle are then found by the spin operators Sx, Sy, and Sz, and the total spin operator S. When spinors are used to describe the quantum states, the three spin operators (Sx, Sy, Sz) can be described by 2×2 matrices called the Pauli matrices whose eigenvalues are ±ħ/2. For example, the spin projection operator Sz affects a measurement of the spin in the z direction:

S_z = \frac{\hbar}{2} \sigma_z = \frac{\hbar}{2} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}

The two eigenvalues of Sz, ±ħ/2, then correspond to the following eigenspinors:

\chi_+ = \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \left \vert {s_z = +\textstyle\frac 1 2} \right \rang = | {\uparrow} \rang = | 0 \rang

\chi_- = \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \left \vert {s_z = -\textstyle\frac 1 2} \right \rang = | {\downarrow} \rang = | 1 \rang.

These vectors form a complete basis for the Hilbert space describing the spin-½ particle. Thus, linear combinations of these two states can represent all possible states of the spin, including in the x and y directions.
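As a quick numerical check of the statements above (a sketch in Python with NumPy, with ħ set to 1; it is not part of the original article), one can confirm the eigenvalues ±ħ/2 and the eigenspinors χ±, and compute a measurement probability for a superposition:

```python
import numpy as np

hbar = 1.0                                   # natural units

sigma_z = np.array([[1, 0],
                    [0, -1]], dtype=complex)
Sz = hbar / 2 * sigma_z                      # spin projection operator S_z

vals, vecs = np.linalg.eigh(Sz)
print(vals)        # [-0.5  0.5]: the eigenvalues -hbar/2 and +hbar/2
print(vecs)        # columns are the eigenspinors chi_- and chi_+

# Any spin state is a linear combination of the two eigenspinors.
chi_up = np.array([1, 0], dtype=complex)     # chi_+ = |0>
chi_dn = np.array([0, 1], dtype=complex)     # chi_- = |1>
psi = (chi_up + 1j * chi_dn) / np.sqrt(2)
print(abs(np.vdot(chi_up, psi))**2)          # P(s_z = +1/2) = 0.5
```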
The ladder operators are:

S_+ = \hbar \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad S_- = \hbar \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}

Since S_\pm = S_x \pm i S_y, it follows that S_x = \frac{1}{2}(S_+ + S_-) and S_y = \frac{1}{2i}(S_+ - S_-). Thus:

S_x = \frac{\hbar}{2} \sigma_x = \frac{\hbar}{2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}

S_y = \frac{\hbar}{2} \sigma_y = \frac{\hbar}{2} \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}

Their normalized eigenspinors can be found in the usual way. For Sx, they are:

\chi^{(x)}_+ = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \left \vert {s_x = +\textstyle\frac 1 2} \right \rang

\chi^{(x)}_- = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \end{bmatrix} = \left \vert {s_x = -\textstyle\frac 1 2} \right \rang

For Sy, they are:

\chi^{(y)}_+ = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ i \end{bmatrix} = \left \vert {s_y = +\textstyle\frac 1 2} \right \rang

\chi^{(y)}_- = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -i \end{bmatrix} = \left \vert {s_y = -\textstyle\frac 1 2} \right \rang

RQM (relativistic quantum mechanics)

While NRQM defines spin-½ with 2 dimensions in Hilbert space with dynamics that are described in 3-dimensional space and time, RQM defines the spin with 4 dimensions in Hilbert space and dynamics described by 4-dimensional space-time. As a consequence of the four-dimensional nature of space-time in relativity, relativistic quantum mechanics uses 4×4 matrices to describe spin operators and observables.

Spin as a consequence of combining quantum theory and special relativity

When physicist Paul Dirac tried to modify the Schrödinger equation so that it was consistent with Einstein's theory of relativity, he found it was only possible by including matrices in the resulting Dirac equation, implying the wave must have multiple components leading to spin.[5]

References

3. ^ Peleg, Y.; Pnini, R.; Zaarur, E.; Hecht, E. (2010). Quantum Mechanics (2nd ed.). McGraw Hill. ISBN 978-0-071-62358-2.
4. ^ Nave, C. R. (2005). "Electron Spin". Georgia State University. And internal links from the first section therein.
5. ^ McMahon, D. (2008). Quantum Field Theory. McGraw Hill (USA). ISBN 978-0-07-154382-8.
Quantum Computing

First published Sun Dec 3, 2006; substantive revision Tue Jun 16, 2015

Combining physics, mathematics and computer science, quantum computing has developed in the past two decades from a visionary idea to one of the most fascinating areas of quantum mechanics. The recent excitement in this lively and speculative domain of research was triggered by Peter Shor (1994), who showed how a quantum algorithm could exponentially “speed-up” classical computation and factor large numbers into primes much more rapidly (at least in terms of the number of computational steps involved) than any known classical algorithm. Shor’s algorithm was soon followed by several other algorithms that aimed to solve combinatorial and algebraic problems, and in the last few years theoretical study of quantum systems serving as computational devices has achieved tremendous progress. Common belief has it that the implementation of Shor’s algorithm on a large scale quantum computer would have devastating consequences for current cryptography protocols which rely on the premise that all known classical worst-case algorithms for factoring take time exponential in the length of their input (see, e.g., Preskill 2005). Consequently, experimentalists around the world are engaged in tremendous attempts to tackle the technological difficulties that await the realization of such a large scale quantum computer. But regardless of whether these technological problems can be overcome (Unruh 1995, Ekert and Jozsa 1996, Haroche and Raimond 1996), it is noteworthy that no proof exists yet for the general superiority of quantum computers over their classical counterparts.

The philosophical interest in quantum computing is threefold: First, from a social-historical perspective, quantum computing is a domain where experimentalists find themselves ahead of their fellow theorists. Indeed, quantum mysteries such as entanglement and nonlocality were historically considered a philosophical quibble, until physicists discovered that these mysteries might be harnessed to devise new efficient algorithms. But while the technology for isolating 5 or even 7 qubits (the basic unit of information in the quantum computer) is now within reach (Schrader et al. 2004, Knill et al. 2000), only a handful of quantum algorithms exist, and the question whether these can solve classically intractable computational problems is still open. Next, from a more philosophical perspective, advances in quantum computing may yield foundational benefits. It may turn out that the technological capabilities that allow us to isolate quantum systems by shielding them from the effects of decoherence for a period of time long enough to manipulate them will also allow us to make progress in some fundamental problems in the foundations of quantum theory itself. Indeed, the development and the implementation of efficient quantum algorithms may help us understand better the border between classical and quantum physics, hence elucidate an important problem, namely, the measurement problem, that so far resists a solution. Finally, the idea that abstract mathematical concepts such as complexity and (in)tractability may not only be translated into physics, but also re-written by physics bears directly on the autonomous character of computer science and the status of its theoretical entities—the so-called “computational kinds”. As such it is also relevant to the long-standing philosophical debate on the relationship between mathematics and the physical world.
1. A Brief History of the Field

1.1 Physical Computational Complexity

The mathematical model for a “universal” computer was defined long before the invention of computers and is called the Turing machine (Turing 1936). A Turing machine consists of an unbounded tape, a head that is capable of reading from the tape and of writing onto it and can occupy one of a finite number of internal states, and an instruction table (a transition function). This table, given the head’s initial state and the input it reads from the tape in that state, determines (a) the symbol that the head will write on the tape, (b) the internal state it will occupy, and (c) the displacement of the head on the tape. In 1936 Turing showed that since one can encode the instruction table of a Turing machine \(T\) and express it as a binary number \(\#(T)\), there exists a universal Turing machine \(U\) that can simulate the instruction table of any Turing machine on any given input with at most a polynomial slowdown (i.e., the number of computational steps required by \(U\) to execute the original program \(T\) on the original input will be polynomially bounded in \(\#(T)\)). That the Turing machine model (what we nowadays call “an algorithm”) captures the concept of computability in its entirety is the essence of the Church-Turing thesis, according to which any effectively calculable function can be computed using a Turing machine. Admittedly, no counterexample to this thesis (which is the result of convergent ideas of Turing, Post, Kleene and Church) has yet been found. But since it identifies the class of computable functions with the class of those functions which are computable using a Turing machine, this thesis involves both a precise mathematical notion and an informal and intuitive notion, hence cannot be proved or disproved. Simple cardinality considerations show, however, that not all functions are Turing-computable (the set of all Turing machines is countable, while the set of all functions from the natural numbers to the natural numbers is not), and the discovery of this fact came as a complete surprise in the 1930s (Davis 1958).

Computability, or the question whether a function can be computed, is not the only question that interests computer scientists. The cost of computing a function is also of great importance, and this cost, also known as computational complexity, is measured naturally in the physical resources (e.g., time, space, energy) invested in order to solve the computational problem at hand. Computer scientists classify computational problems according to the way their cost function behaves as a function of their input size, \(n\) (the number of bits required to store the input) and in particular, whether it increases exponentially or polynomially with \(n\). Tractable problems are those which can be solved in polynomial cost, while intractable problems are those which can only be solved in an exponential cost (the former solutions are commonly regarded as efficient although an exponential-time algorithm could turn out to be more efficient than a polynomial-time algorithm for some range of input sizes). If we further relax the requirement that a solution to a computational problem be always correct, and allow probabilistic algorithms with a negligible probability of error, we can dramatically reduce the computational cost. Probabilistic algorithms are non-deterministic Turing machines whose transition function can randomly change the head’s configuration in one of several possible ways.
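To make the machine model concrete, here is a minimal sketch of a deterministic Turing machine driven by an instruction table of the kind just described (in Python; the blank symbol, the halting convention, and the toy table are my own illustrative choices, not part of the entry):

```python
def run_turing_machine(table, tape, state='start', max_steps=10_000):
    """Execute an instruction table of the form:
    (state, read symbol) -> (symbol to write, next state, head move -1/0/+1)."""
    cells = dict(enumerate(tape))     # unbounded tape as a sparse dict
    head, steps = 0, 0
    while state != 'halt' and steps < max_steps:
        symbol = cells.get(head, '_')            # '_' is the blank symbol
        write, state, move = table[(state, symbol)]
        cells[head] = write
        head += move
        steps += 1
    return ''.join(cells[i] for i in sorted(cells)).strip('_')

# Toy instruction table: complement a binary string, then halt on blank.
flip = {('start', '0'): ('1', 'start', +1),
        ('start', '1'): ('0', 'start', +1),
        ('start', '_'): ('_', 'halt', 0)}

print(run_turing_machine(flip, '10110'))   # -> '01001'
```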
The most famous example of a probabilistic algorithm is the one that decides whether a given natural number is prime in a polynomial number of steps (Rabin 1976). Using this notion we can further refine the distinction between tractable and intractable problems. The class \(\mathbf{P}\) (for Polynomial) is the class that contains all the computational decision problems that can be solved by a deterministic Turing machine at a polynomial cost. The class \(\mathbf{NP}\) (for Non-deterministic Polynomial) is the class that contains all those computational decision problems whose proposed solution (“guessed” by the non-deterministic Turing machine) can be verified by a deterministic Turing machine at polynomial cost. The most famous problems in \(\mathbf{NP}\) are called “NP-complete”. The term “complete” designates the fact that these problems stand or fall together: Either they are all tractable, or none of them is! If we knew how to solve an NP-complete problem efficiently (i.e., with a polynomial cost) we could solve any problem in \(\mathbf{NP}\) with only polynomial slowdown (Cook 1971). Today we know of hundreds of examples of NP-complete problems (Garey and Johnson 1979), all of which are reducible one to another with at most a polynomial slowdown, and since the best known algorithm for any of these problems is exponential, the widely believed conjecture is that there is no polynomial algorithm that can solve them. But while clearly \(\mathbf{P} \subseteq \mathbf{NP}\), proving or disproving the conjecture that \(\mathbf{P} \ne \mathbf{NP}\) remains perhaps one of the most important open questions in computer science and complexity theory.

Although the original Church-Turing thesis involved the abstract mathematical notion of computability, physicists as well as computer scientists often interpret it as saying something about the scope and limitations of physical computing machines. Wolfram (1985) claims that any physical system can be simulated (to any degree of approximation) by a universal Turing machine, and that complexity bounds on Turing machine simulations have physical significance. For example, if the computation of the minimum energy of some system of \(n\) particles requires at least an exponentially increasing number of steps in \(n\), then the actual relaxation of this system to its minimum energy state will also take an exponential time. Aharonov (1998) strengthens this thesis (in the context of showing its putative incompatibility with quantum mechanics) when she says that a probabilistic Turing machine can simulate any reasonable physical device at polynomial cost. Further examples for this thesis can be found in Copeland (1996). In order for the physical Church-Turing thesis to make sense we have to relate the space and time parameters of physics to their computational counterparts: memory capacity and number of computation steps, respectively. There are various ways to do that, leading to different formulations of the thesis (Pitowsky 1990). For example, one can encode the set of instructions of a universal Turing machine and the state of its infinite tape in the binary development of the position coordinates of a single particle. Consequently, one can physically ‘realize’ a universal Turing machine as a billiard ball with hyperbolic mirrors (Moore 1990, Pitowsky 1996). For the most intuitive connection between abstract Turing machines and physical devices see the pioneering work of Gandy (1980), simplified later by Sieg and Byrnes (1999).
It should be stressed that there is no relation between the original Church-Turing thesis and its physical version (Pitowsky and Shagrir 2003), and while the former concerns the concept of computation that is relevant to logic (since it is strongly tied to the notion of proof which requires validation), it does not analytically entail that all computations should be subject to validation. Indeed, there is a long historical tradition of analog computations which use continuous physical processes (Dewdney 1984), and the output of these computations is validated either by repetitive “runs” or by validating the physical theory that presumably governs the behavior of the analog computer.

1.2 Physical “Short-cuts” of Computation

Do physical processes exist which contradict the physical Church-Turing thesis? Apart from analog computation, there exist at least two counter-examples to this thesis that purport to show that the notion of recursion, or Turing-computability, is not a natural physical property (Pour-El and Richards 1981, Pitowsky 1990, Hogarth 1994). Although the physical systems involved (a specific initial condition for the wave equation in three dimensions and an exotic solution to Einstein’s field equations, respectively) are somewhat contrived, recent years saw the emergence of the thriving school of “hypercomputation” that aspires to extend the limited examples of physical “hypercomputers” and in so doing to physically “compute” the non-Turing-computable (for a review see Copeland 2002; for a criticism—Davis 2003). Quantum hypercomputation is rarely discussed in the literature (see, e.g., Calude et al. 2003), but the most concrete attempt to harness quantum theory to compute the non-computable is the suggestion to use the quantum adiabatic algorithm (see below) to solve Hilbert’s Tenth Problem (Kieu 2002, 2004)—a Turing-undecidable problem equivalent to the halting problem. Recent criticism, however, has exposed the unphysical character of the alleged quantum adiabatic hypercomputer (see Smith 2005, Hodges 2005, and Hagar and Korolev 2007).

Setting aside the hype around “hypercomputers”, even if we restrict ourselves only to Turing-computable functions and focus on computational complexity, we can still find many physical models that purport to display “short-cuts” in computational resources. Consider, e.g., the DNA model of computation that was claimed (Adleman 1994, Lipton 1995) to solve NP-complete problems in polynomial time. A closer inspection shows that the cost of the computation in this model is still exponential since the number of molecules in the physical system grows exponentially with the size of the problem. Or take an allegedly instantaneous solution to another NP-complete problem using a construction of rods and balls (Vergis et al. 1986) that unfortunately ignores the accumulating time-delays in the rigid rods that result in an exponential overall slowdown. Another example is the physical simulation of the factorization of numbers into primes that uses only polynomial resources in time and space, but requires an exponentially increasing precision. It thus appears that all these models cannot serve as counter-examples to the physical Church-Turing thesis (as far as complexity is concerned) since they all require some exponential physical resource. Note, however, that all these models are based on classical physics, hence the unavoidable question: Can the shift to quantum physics allow us to find “short-cuts” in computational resources?
The quest for the quantum computer began with the possibility of giving a positive answer to this question.

1.3 Milestones

The idea of a computational device based on quantum mechanics was explored already in the 1970s by physicists and computer scientists. As early as 1969 Stephen Wiesner suggested quantum information processing as a possible way to better accomplish cryptologic tasks. But the first four published papers on quantum information (Wiesner published his only in 1983) belong to Alexander Holevo (1973), R.P. Poplavskii (1975), Roman Ingarden (1976) and Yuri Manin (1980). Better known are contributions made in the early 1980s by Charles H. Bennett of the IBM Thomas J. Watson Research Center, Paul A. Benioff of Argonne National Laboratory in Illinois, David Deutsch of the University of Oxford, and the late Richard P. Feynman of the California Institute of Technology. The idea emerged when scientists were investigating the fundamental physical limits of computation. If technology continued to abide by “Moore’s Law” (the observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every 18 months since the integrated circuit was invented), then the continually shrinking size of circuitry packed onto silicon chips would eventually reach a point where individual elements would be no larger than a few atoms. But since the physical laws that govern the behavior and properties of the putative circuit at the atomic scale are inherently quantum mechanical in nature, not classical, the natural question arose whether a new kind of computer could be devised based on the principles of quantum physics.

Inspired by Ed Fredkin’s ideas on reversible computation (see Hagar forthcoming), Feynman was among the first to attempt to provide an answer to this question by producing an abstract model in 1982 that showed how a quantum system could be used to do computations. He also explained how such a machine would be able to act as a simulator for quantum physics. Feynman also conjectured that any classical computer that will be harnessed for this task will do so only inefficiently, incurring an exponential slowdown in computation time. In 1985 David Deutsch proposed the first universal quantum Turing machine and paved the way to the quantum circuit model. The young and thriving domain also attracted philosophers’ attention. In 1983 David Albert showed how a quantum mechanical automaton behaves remarkably differently from a classical automaton, and in 1990 Itamar Pitowsky raised the question whether the superposition principle will allow quantum computers to solve NP-complete problems. He also stressed that although one could in principle ‘squeeze’ information of exponential complexity into polynomially many quantum states, the real problem lay in the efficient retrieval of this information.

Progress in quantum algorithms began in the 1990s, with the discovery of the Deutsch-Jozsa oracle (1992) and of Simon’s oracle (1994). The latter supplied the basis for Shor’s algorithm for factoring. Published in 1994, this algorithm marked a ‘phase transition’ in the development of quantum computing and sparked a tremendous interest even outside the physics community. Soon after, the first experimental realization of the quantum CNOT gate with trapped ions was proposed by Cirac and Zoller (1995). In 1995, Peter Shor and Andrew Steane proposed (independently) the first scheme for quantum error-correction.
In that same year the first realization of a quantum logic gate was done in Boulder, Colorado, following Cirac and Zoller’s proposal. In 1996, Lov Grover from Bell Labs invented the quantum search algorithm which yields a quadratic “speed-up” compared to its classical counterpart. A year later the first NMR model for quantum computation was proposed, based on nuclear magnetic resonance techniques. This technique was realized in 1998 with a 2-qubit register, and was scaled up to 7 qubits in the Los Alamos National Lab in 2000.

Starting from 2000 the field saw tremendous growth. New paradigms of quantum algorithms have appeared, such as adiabatic algorithms, measurement-based algorithms, and topological-quantum-field-theory-based algorithms, as well as new physical models for realizing a large scale quantum computer with cold ion traps, quantum optics (using photons and optical cavity), condensed matter systems and solid state physics (meanwhile, the first NMR model had turned out to be a dead-end with respect to scaling; see DiVincenzo 2000). The basic questions, however, remain open even today: (1) theoretically, can quantum algorithms efficiently solve classically intractable problems? (2) operationally, can we actually realize a large scale quantum computer to run these algorithms?

2. Basics

In this section we will review the basic paradigm for quantum algorithms, namely the quantum circuit model, which is composed of the basic quantum units of information (qubits) and the basic logical manipulations thereof (quantum gates).

2.1 The Qubit

The qubit is the quantum analogue of the bit, the classical fundamental unit of information. It is a mathematical object with specific properties that can be realized physically in many different ways as an actual physical system. Just as the classical bit has a state (either 0 or 1), a qubit also has a state. Yet contrary to the classical bit, \(\lvert 0\rangle\) and \(\lvert 1\rangle\) are but two possible states of the qubit, and any linear combination (superposition) thereof is also physically possible. In general, thus, the physical state of a qubit is the superposition \(\lvert\psi \rangle = \alpha \lvert 0\rangle + \beta \lvert 1\rangle\) (where \(\alpha\) and \(\beta\) are complex numbers). The state of a qubit can be described as a vector in a two-dimensional Hilbert space, a complex vector space (see the entry on quantum mechanics). The special states \(\lvert 0\rangle\) and \(\lvert 1\rangle\) are known as the computational basis states, and form an orthonormal basis for this vector space. According to quantum theory, when we try to measure the qubit in this basis in order to determine its state, we get either \(\lvert 0\rangle\) with probability \(\lvert \alpha\rvert^2\) or \(\lvert 1\rangle\) with probability \(\lvert \beta\rvert^2\). Since \(\lvert \alpha\rvert^2 + \lvert\beta\rvert^2 = 1\) (i.e., the qubit is a unit vector in the aforementioned two-dimensional Hilbert space), we may (ignoring the overall phase factor) effectively write its state as \(\lvert \psi \rangle = \cos(\theta)\lvert 0\rangle + e^{i\phi}\sin(\theta)\lvert 1\rangle\), where the numbers \(\theta\) and \(\phi\) define a point on the unit three-dimensional sphere, as shown in the figure below. This sphere is often called the Bloch sphere, and it provides a useful means to visualize the state of a single qubit.
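Because a qubit is just a unit vector of two complex amplitudes, its measurement statistics are straightforward to emulate classically. A minimal sketch (Python with NumPy; the particular amplitudes are an arbitrary illustration, not taken from the entry):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)     # |0>
ket1 = np.array([0, 1], dtype=complex)     # |1>

alpha, beta = 0.6, 0.8j                    # |alpha|^2 + |beta|^2 = 1
psi = alpha * ket0 + beta * ket1           # the qubit state |psi>

probs = np.abs(psi)**2                     # Born rule: [|alpha|^2, |beta|^2]
print(probs)                               # [0.36, 0.64]

# Simulated measurements "collapse" the state to |0> or |1>.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10_000, p=probs)
print(outcomes.mean())                     # ~ 0.64 = |beta|^2
```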
[Figure: The Bloch sphere; the computational basis states \(\lvert 0\rangle\) and \(\lvert 1\rangle\) sit at its poles.]

Theoretically, a single qubit can store an infinite amount of information, yet when measured it yields only the classical result (0 or 1) with certain probabilities that are specified by the quantum state. In other words, the measurement changes the state of the qubit, “collapsing” it from the superposition to one of its terms. The crucial point is that unless the qubit is measured, the amount of “hidden” information it stores is conserved under the dynamic evolution (namely, Schrödinger’s equation). This feature of quantum mechanics allows one to manipulate the information stored in unmeasured qubits with quantum gates, and is one of the sources for the putative power of quantum computers.

To see why, let us suppose we have two qubits at our disposal. If these were classical bits, then they could be in four possible states (00, 01, 10, 11). Correspondingly, a pair of qubits has four computational basis states (\(\lvert 00\rangle\), \(\lvert 01\rangle\), \(\lvert 10\rangle\), \(\lvert 11\rangle)\). But while a single classical two-bit register can store these numbers only one at a time, a pair of qubits can also exist in a superposition of these four basis states, each with its own complex coefficient (whose modulus squared, interpreted as a probability, is normalized). As long as the quantum system evolves unitarily and is unmeasured, all four possible states are simultaneously “stored” in a single two-qubit quantum register. More generally, the amount of information that can be stored in a system of \(n\) unmeasured qubits grows exponentially in \(n\). The difficult task, however, is to retrieve this information efficiently.

2.2 Quantum Gates

Classical computational gates are Boolean logic gates that perform manipulations of the information stored in the bits. In quantum computing these gates are represented by matrices, and can be visualized as rotations of the quantum state on the Bloch sphere. This visualization represents the fact that quantum gates are unitary operators, i.e., they preserve the norm of the quantum state (if \(U\) is a matrix describing a single qubit gate, then \(U^{\dagger}U=I\), where \(U^{\dagger}\) is the adjoint of \(U\), obtained by transposing and then complex-conjugating \(U)\). As in the case of classical computing, where there exists a universal gate (the combinations of which can be used to compute any computable function), namely, the NAND gate which results from performing an AND gate and then a NOT gate, in quantum computing it was shown (Barenco et al., 1995) that any multiple qubit logic gate may be composed from a quantum CNOT gate (which operates on a multiple qubit by flipping or preserving the target bit given the state of the control bit, an operation analogous to the classical XOR, i.e., the exclusive OR gate) and single qubit gates.

One feature of quantum gates that distinguishes them from classical gates is that they are reversible: the inverse of a unitary matrix is also a unitary matrix, and thus a quantum gate can always be inverted by another quantum gate.

[Figure: The CNOT gate.]

Unitary gates manipulate the information stored in the quantum register, and in this sense ordinary (unitary) quantum evolution can be regarded as computation (DiVincenzo 1995 showed how a small set of single-qubit gates and a two-qubit gate is universal, in the sense that a circuit combined from this set can approximate to arbitrary accuracy any unitary transformation of \(n\) qubits).
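The gates named above are small unitary matrices, which makes them easy to experiment with numerically. A sketch (Python with NumPy, not taken from the entry) of the Hadamard and CNOT gates, a unitarity check, and the entangled state they produce from \(\lvert 00\rangle\):

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

CNOT = np.array([[1, 0, 0, 0],                        # |00> -> |00>
                 [0, 1, 0, 0],                        # |01> -> |01>
                 [0, 0, 0, 1],                        # |10> -> |11>
                 [0, 0, 1, 0]], dtype=complex)        # |11> -> |10>

# Quantum gates are unitary: U^dagger U = I.
print(np.allclose(H.conj().T @ H, np.eye(2)))         # True

# Apply H to the first qubit of |00>, then CNOT: an entangled Bell state.
ket00 = np.zeros(4, dtype=complex)
ket00[0] = 1
bell = CNOT @ np.kron(H, np.eye(2)) @ ket00
print(bell)   # (|00> + |11>)/sqrt(2): amplitudes [0.707, 0, 0, 0.707]
```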
In order to read the result of this computation, however, the quantum register must be measured. The measurement gate is a non-unitary gate that “collapses” the quantum superposition in the register onto one of its terms with the corresponding probability. Usually this measurement is done in the computational basis, but since quantum mechanics allows one to express an arbitrary state as a linear combination of basis states, provided that the states are orthonormal (a condition that ensures that the squared amplitudes sum to one, and hence can serve as probabilities), one can in principle measure the register in any arbitrary orthonormal basis. This, however, doesn’t mean that measurements in different bases are efficiently equivalent. Indeed, one of the difficulties in constructing efficient quantum algorithms stems exactly from the fact that measurement collapses the state, and some measurements are much more complicated than others.

2.3 Quantum Circuits

Quantum circuits are similar to classical computer circuits in that they consist of wires and logical gates. The wires are used to carry the information, while the gates manipulate it (note that the wires do not correspond to physical wires; they may correspond to a physical particle, e.g., a photon, moving from one location to another in space, or even to time-evolution). Conventionally, the input of the quantum circuit is assumed to be a computational basis state, usually the state consisting of all \(\lvert 0\rangle\)s. The output state of the circuit is then measured in the computational basis, or in any other arbitrary orthonormal basis. The first quantum algorithms (i.e., the Deutsch-Jozsa, Simon, Shor, and Grover algorithms) were constructed in this paradigm. Additional paradigms for quantum computing exist today that differ from the quantum circuit model in many interesting ways. So far, however, they have all been demonstrated to be computationally equivalent to the circuit model (see below), in the sense that any computational problem that can be solved by the circuit model can be solved by these new models with only a polynomial overhead in computational resources.

3. Quantum Algorithms

Algorithm design is a highly complicated task, and in quantum computing it becomes even more complicated due to the attempts to harness quantum mechanical features to reduce the complexity of computational problems and to “speed-up” computation. Before attacking this problem, we should first convince ourselves that quantum computers can be harnessed to perform standard, classical computation without any “speed-up”. In some sense this is obvious, given the belief in the universal character of quantum mechanics, and the observation that any quantum computation that is diagonal in the computational basis, i.e., that involves no interference between the qubits, is effectively classical. Yet the demonstration that quantum circuits can be used to simulate classical circuits is not straightforward (recall that the former are reversible while the latter use gates which are inherently irreversible). Indeed, quantum circuits cannot be used directly to simulate classical computation, but the latter can still be simulated on a quantum computer using an intermediate gate, namely the Toffoli gate. This gate has three input bits and three output bits, two of which are control bits, unaffected by the action of the gate. The third bit is a target bit that is flipped if both control bits are set to 1, and otherwise is left alone.
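Anticipating the reversibility and universality points made next, the Toffoli gate’s action can be sketched directly (a toy illustration in Python with numpy; the helper function is ours, introduced only for this example).

```python
import numpy as np

def toffoli(c1, c2, t):
    """Toffoli action on three classical bits: the two control bits pass
    through unchanged; the target is flipped iff both controls are 1."""
    return (c1, c2, t ^ (c1 & c2))

# Fixing the target input to 1 makes the third output bit NAND(c1, c2),
# the universal (but irreversible) classical gate.
for c1 in (0, 1):
    for c2 in (0, 1):
        assert toffoli(c1, c2, 1)[2] == 1 - (c1 & c2)

# The quantum Toffoli gate is the 8x8 permutation matrix that swaps the
# basis states |110> and |111> and leaves the other six alone.
T = np.eye(8)
T[[6, 7]] = T[[7, 6]]
assert np.allclose(T.conj().T @ T, np.eye(8))   # unitary
assert np.allclose(T @ T, np.eye(8))            # its own inverse
```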
This gate is reversible (its inverse is itself), and can be used to simulate all the elements of an irreversible classical circuit with reversible ones. Consequently, using the quantum version of the Toffoli gate (which by definition permutes the computational basis states similarly to the classical Toffoli gate), one can simulate, although rather tediously, irreversible classical logic gates with quantum reversible ones. Quantum computers are thus capable of performing any computation which a classical deterministic computer can do.

What about non-deterministic computation? Not surprisingly, a quantum computer can also simulate this type of computation by using another famous quantum gate, namely the Hadamard gate, which receives as an input the state \(\lvert 0\rangle\) and produces the state \(\bfrac{\lvert 0\rangle + \lvert 1\rangle}{\sqrt{2}}\). Measuring this output state yields \(\lvert 0\rangle\) or \(\lvert 1\rangle\) with 50/50 probability, which can be used to simulate a fair coin toss.

[Figure: The Hadamard Gate]

Obviously, if quantum algorithms could be used only to simulate classical algorithms, then the technological advancement in information storage and manipulation, encapsulated in “Moore’s law”, would have only trivial consequences for computational complexity theory, leaving the latter unaffected by the physical world. But while some computational problems will always resist quantum “speed-up” (in these problems the computation time depends on the input, and this feature leads to a violation of unitarity, hence to an effectively classical computation even on a quantum computer—see Myers 1997 and Linden and Popescu 1998), the hope is, nonetheless, that quantum algorithms may not only simulate classical ones, but that they will actually outperform the latter in some cases, and in so doing help to re-define the abstract notions of tractability and intractability and violate the physical Church-Turing thesis, at least as far as computational complexity is concerned.

3.1 Quantum-Circuit-Based Algorithms

The first quantum algorithms were designed to exploit the suitability of quantum computation for computational problems which involve oracles. Oracles are devices which are used to answer questions with a simple yes or no. The questions may be as elaborate as one can make them, the procedure that answers the questions may be lengthy, and a lot of auxiliary data may get generated while the question is being answered. Yet all that comes out of the oracle is just yes or no. The oracle architecture is very suitable for quantum computers. The reason for this is that, as stressed above, the read-out of a quantum system is probabilistic. Therefore, if one poses a question the answer to which is given in the form of a quantum state, one will have to carry out the computation on an ensemble of quantum computers to get anywhere. On the other hand, if the computation can be designed in such a way that one does get yes or no in a single measurement (and some data reduction may be required to accomplish this), then a single quantum computer and a single quantum computation run may suffice.

3.1.1 The Deutsch Oracle

This oracle (Deutsch 1989) answers the following question. Suppose we have a function \(f : \{0,1\} \rightarrow \{0,1\}\), which can be either constant or balanced. In this case, the function is constant if \(f(0) = f(1)\) and it is balanced if \(f(0) \ne f(1)\). Classically it would take two evaluations of the function to tell whether it is one or the other.
Quantumly, we can answer this question in one evaluation. The reason for this complexity reduction is, again, the superposition principle. To see why, consider the following quantum algorithm. One can prepare the input qubits of the Deutsch oracle as the superposition \(\bfrac{\lvert 0\rangle + \lvert 1\rangle}{\sqrt{2}}\) (using the Hadamard gate on \(\lvert 0\rangle)\) and the superposition \(\bfrac{\lvert 0\rangle - \lvert 1\rangle}{\sqrt{2}}\) (using the Hadamard gate on \(\lvert 1\rangle)\). The oracle is implemented with a quantum circuit which takes inputs like \(\lvert x,y\rangle\) to \(\lvert x, y\oplus f(x)\rangle\), where \(\oplus\) is addition modulo two, which is exactly what an XOR gate does. The first qubit of the output of this oracle is then fed into a Hadamard gate, and the final output of the algorithm is the state
\[ \pm\lvert f(0)\oplus f(1)\rangle\left(\frac{\lvert 0\rangle -\lvert 1\rangle}{\sqrt{2}}\right). \]
Since \(f(0)\oplus f(1)\) is 0 if the function is constant and 1 if the function is balanced, a single measurement of the first qubit of the output suffices to retrieve the answer to the question whether the function is constant or balanced. In other words, we can distinguish in one run of the algorithm between the two quantum disjunctions without finding out the truth values of the disjuncts themselves in the computation.

3.1.2 The Deutsch-Jozsa Oracle

This oracle (Deutsch and Jozsa 1992) generalizes the Deutsch oracle to a function \(f : \{0,1\}^{n} \rightarrow \{0,1\}\). We ask the same question: is the function constant or balanced? Here balanced means that the function is 0 on half of its arguments and 1 on the other half. Of course, in this case the function may be neither constant nor balanced. In that case the oracle doesn’t work: it may say yes or no, but the answer will be meaningless. Here too the algorithm allows one to evaluate a global property of the function in one measurement, because the output state is a superposition of balanced and constant states such that the balanced states all lie in a subspace orthogonal to the constant states and can therefore be distinguished from the latter in a single measurement. In contrast, the best deterministic classical algorithm would require \(\bfrac{2^{n}}{2}+1\) queries to the oracle in order to solve this problem.

3.1.3 The Simon Oracle

Suppose we have a Boolean function \(f : \{0,1\}^{n} \rightarrow \{0,1\}^{n}\). The function is supposed to be 2-to-1, i.e., for every value of \(f\) there are always two inputs \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) such that \(f (\mathbf{x}_{1}) = f (\mathbf{x}_{2})\). The function is also supposed to be periodic, meaning that there is a binary vector \(\mathbf{a}\) such that \(f (\mathbf{x}\oplus \mathbf{a}) = f (\mathbf{x})\), where \(\oplus\) designates bitwise addition modulo 2, i.e., \(1 \oplus 1 = 0\). The Simon oracle returns the period \(\mathbf{a}\) in a number of measurements linear in \(n\), which is exponentially faster than any classical algorithm (Simon 1994). Simon’s oracle reduces to Deutsch’s XOR oracle when \(n=2\), and can indeed be regarded as an extension of the latter, in the sense that a global property of a function, in this case its period, can be evaluated in an efficient number of measurements, given that the output state of the algorithm is decomposed into orthogonal subspaces, only one of which contains the solution to the problem; repeated measurements in the computational basis will thus be sufficient to determine this subspace.
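The classical side of Simon’s problem can be illustrated with a toy sketch (Python with numpy; this shows only the promise and the classical collision search, not Simon’s quantum algorithm itself): a 2-to-1 function with a hidden period is built explicitly, and the period is recovered by hunting for a collision, which in general costs on the order of \(2^{n/2}\) queries.

```python
import numpy as np

n, a = 4, 0b1011   # register size and a hidden non-zero period (toy values)
rng = np.random.default_rng(seed=1)

# Build a 2-to-1 function f on n-bit strings with f(x) = f(x XOR a),
# giving each pair {x, x XOR a} its own distinct output label.
f, labels = {}, iter(int(v) for v in rng.permutation(2 ** n))
for x in range(2 ** n):
    if x not in f:
        f[x] = f[x ^ a] = next(labels)

# Classically, all one can do is query f and wait for a collision
# f(x1) = f(x2), which reveals a = x1 XOR x2: a birthday-bound search.
seen = {}
for x in range(2 ** n):
    if f[x] in seen:
        print("collision -> period", bin(seen[f[x]] ^ x))   # 0b1011
        break
    seen[f[x]] = x
```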
Simon’s oracle is thus another example of a quantum algorithm which evaluates a disjunction seemingly without determining the truth values of its disjuncts. For more on the logical analysis of these first quantum-circuit-based algorithms see Bub (2006b).

3.1.4 Shor’s Algorithm

The three oracles just described, although demonstrating the potential superiority of quantum computers over their classical counterparts, nevertheless deal with apparently unimportant computational problems. Indeed, it is doubtful whether the research field of quantum computing would have attracted so much attention and would have evolved to its current status if its merit could be demonstrated only with these problems. But in 1994, after realizing that Simon’s oracle can be harnessed to solve a much more interesting and crucial problem, namely factoring, which lies at the heart of current cryptographic protocols such as RSA (Rivest et al. 1978), Peter Shor turned quantum computing into one of the most exciting research domains in quantum mechanics.

Shor’s algorithm (1994) exploits the ingenious number-theoretic argument that two prime factors \(p,q\) of a positive integer \(N=pq\) can be found by determining the period of a function \(f(x) = y^x\ \textrm{mod}\ N\), for any \(y \lt N\) which has no common factors with \(N\) other than 1 (Nielsen and Chuang 2000, App. 4). The period \(r\) of \(f(x)\) depends on \(y\) and \(N\). Once one knows the period, one can factor \(N\) if \(r\) is even and \(y^{\,\bfrac{r}{2}} \ne -1\) mod \(N\), which will be jointly the case with probability greater than \(\bfrac{1}{2}\) for any \(y\) chosen randomly (if not, one chooses another value of \(y\) and tries again). The factors of \(N\) are then the greatest common divisors of \(y^{\,\bfrac{r}{2}} \pm 1\) and \(N\), which can be found in polynomial time using the well-known Euclidean algorithm. In other words, Shor’s remarkable result rests on the discovery that the problem of factoring reduces to the problem of finding the period of a certain periodic function \(f: Z_{n} \rightarrow Z_{N}\), where \(Z_{n}\) is the additive group of integers mod \(n\) (note that \(f(x) = y^{x}\ \textrm{mod}\ N\), so that \(f(x+r) = f(x)\) if \(x+r \le n\); the function is periodic if \(r\) divides \(n\) exactly, otherwise it is almost periodic). That this problem can be solved efficiently by a quantum computer is demonstrated with Simon’s oracle.

Shor’s result is the most dramatic example so far of quantum “speed-up” of computation, notwithstanding the fact that factoring is believed to lie in NP but not to be NP-complete. To verify whether \(n\) is prime takes a number of steps which is polynomial in \(\log_{2}n\) (the binary encoding of a natural number \(n\) requires \(\log_{2}n\) resources). But nobody knows how to factor numbers into primes in polynomial time, not even on a probabilistic Turing machine, and the best classical algorithms we have for this problem are sub-exponential. Whether a polynomial-time classical factoring algorithm exists is yet another open problem in the theory of computational complexity. Modern cryptography and Internet security protocols, such as public-key encryption and electronic signatures, are based on these facts (Giblin 1993): it is easy to find large prime numbers fast, and it is hard to factor large composite numbers in any reasonable amount of time. The discovery that quantum computers can solve factoring in polynomial time has had, therefore, a dramatic effect.
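The classical bookkeeping in this reduction is simple enough to sketch (Python; the period is found here by brute force, standing in for the quantum core of the algorithm, and the function name is ours, introduced for illustration).

```python
from math import gcd

def factor_from_period(N, y):
    """Shor's classical reduction, as described above: find the period r
    of f(x) = y^x mod N (here by brute force -- the one step a quantum
    computer would perform efficiently), then read the factors off
    gcd(y^(r/2) +/- 1, N)."""
    assert gcd(y, N) == 1
    r, acc = 1, y % N
    while acc != 1:                    # smallest r with y^r = 1 (mod N)
        acc = (acc * y) % N
        r += 1
    if r % 2 == 1 or pow(y, r // 2, N) == N - 1:
        return None                    # unlucky y; choose another and retry
    return gcd(pow(y, r // 2) - 1, N), gcd(pow(y, r // 2) + 1, N)

# Toy example: N = 15, y = 7 has period r = 4 (7^4 = 2401 = 1 mod 15);
# gcd(7^2 - 1, 15) = 3 and gcd(7^2 + 1, 15) = 5 recover the prime factors.
print(factor_from_period(15, 7))       # (3, 5)
```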
The implementation of the algorithm on a physical machine would have economic, as well as scientific, consequences.

3.1.5 Grover’s Algorithm

Suppose you have met someone who kept her name secret, but revealed her telephone number to you. Can you find out her name using her number and a phone directory? In the worst case, if there are \(n\) entries in the directory, the computational resources required will be linear in \(n\). Grover (1996) showed how this task, namely, searching an unstructured database, could be done with a quantum algorithm with complexity of order \(\sqrt{n}\). Agreed, this “speed-up” is more modest than Shor’s, since searching an unstructured database belongs to the class \(\mathbf{P}\), but contrary to Shor’s case, where the classical complexity of factoring is still unknown, here the superiority of the quantum algorithm, however modest, is definitely provable. That this quadratic “speed-up” is also the optimal quantum “speed-up” possible for this problem was proved by Bennett, Bernstein, Brassard and Vazirani (1997).

Although the purpose of Grover’s algorithm is usually described as “searching a database”, it may be more accurate to describe it as “inverting a function”. Roughly speaking, if we have a function \(y=f(x)\) that can be evaluated on a quantum computer, Grover’s algorithm allows us to calculate \(x\) when given \(y\). Inverting a function is related to searching a database because we could come up with a function that produces a particular value of \(y\) if \(x\) matches a desired entry in a database, and another value of \(y\) for other values of \(x\). The applications of this algorithm are far-reaching (over and above finding the name of the mystery ‘date’ above). For example, it can be used to determine efficiently the number of solutions to an \(N\)-item search problem, hence to perform exhaustive searches on a class of solutions to an NP-complete problem and substantially reduce the computational resources required for solving it.

3.2 Adiabatic Algorithms

More than a decade has passed since the discovery of the first quantum algorithm, but so far little progress has been made with respect to the “Holy Grail” of solving an NP-complete problem with the quantum-circuit model. As stressed above, Shor’s algorithm stands alone in its exponential “speed-up”, yet while no efficient classical algorithm for factoring is known to exist, there is also no proof that such an algorithm doesn’t or cannot exist. In 2000 a group of physicists from MIT and Northeastern University (Farhi et al. 2000) proposed a novel paradigm for quantum computing that differs from the circuit model in several interesting ways. Their goal was to try to solve with this algorithm an instance of satisfiability—deciding whether a proposition in the propositional calculus has a satisfying truth assignment—which is one of the most famous NP-complete problems (Cook 1971).

According to the adiabatic theorem (e.g., Messiah 1961), and given certain specific conditions, a quantum system remains in its lowest energy state, known as the ground state, along an adiabatic transformation in which the system is deformed slowly and smoothly from an initial Hamiltonian to a final Hamiltonian (as an illustration, think of moving a baby who is sleeping in a cradle from the living room to the bedroom: if the transition is done slowly and smoothly enough, and if the baby is a sound sleeper, then it will remain asleep during the whole transition).
The most important condition in this theorem involves the energy gap between the ground state and the next excited state (in our analogy, this gap reflects how sound asleep the baby is). Being inversely related to the evolution time \(T\) (the smaller the gap, the longer the evolution must take in order to remain adiabatic), this gap controls the latter. If this gap exists during the entire evolution (i.e., there is no level crossing between the energy states of the system), the theorem dictates that in the adiabatic limit (when \(T\rightarrow \infty)\) the system will remain in its ground state. In practice, of course, \(T\) is always finite, but the longer it is, the less likely it is that the system will deviate from its ground state during the time evolution.

The crux of the quantum adiabatic algorithm, which rests on the adiabatic theorem, lies in the possibility of encoding a specific instance of a given decision problem in a certain Hamiltonian (this can be done by capitalizing on the well-known fact that any decision problem can be derived from an optimization problem by incorporating into it a numerical bound as an additional parameter). One then starts the system in the ground state of another Hamiltonian which is easy to construct, and slowly evolves the system in time, deforming it towards the desired Hamiltonian. According to the quantum adiabatic theorem, and given the gap condition, the result of such a physical process is another energy ground state that encodes the solution to the desired decision problem. The adiabatic algorithm is thus a rather ‘laid back’ algorithm: one needs only to start the system in its ground state, deform it adiabatically, and measure its final ground state in order to retrieve the desired result. But whether or not this algorithm yields the desired “speed-up” depends crucially on the behavior of the energy gap as the number of degrees of freedom in the system increases. If this gap decreases exponentially with the size of the input, then the evolution time of the algorithm will increase exponentially; if the gap decreases polynomially, the decision problem so encoded could be solved efficiently in polynomial time. Although physicists have been studying spectral gaps for almost a century, they have never done so with quantum computing in mind. How this gap behaves in general thus remains an open empirical question.

The quantum adiabatic algorithm holds much promise (Farhi et al. 2001), and it was recently shown (Aharonov et al. 2004) to be polynomially equivalent to the circuit model (that is, each model can simulate the other with only polynomial, i.e., modest, overhead of resources, namely, number of qubits and computational steps), but the caveat that is sometimes left unmentioned is that its application to an intractable computational problem may sometimes require solving another, equally intractable task (this general worry was first raised by a philosopher; see Pitowsky 1990). Indeed, Reichardt (2004) has shown that there are simple problems for which the algorithm will get stuck in a local minimum, in which there are exponentially many eigenvalues all exponentially close to the ground state energy; applying the adiabatic theorem, even for these simple problems, will then take exponential time, and we are back to square one.

3.3 Measurement-Based Algorithms

Measurement-based algorithms differ from the circuit model in that instead of applying unitary evolution as the basic mechanism for the manipulation of information, these algorithms use only non-unitary measurements as their computational steps.
These models are especially interesting from a foundational perspective because they have no evident classical analogues and because they offer a new insight on the role of entanglement in quantum computing (Jozsa 2005). They may also have interesting consequences for experimental considerations, suggesting a different kind of computer architecture which is more fault tolerant (Nielsen and Dawson 2004).

Measurement-based algorithms fall into two categories. The first is teleportation quantum computing (based on an idea of Gottesman and Chuang 1999, and developed into a computational model by Nielsen 2003 and Leung 2003). The second is the “one-way quantum computer”, also known as the “cluster state” model (Raussendorf and Briegel 2000). The interesting feature of these models is that they are able to represent arbitrary quantum dynamics, including unitary dynamics, with basic non-unitary measurements. The measurements are performed on a pool of highly entangled states (the amount of entanglement needed is still under dispute), and are adaptive, i.e., each measurement is done in a different basis which is calculated classically, given the result of the earlier measurement (the first model uses measurements of 2 or more qubits, while the second uses only single-qubit measurements; in the first model only bi-partite entanglement is used, while the second has multi-partite entanglement across all qubits).

Such exotic models might seem redundant, especially since they have been shown to be polynomially equivalent to the standard circuit model in terms of computational complexity (Raussendorf et al. 2003). Their merit, however, lies in the foundational lessons they drive home: with these models the separation between the classical part (i.e., the calculation of the next measurement-basis) and the quantum parts (i.e., the measurements and the entangled states) of the computation becomes evident, hence it may be easier to pinpoint the quantum resources that are responsible for the putative “speed-up”.

3.4 Topological-Quantum-Field-Theory (TQFT) Algorithms

Another exotic model for quantum computing which has been attracting a lot of attention lately, especially from Microsoft Inc. (Freedman 1998), is the topological quantum field theory model. In contrast to the straightforward and standard circuit model, this model resides in the most abstract reaches of theoretical physics. The exotic physical systems TQFT describes are topological states of matter. That the formalism of TQFT can be applied to computational problems was shown by Witten (1989), and the idea was later developed by others. Here too the model was proved to be efficiently simulated on a standard quantum computer (Freedman, Kitaev, Wang 2000; Aharonov et al. 2005), but its merit lies in its high tolerance to the errors that would accompany any possible realization of a large scale quantum computer (see below). Topology is especially helpful here because many global topological properties are, by definition, invariant under deformation, and given that most errors are local, information encoded in topological properties is robust against them.

3.5 Realizations

The quantum computer might be the theoretician’s dream, but as far as experimentalists are concerned, its realization is a nightmare. The problem is that while some prototypes of the simplest elements needed to build a quantum computer have already been implemented in the laboratory, it is still an open question how to combine these elements into scalable systems.
Shor’s algorithm may break the RSA code, but it will remain an anecdote if the largest number that it can factor is 15. In the circuit-based model the problem is to achieve a scalable quantum system that will simultaneously allow one to (1) robustly represent quantum information, (2) perform a universal family of unitary transformations, (3) prepare a fiducial initial state, and (4) measure the output result. Alternative paradigms may trade some of these requirements for others, but the gist will remain the same, i.e., one would have to achieve control of one’s quantum system in such a way that the system will remain “quantum” albeit macroscopic, or at least mesoscopic, in its dimensions.

In order to meet these requirements, several ingenious solutions have been devised, including quantum error correction codes (Shor 1995) and fault tolerant computation (Shor and DiVincenzo 1996, Aharonov and Ben-Or 1997) that dramatically reduce the spread of errors during a ‘noisy’ quantum computation. The problem with these active error correction schemes is that they were devised for a rather unrealistic noise model which treats the computer as quantum and the environment as classical (Alicki, Lidar & Zanardi 2006). Once a more realistic noise model is allowed, the feasibility of large scale, fault tolerant and computationally superior quantum computers is less clear (Hagar 2009). Another scheme for reducing errors in the implementation of quantum algorithms on large scale quantum computers is to encode the information in noiseless subsystems, or decoherence-free subspaces (Lidar, Chuang & Whaley 1998). This strategy seems more promising from a physical point of view, yet here too the question of how those noiseless subspaces scale with the size of the computer remains open. If one hopes to solve intractable problems efficiently with a scalable quantum computer, then the construction of the theoretical operator that measures a quantum state which encodes a solution to an NP-hard problem should not itself require exponential time, or the solution of yet another NP-hard problem.

Finally, as the implementation of Shor’s algorithm on a large scale quantum computer still seems beyond our reach, quantum information scientists have turned to the original goal of using quantum computers to simulate quantum systems. While Feynman’s conjecture is still unproven, complexity theorists attempt to narrow the gap between what they believe is true about quantum mechanics, namely, that it is exponentially hard to simulate on a classical computer, and what experimentalists can currently demonstrate (e.g., Aaronson & Arkhipov 2010).

4. Philosophical Implications

4.1 What is Quantum in Quantum Computing?

Notwithstanding the excitement around the discovery of Shor’s algorithm, and apart from the almost insurmountable problem of practically realizing and implementing a large scale quantum computer, a crucial theoretical question remains open, namely, what physical resources are responsible for the putative power of quantum computing? Put another way, what are the essential features of quantum mechanics that allow one to solve problems or simulate certain systems far more efficiently than on a classical computer? Remarkably, the relevance of features commonly thought essential to the superiority of quantum computers, e.g., entanglement and interference (Jozsa 1997), has recently been questioned (Linden and Popescu 1999, Biham 2004).
Moreover, even if these features do play an essential role in the putative quantum “speed-up”, one must still answer the question of how they do so (Fortnow 2003, Cuffaro forthcoming). Theoretical as it may seem, the question “what is quantum in quantum computing?” has an enormous practical consequence. One of the embarrassments of quantum computing is the fact that, so far, only one algorithm has been discovered, namely Shor’s, for which a quantum computer is significantly faster than any known classical one. It is almost certain that one of the reasons for this scarcity of quantum algorithms is related to the lack of our understanding of what makes a quantum computer quantum (see also Preskill 1998 and Shor 2004).

As an ultimate answer to this question one would like to have something similar to Bell’s (1964) famous theorem, i.e., a succinct, crisp statement of the fundamental difference between quantum and classical systems, encapsulated in the non-commutative character of observables. Quantum computers, unfortunately, do not seem to allow such a simple characterization. Observables are not as important here as in Bell’s case (in the quantum circuit model there are only two: the preparation of the initial state and the observation of the final state, in the same basis and of the same variable, at the end of the computation), since any measurement commutes with itself. The non-commutativity in quantum computing lies much deeper, and it is still unclear how to cash it into useful currency. Quantum computing skeptics (Levin 2003) happily capitalize on this puzzle: if no one knows why quantum computers are superior to classical ones, how can we be sure that they are, indeed, superior?

4.1.1 The Debate over Parallelism and Many Worlds

One well-known answer to the question, popularized by Deutsch (1997), takes its motivation from the circumstance that in some quantum circuit model algorithms, there are steps in which it appears as though functions are evaluated for each of their possible input values simultaneously. Something like the following transformation, for instance, is typical (note that normalization factors have been omitted):
\[\tag{1} \sum_{x} \lvert x\rangle \lvert 0\rangle \rightarrow \sum_{x} \lvert x\rangle \lvert f(x)\rangle. \]
The idea that we should take this at face value—that quantum computers actually do compute a function for many different input values simultaneously—is what Duwell (2007) calls the Quantum Parallelism Thesis (QPT). For Deutsch, who accepts it as true, the only reasonable explanation for the QPT is that the many worlds interpretation (MWI) of quantum mechanics is also true. For Deutsch, a quantum computer in superposition, like any other quantum system, exists in some sense in many classical universes simultaneously. These provide the physical arena within which the computer effects its parallel computations. This conclusion is also endorsed by Hewitt-Horsman (2009) and by Wallace (2012). Wallace notes, however, that the QPT—and hence the explanatory need for many worlds—may not be true of all or even most quantum algorithms, although Wallace takes it to be true of Shor’s.

For Steane (2003), in contrast, quantum computers are not well described in terms of many worlds or even quantum parallelism. Among other things, Steane argues that the motivation for the QPT is at least partly due to misleading aspects of the standard quantum formalism, for some classical systems can be similarly described so as to suggest parallelism.
Further, the Gottesman-Knill theorem (Nielsen and Chuang 2000) shows that many algorithms that suggest parallelism when written in standard notation can be re-expressed in an alternative notation so as to lend themselves straightforwardly to an efficient classical simulation. Additionally, comparing the information actually produced by quantum and classical algorithms (state collapse entails that only one evaluation instance in (1) is ever accessible, while a classical computer must actually produce every instance) suggests that quantum algorithms perform not more but fewer, cleverer, computations than classical algorithms (see also 4.1.2 below).

Another critic is Duwell (2007), who (contra Steane) accepts the QPT, but nevertheless denies that it uniquely supports the MWI. Considering the phase relations between the terms in a superposition such as (1) is crucially important when evaluating a quantum algorithm’s computational efficiency. Phase relations, however, are global properties of a state. Thus a quantum computation, Duwell argues, does not consist solely of local parallel computations. But in this case, the QPT does not uniquely support the MWI over other explanations.

Defending the MWI, Hewitt-Horsman (2009) argues (contra Steane) that to state that quantum computers do not actually generate each of the evaluation instances represented in (1) is false according to the view: on the MWI such information could be extracted in principle, given sufficiently advanced technology. Further, Hewitt-Horsman emphasizes that the MWI is not motivated simply by a suggestive mathematical representation. Unlike in Steane’s ‘classical parallelism’ examples, worlds on the MWI are defined by their explanatory usefulness, manifested in particular by their stability and independence over the time scales relevant to the computation. Wallace (2012) argues similarly.

Cuffaro (2012) and Aaronson (2013) point out that the Many Worlds Explanation of Quantum Computing (MWQC) and the MWI are not actually identical. The latter employs decoherence as a criterion for distinguishing macroscopic worlds from one another. Quantum circuit model algorithms, however, utilize coherent superpositions. To distinguish computational worlds, therefore, one must weaken the decoherence criterion, but Cuffaro argues that this move is ad hoc. Further, Cuffaro argues that the MWQC is for all practical purposes incompatible with measurement-based computation, for even granting a weakened world identification criterion, there is no natural way in this model to identify worlds that are stable and independent in the way required.

4.1.2 The Elusive Nature of Speed-Up

Even if we could rule out the MWQC, the problem of finding the physical resource(s) responsible for quantum “speed-up” would remain a difficult one. Consider the solution of a decision problem, say satisfiability, with a quantum algorithm based on the circuit model. What we are given here as input is a proposition in the propositional calculus and we have to decide whether it has a satisfying truth assignment.
As Pitowsky (2002) shows, the quantum algorithm appears to solve this problem by testing all \(2^{n}\) assignments “at once”, as suggested by (1), yet this quantum ‘miracle’ helps us very little, since, as previously mentioned, any measurement performed on the output state collapses it, and if there is one possible truth assignment that solves this decision problem, the probability of retrieving it is \(2^{-n}\), just as in the case of a classical probabilistic Turing machine which guesses the solution and then checks it. Pitowsky’s conclusion (echoed, as we saw, by Steane (2003) and Duwell (2007)) is that in order to enhance computation with quantum mechanics we must construct ‘clever’ superpositions that increase the probability of successfully retrieving the result far more than that of a pure guess. Shor’s algorithm and the class of algorithms that evaluate a global property of a function (known as the hidden subgroup class of algorithms) constitute (so far) the only examples of both the construction of such a ‘clever’ superposition and the retrieval of the solution in polynomial time. The quantum adiabatic algorithm may give us similar results, contingent upon the existence of an energy gap that decreases polynomially with the input.

This question also raises important issues about how to measure the complexity of a given quantum algorithm. The answer differs, of course, according to the particular model at hand. In the adiabatic model, for example, one needs only to estimate the behavior of the energy gap and its relation to the input size (encoded in the number of degrees of freedom of the Hamiltonian of the system). In the measurement-based model, one counts the number of measurements needed to reveal the solution that is hidden in the input cluster state (since the preparation of the cluster state is a polynomial process, it does not add to the complexity of the computation). But in the circuit model things are not as straightforward. After all, the whole of a quantum-circuit-based computation can be simply represented as a single unitary transformation from the input state to the output state.

This feature of the quantum circuit model supports the conjecture that the power of quantum computers, if any, lies not in quantum dynamics (i.e., in the Schrödinger equation), but rather in the quantum state, or the wave function. Another argument in favor of this conjecture is that the Hilbert subspace “visited” during a quantum computational process is, at any moment, a linear space spanned by all of the vectors in the total Hilbert space which have been created by the computational process up to that moment. This subspace is spanned by a polynomial number of vectors, and is thus at most a polynomially-dimensional subspace of the total Hilbert space. A classical simulation of a quantum evolution confined to a Hilbert space with a polynomial number of dimensions (that is, a Hilbert space spanned by a number of basis vectors which is polynomial in the number of qubits involved in the computation), however, can be carried out in a polynomial number of classical computations. Were quantum dynamics the sole ingredient responsible for the efficiency of quantum computing, the latter could therefore be mimicked in a polynomial number of steps with a classical computer (see, e.g., Vidal 2003). This is not to say that quantum computation is no more powerful than classical computation.
The key point, of course, is that one does not end a quantum computation with an arbitrary superposition, but aims for a very special, ‘clever’ state—to use Pitowsky’s term. Quantum computations may not always be mimicked with a classical computer, because the characterization of the computational subspace of certain quantum states is difficult. It seems that these special, ‘clever’ quantum states cannot be classically represented as vectors derivable via a quantum computation in an optimal basis, or at least that one cannot do so in such a way as would allow one to calculate the outcome of the final measurement made on these states. Consequently, in the quantum circuit model one should count the number of computational steps in the computation not by counting the number of transformations of the state, but by counting the number of one- or two-qubit local transformations that are required to create the ‘clever’ superposition that ensures the desired “speed-up”. (Note that Shor’s algorithm, for example, involves three major steps in this context: first, one creates the ‘clever’ entangled state with a set of unitary transformations; the result of the computation—a global property of a function—is now ‘hidden’ in this state. Second, in order to retrieve this result, one projects it onto a subspace of the Hilbert space. Finally, one performs another set of unitary transformations in order to make the result measurable in the original computational basis. All these steps count as computational steps as far as the efficiency of the algorithm is concerned. See also Bub 2006a.) The trick is to perform these local one- or two-qubit transformations in polynomial time, and it is likely that it is here that the physical power of quantum computing is to be found.

4.2 Experimental Metaphysics?

The quantum information revolution has prompted several physicists and philosophers to claim that new insights can be gained from the rising new science into conceptual problems in the foundations of quantum mechanics (Fuchs 2002, Bub 2005). Yet while one of the most famous foundational problems in quantum mechanics, namely the quantum measurement problem, remains unsolved even within quantum information theory (see Hagar 2003 and Hagar and Hemmo 2006 for a critique of the quantum information theoretic approach to the foundations of quantum mechanics and the role of the quantum measurement problem in this context), some quantum information theorists dismiss it as a philosophical quibble (Fuchs 2002). Indeed, in quantum information theory the concept of “measurement” is taken as a primitive, a “black box” which remains unanalyzed. The measurement problem itself, furthermore, is regarded as a misunderstanding of quantum theory. But recent advances in the realization of a large scale quantum computer may eventually prove quantum information theorists wrong: rather than supporting the dismissal of the quantum measurement problem, these advances may surprisingly lead to its empirical solution.

The speculative idea is the following. As it turns out, collapse theories (one family of alternatives to quantum theory which aim to solve the measurement problem) modify Schrödinger’s equation and give predictions different from those of quantum theory in certain specific circumstances. These circumstances may be realized, moreover, if decoherence effects can be suppressed (Bassi et al. 2005).
Now, one of the most difficult obstacles awaiting the construction of a large scale quantum computer is achieving robustness against decoherence effects (Unruh 1995). It thus appears that the technological capabilities required for the realization of a large scale quantum computer are exactly those upon which the distinction between “true” and “false” collapse (Pearle 1998), i.e., between collapse theories and environmentally induced decoherence, is contingent. Consequently, while quantum computing may elucidate the essential distinction between quantum and classical physics, its physical realization would also shed light on one of the long standing conceptual problems in the foundations of the theory, and would serve as yet another example of experimental metaphysics (the term was coined by Abner Shimony to designate the chain of events that led from the EPR argument via Bell’s theorem to Aspect’s experiments).

4.3 Are There Computational Kinds?

Another philosophical implication of the realization of a large scale quantum computer regards the long-standing debate in the philosophy of mind on the autonomy of computational theories of the mind (Fodor 1974). In the shift from strong to weak artificial intelligence, the advocates of this view tried to impose constraints on computer programs before they could qualify as theories of cognitive science (Pylyshyn 1984). These constraints include, for example, the nature of physical realizations of symbols and the relations between abstract symbolic computations and the physical causal processes that execute them. The search for the computational feature of these theories, i.e., for what makes them computational theories of the mind, involved isolating some features of the computer as such. In other words, the advocates of weak AI were looking for computational properties, or kinds, that would be machine independent, at least in the sense that they would not be associated with the physical constitution of the computer, nor with the specific machine model that was being used. These features were thought to be instrumental in debates within cognitive science, e.g., the debate between functionalism and connectionism (Fodor and Pylyshyn 1988).

Note, however, that once the physical Church-Turing thesis is violated, some computational notions cease to be autonomous. In other words, given that quantum computers may be able to efficiently solve classically intractable problems, hence re-describe the abstract space of computational complexity, computational concepts and even computational kinds such as ‘an efficient algorithm’ or ‘the class NP’ will become machine-dependent, and recourse to ‘hardware’ will become inevitable in any analysis thereof. Advances in quantum computing may thus militate against the functionalist view about the unphysical character of the types and properties that are used in computer science. In fact, these types and categories may become physical as a result of this natural development in physics (e.g., quantum computing, chaos theory). Consequently, efficient quantum algorithms may also serve as counterexamples to a priori arguments against reductionism (Pitowsky 1996).

Bibliography

• Aaronson, S., 2013, ‘Why Philosophers Should Care about Computational Complexity’, in B. Jack Copeland, Carl J. Posy, and Oron Shagrir (eds.), Computability: Turing, Gödel, Church, and Beyond, Cambridge, MA: MIT Press, pp. 261–327.
• Adleman, L.M., 1994, ‘Molecular computation of solutions to combinatorial problems’, Science, 266: 1021–1024.
• Aharonov, D., 1998, ‘Quantum computing’, Annual Review of Computational Physics, VI, Singapore: World Scientific. [Preprint available online].
• Aharonov, D. and Ben-Or, M., 1997, ‘Fault tolerant computation with constant error’, Proc. ACM Symposium on the Theory of Computing (STOC), 176–188.
• Albert, D., 1983, ‘On quantum mechanical automata’, Phys. Lett., A 98: 249.
• Alicki, R., Lidar, D., & Zanardi, P., 2006, ‘Internal consistency of fault tolerant quantum error correction’, Phys. Rev., A 73: 052311.
• Barenco, A. et al., 1995, ‘Elementary gates for quantum computation’, Phys. Rev., A 52: 3457–3467.
• Bell, J.S., 1964, ‘On the Einstein Podolsky Rosen paradox’, Physics, 1: 195–200.
• Bennett, C. et al., 1997, ‘Strengths and weaknesses of quantum computing’, SIAM Journal on Computing, 26(5): 1510–1523.
• Biham, E. et al., 2004, ‘Quantum computing without entanglement’, Theoretical Computer Science, 320: 15–33.
• Bub, J., 2005, ‘Quantum mechanics is about quantum information’, Foundations of Physics, 34: 541–560.
• Cirac, J.I. and Zoller, P., 1995, ‘Quantum computations with cold trapped ions’, Phys. Rev. Lett., 74: 4091–4094.
• Copeland, J., 2002, ‘Hypercomputation’, Minds and Machines, 12: 461–502.
• Cook, S.A., 1971, ‘The complexity of theorem proving procedures’, Proc. 3rd ACM Symposium on Theory of Computing, 151–158.
• Cuffaro, M.E., 2012, ‘Many Worlds, the Cluster-state Quantum Computer, and the Problem of the Preferred Basis’, Studies in History and Philosophy of Modern Physics, 43: 35–42.
• Cuffaro, M.E., forthcoming, ‘The Significance of the Gottesman-Knill Theorem’, The British Journal for the Philosophy of Science.
• Davis, M., 1958, The Undecidable, New York: Dover.
• Davis, M., 2003, ‘The myth of hypercomputation’, in C. Teuscher (ed.), Alan Turing: Life and Legacy of a Great Thinker, New York: Springer, pp. 195–212.
• Deutsch, D., 1985, ‘Quantum theory, the Church-Turing principle, and the universal quantum computer’, Proc. Roy. Soc. Lond., A 400: 97–117.
• Deutsch, D., 1997, The Fabric of Reality, New York: Penguin.
• Deutsch, D. and Jozsa, R., 1992, ‘Rapid solution of problems by quantum computer’, Proc. Roy. Soc. Lond., A 439: 553–558.
• Dewdney, A.K., 1984, ‘On the spaghetti computer and other analog gadgets for problem solving’, Scientific American, 250(6): 19–26.
• Duwell, A., 2007, ‘The Many-Worlds Interpretation and Quantum Computation’, Philosophy of Science, 74: 1007–1018.
• DiVincenzo, D., 1995, ‘Two-bit gates are universal for quantum computation’, Phys. Rev., A 51: 1015–1022.
• Ekert, A. and Jozsa, R., 1996, ‘Quantum computation and Shor’s factoring algorithm’, Rev. Mod. Phys., 68(3): 733–753.
• Farhi, E. et al., 2001, ‘A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem’, Science, 292(5516): 472–475.
• Feynman, R., 1982, ‘Simulating physics with computers’, International Journal of Theoretical Physics, 21: 467–488.
• Fodor, J., 1974, ‘Special Sciences’, Synthese, 28: 97–115.
• Fodor, J. and Pylyshyn, Z., 1988, ‘Connectionism and cognitive architecture: a critical analysis’, Cognition, 28: 3–71.
• Fortnow, L., 2003, ‘One complexity theorist’s view of quantum computing’, Theoretical Computer Science, 292: 597–610.
• Freedman, M., 1998, ‘P/NP and the quantum field computer’, Proc. Natl. Acad. Sci., 95: 98–101.
• Gandy, R., 1980, ‘Church’s thesis and principles for mechanisms’, in J. Barwise et al. (eds.), The Kleene Symposium, Amsterdam: North-Holland, pp. 123–148.
• Garey, M.R. and Johnson, D.S., 1979, Computers and Intractability: A Guide to the Theory of NP-Completeness, New York: W.H. Freeman.
• Giblin, P., 1993, Primes and Programming, Cambridge: Cambridge University Press.
• Gottesman, D. and Chuang, I., 1999, ‘Demonstrating the viability of universal quantum computation using teleportation and single-qubit operations’, Nature, 402: 390–393.
• Grover, L., 1996, ‘A fast quantum mechanical algorithm for database search’, Proc. 28th ACM Symposium on Theory of Computing, 212–219.
• Hagar, A., 2003, ‘A philosopher looks at quantum information theory’, Philosophy of Science, 70: 752–775.
• Hagar, A., 2009, ‘Active Fault Tolerant Quantum Error Correction: The Curse of the Open System’, Philosophy of Science, 76(4): 506–535.
• Hagar, A., forthcoming, ‘Ed Fredkin and the Physics of Information: an Inside Story of an Outsider Scientist’, Information and Culture.
• Hagar, A. and Hemmo, M., 2006, ‘Explaining the unobserved: Why quantum mechanics ain’t only about information’, Foundations of Physics, 36(9): 1295–1324. [Preprint available online].
• Hagar, A. and Korolev, A., 2007, ‘Quantum hypercomputation: Hype or Computation?’, Philosophy of Science, 74(3): 347–363.
• Haroche, S. and Raimond, J.M., 1996, ‘Quantum computing: Dream or nightmare?’, Physics Today, 8: 51–52.
• Hewitt-Horsman, C., 2009, ‘An Introduction to Many Worlds in Quantum Computation’, Foundations of Physics, 39: 869–902.
• Hogarth, M., 1994, ‘Non-Turing computers and non-Turing computability’, PSA, 94(1): 126–138.
• Holevo, A.S., 1973, ‘Bounds for the quantity of information transmitted by a quantum communication channel’, Problemy Peredachi Informatsii, 9(3): 3–11; English translation in Problems of Information Transmission, 9: 177–183, 1973.
• Ingarden, R.S., 1976, ‘Quantum information theory’, Rep. Math. Phys., 10: 43–72.
• Jozsa, R., 1997, ‘Entanglement and quantum computation’, Ch. 27 in S. Huggett et al. (eds.), The Geometric Universe, Oxford: Oxford University Press. [Preprint available online].
• Kieu, T.D., 2002, ‘Quantum Hypercomputability’, Minds and Machines, 12: 541–561.
• Kieu, T.D., 2004, ‘A reformulation of Hilbert’s Tenth Problem through quantum mechanics’, Proc. Royal Soc., A 460: 1535–1545.
• Knill, E. et al., 2000, ‘An algorithmic benchmark for quantum information processing’, Nature, 404: 368–370.
• Levin, L., 2003, ‘Polynomial time and extravagant models’, Problems of Information Transmission, 39(1): 92–103.
• Lidar, D., Chuang, I., & Whaley, B., 1998, ‘Decoherence free subspaces for quantum computation’, Phys. Rev. Lett., 81: 2594–2597.
• Linden, N. and Popescu, S., 1999, ‘Good dynamics versus bad kinematics: Is entanglement needed for quantum computation?’, Phys. Rev. Lett., 87(4): 047901. [Preprint available online].
• Lipton, R., 1995, ‘Using DNA to solve NP-complete problems’, Science, 268: 542–545.
• Manin, Y., 1980, Computable and Uncomputable, Moscow: Sovetskoye Radio.
• Messiah, A., 1961, Quantum Mechanics (Volume II), New York: Interscience Publishers.
• Moore, C., 1990, ‘Unpredictability and undecidability in dynamical systems’, Phys. Rev. Lett., 64: 2354–2357.
• Myers, J., 1997, ‘Can a universal quantum computer be fully quantum?’, Phys. Rev. Lett., 78(9): 1823–1824.
• Nielsen, M., 2003, ‘Quantum computation by measurement and quantum memory’, Phys. Lett., A 308: 96–100.
• Nielsen, M.A. and Chuang, I.L., 2000, Quantum Computation and Quantum Information, Cambridge: Cambridge University Press.
• Pitowsky, I., 1990, ‘The physical Church thesis and physical computational complexity’, Iyyun, 39: 81–99.
• Pitowsky, I., 1996, ‘Laplace’s demon consults an oracle: The computational complexity of predictions’, Studies in History and Philosophy of Modern Physics, 27: 161–180.
• Pitowsky, I., 2002, ‘Quantum speed-up of computations’, Philosophy of Science, 69: S168–S177.
• Pitowsky, I. and Shagrir, O., 2003, ‘Physical hypercomputation and the Church-Turing thesis’, Minds and Machines, 13: 87–101.
• Poplavskii, R.P., 1975, ‘Thermodynamical models of information processing’ (in Russian), Uspekhi Fizicheskikh Nauk, 115(3): 465–501.
• Pour-El, M. and Richards, I., 1981, ‘The wave equation with computable initial data such that its unique solution is not computable’, Advances in Mathematics, 39: 215–239.
• Preskill, J., 1998, ‘Quantum computing: Pro and Con’, Proc. Roy. Soc. Lond., A 454: 469–486.
• Pylyshyn, Z., 1984, Computation and Cognition: Toward a Foundation for Cognitive Science, Cambridge, MA: MIT Press.
• Rabin, M., 1976, ‘Probabilistic algorithms’, in J. Traub (ed.), Algorithms and Complexity: New Directions and Recent Results, New York: Academic Press, pp. 21–39.
• Reichardt, B.W., 2004, ‘The quantum adiabatic optimization algorithm and local minima’, Proceedings of the 36th Symposium on Theory of Computing (STOC), 502–510.
• Rivest, R. et al., 1978, ‘A method for obtaining digital signatures and public-key cryptosystems’, Communications of the ACM, 21(2): 120–126.
• Schrader, D. et al., 2004, ‘Neutral atom quantum register’, Phys. Rev. Lett., 93: 150501.
• Sieg, W. and Byrnes, J., 1999, ‘An abstract model for parallel computations’, The Monist, 82: 150–164.
• Simon, D.R., 1994, ‘On the power of quantum computation’, Proceedings of the 35th Annual IEEE Symposium on Foundations of Computer Science, pp. 116–123; reprinted in SIAM Journal on Computing, 26(5) (1997): 1474–1483.
• Shor, P., 1994, ‘Algorithms for quantum computation: Discrete logarithms and factoring’, Proceedings of the 35th Annual IEEE Symposium on Foundations of Computer Science, pp. 124–134.
• Shor, P., 1995, ‘Scheme for reducing decoherence in quantum computer memory’, Phys. Rev., A 52: 2493–2496.
• Shor, P., 1996, ‘Fault-tolerant quantum computation’, Proceedings of the 37th Annual IEEE Symposium on Foundations of Computer Science, pp. 56–65.
• Shor, P., 2004, ‘Progress in quantum computing’, Quantum Information Processing, 3: 5–13. [Preprint available online].
• Shor, P. and DiVincenzo, D., 1996, ‘Fault tolerant error correction with efficient quantum codes’, Phys. Rev. Lett., 77: 3260–3263.
• Steane, A.M., 1996, ‘Multiple particle interference and quantum error correction’, Proc. Roy. Soc. Lond., A 452: 2551–2577.
• Steane, A.M., 2003, ‘A Quantum Computer Only Needs One Universe’, Studies in History and Philosophy of Modern Physics, 34: 469–478.
• Turing, A., 1936, ‘On computable numbers, with an application to the Entscheidungsproblem’, reprinted in M. Davis (ed.), The Undecidable, New York: Raven Press, 1965, pp. 116–154.
• Unruh, W.G., 1995, ‘Maintaining coherence in quantum computers’, Phys. Rev., A 51: 992–997.
• Vergis, A. et al., 1986, ‘The complexity of analog computation’, Mathematics and Computers in Simulation, 28: 91–113.
• Vidal, G., 2003, ‘Efficient classical simulation of slightly entangled quantum computations’, Phys. Rev. Lett., 91: 147902.
• Wallace, D., 2012, The Emergent Multiverse, Oxford: Oxford University Press.
• Wiesner, S., 1983, ‘Conjugate coding’, SIGACT News, 18: 78–88.
• Witten, E., 1989, ‘Quantum field theory and the Jones polynomial’, Comm. Math. Phys., 121: 351–399.
• Wolfram, S., 1985, ‘Undecidability and intractability in theoretical physics’, Phys. Rev. Lett., 54: 735.

Copyright © 2015 by Amit Hagar <hagara@indiana.edu> and Michael Cuffaro <mike@michaelcuffaro.com>
The Many-Worlds Interpretation of Quantum Mechanics

1. Introduction

The fundamental idea of the MWI, going back to Everett 1957, is that there are myriads of worlds in the Universe in addition to the world we are aware of. In particular, every time a quantum experiment with different outcomes with non-zero probability is performed, all outcomes are obtained, each in a different world, even if we are aware only of the world with the outcome we have seen. In fact, quantum experiments take place everywhere and very often, not just in physics laboratories: even the irregular blinking of an old fluorescent bulb is a quantum experiment. There are numerous variations and reinterpretations of the original Everett proposal, most of which are briefly discussed in the entry on Everett's relative state formulation of quantum mechanics. Here, a particular approach to the MWI (which differs from the popular "actual splitting worlds" approach in DeWitt 1970) will be presented in detail, followed by a discussion relevant for many variants of the MWI.

The MWI consists of two parts:
(i) A mathematical theory which yields the evolution in time of the quantum state of the (single) Universe.
(ii) A prescription which sets up a correspondence between the quantum state of the Universe and our experiences.

Part (i) is essentially summarized by the Schrödinger equation or its relativistic generalization. It is a rigorous mathematical theory and is not problematic philosophically. Part (ii) involves "our experiences" which do not have a rigorous definition. An additional difficulty in setting up (ii) follows from the fact that human languages were developed at a time when people did not suspect the existence of parallel worlds. This, however, is only a semantic problem.[1]

2. Definitions

2.1 What is "A World"?

A world is the totality of (macroscopic) objects: stars, cities, people, grains of sand, etc. in a definite classically described state. This definition is based on the common attitude to the concept of world shared by human beings. Another concept (considered in some approaches as the basic one, e.g., in Saunders 1995) is a relative, or perspectival, world defined for every physical system and every one of its states (provided it is a state of non-zero probability): I will call it a centered world. This concept is useful when a world is centered on a perceptual state of a sentient being. In this world, all objects which the sentient being perceives have definite states, but objects that are not under her observation might be in a superposition of different (classical) states. The advantage of a centered world is that it does not split due to a quantum phenomenon in a distant galaxy, while the advantage of our definition is that we can consider a world without specifying a center, and in particular our usual language is just as useful for describing worlds at times when there were no sentient beings.

The concept of "world" in the MWI belongs to part (ii) of the theory, i.e., it is not a rigorously defined mathematical entity, but a term defined by us (sentient beings) in describing our experience. When we refer to the "definite classically described state" of, say, a cat, it means that the position and the state (alive, dead, smiling, etc.)
of the cat is maximally specified according to our ability to distinguish between the alternatives, and that this specification corresponds to a classical picture, e.g., no superpositions of dead and alive cats are allowed in a single world.[2]

The concept of a world in the MWI is based on the layman's conception of a world; however, several features are different. Obviously, the definition of the world as everything that exists does not hold in the MWI. "Everything that exists" is the Universe, and there is only one Universe. The Universe incorporates many worlds similar to the one the layman is familiar with. Nowadays, the layman knows that objects are made of elementary microscopic particles, and he believes that, consequently, a more precise definition of the world is the totality of all these particles. In the MWI this naive step is incorrect. Microscopic particles might be in a superposition, while objects within a world (as defined in the MWI) cannot be in a superposition. The connection between macroscopic objects defined according to our experience, and microscopic objects defined in a physical theory that aims to explain our experience, is more subtle, and will be discussed further below. The definition of a world in the MWI involves only concepts related to our experience. A layman believes that our present world has a unique past and future. According to the MWI, a world defined at some moment of time corresponds to a unique world at a time in the past, but to a multitude of worlds at a time in the future.

2.2 Who am "I"?

"I" am an object, such as Earth, cat, etc. "I" is defined at a particular time by a complete (classical) description of the state of my body and of my brain. "I" and "Lev" do not name the same things (even though my name is Lev). At the present moment there are many different "Lev"s in different worlds (not more than one in each world), but it is meaningless to say that now there is another "I". I have a particular, well defined past: I correspond to a particular "Lev" in 2002, but I do not have a well defined future: I correspond to a multitude of "Lev"s in 2010. In the framework of the MWI it is meaningless to ask: Which Lev in 2010 will I be? I will correspond to them all. Every time I perform a quantum experiment (with several possible results) it only seems to me that I obtain a single definite result. Indeed, the Lev who obtains this particular result thinks this way. However, this Lev cannot be identified as the only Lev after the experiment. Lev before the experiment corresponds to all "Lev"s obtaining all possible results. Although this approach to the concept of personal identity seems somewhat unusual, it is plausible in the light of the critique of personal identity by Parfit 1986. Parfit considers some artificial situations in which a person splits into several copies, and argues that there is no good answer to the question: Which copy is me? He concludes that personal identity is not what matters when I divide.

3. Correspondence Between the Formalism and Our Experience

3.1 The Quantum State of an Object

The basis for the correspondence between the quantum state (the wave function) of the Universe and our experience is the description that physicists give in the framework of standard quantum theory for objects composed of elementary particles. Elementary particles of the same kind are identical.
Therefore, the essence of an object is the quantum state of its particles and not the particles themselves (see the elaborate discussion in the entry on identity and individuality in quantum theory): one quantum state of a set of elementary particles might be a cat and another state of the same particles might be a small table. Clearly, we cannot now write down an exact wave function of a cat. We know with a reasonable approximation the wave function of some elementary particles that constitute a nucleon. The wave function of the electrons and the nucleons that together make up an atom is known with even better precision. The wave functions of molecules (i.e. the wave functions of the ions and electrons out of which molecules are built) are well studied. A lot is known about biological cells, so physicists can write down a rough form of the quantum state of a cell. This is difficult because there are many molecules in a cell. Out of cells we construct various tissues and then the whole body of a cat or of a table. So, let us denote the quantum state constructed in this way \left|\Psi\right\rangle_{OBJECT}. In our construction \left|\Psi\right\rangle_{OBJECT} is the quantum state of an object in a definite state and position.[3] According to the definition of a world we have adopted, in each world the cat is in a definite state: either alive or dead. Schrödinger's experiment with the cat leads to a splitting of worlds even before the opening of the box. Only in the alternative approach is Schrödinger's cat, which is in a superposition of being alive and dead, a member of the (single) centered world of the observer before she opened the sealed box with the cat (the observer perceives directly the facts related to the preparation of the experiment and she deduces that the cat is in a superposition).

3.2 The Quantum State that Corresponds to a World

The wave function of all the particles in the Universe corresponding to any particular world will be a product of the states of the sets of particles corresponding to all the objects in the world, multiplied by the quantum state \left|\Phi\right\rangle of all the particles that do not constitute "objects". Within a world, "objects" have definite macroscopic states by fiat:[4]

\left|\Psi_{WORLD}\right\rangle = \left|\Psi\right\rangle_{OBJECT\,1} \left|\Psi\right\rangle_{OBJECT\,2} \cdots \left|\Psi\right\rangle_{OBJECT\,N} \left|\Phi\right\rangle \qquad (1)

The quantum states corresponding to centered worlds of sentient beings have exactly the same form. The only difference is that in the product there are only the states of the objects perceived directly, while most of the universe is, in general, entangled; it is described by \left|\Phi\right\rangle.

3.3 The Quantum State of the Universe

The quantum state of the Universe can be decomposed into a superposition of terms corresponding to different worlds:

\left|\Psi_{UNIVERSE}\right\rangle = \sum_i \alpha_i \left|\Psi_{WORLD\,i}\right\rangle \qquad (2)

Different worlds correspond to different classically described states of at least one object. Different classically described states correspond to orthogonal quantum states. Therefore, different worlds correspond to orthogonal states: all the states \left|\Psi_{WORLD\,i}\right\rangle are mutually orthogonal and, consequently, \sum_i |\alpha_i|^2 = 1.

3.4 FAPP

The construction of the quantum state of the Universe in terms of the quantum states of objects presented above is only approximate; it is good only for all practical purposes (FAPP). Indeed, the concept of an object itself has no rigorous definition: should a mouse that a cat just swallowed be considered as a part of the cat? The concept of a "definite position" is also only approximately defined: how far should a cat be displaced in order for it to be considered to be in a different position?
If the displacement is much smaller than the quantum uncertainty, it must be considered to be at the same place, because in this case the quantum state of the cat is almost the same and the displacement is undetectable in principle. But this is only an absolute bound, because our ability to distinguish various locations of the cat is far from this quantum limit. Further, the state of an object (e.g. alive or dead) is meaningful only if the object is considered for a period of time. In our construction, however, the quantum state of an object is defined at a particular time. In fact, we have to ensure that the quantum state will have the shape of the object not only at that time, but for some period of time. The splitting of the world during this period of time is another source of ambiguity, in particular because there is no precise definition of when the splitting occurs. The reason that I am only able to propose an approximate prescription for the correspondence between the quantum state of the Universe and our experience is essentially the same one that led Bell 1990 to claim that "ordinary quantum mechanics is just fine FAPP". The concepts we use ("object", "measurement", etc.) are not rigorously defined. Bell was, and many others still are, looking (so far in vain) for a "precise quantum mechanics": for them it is not enough for a physical theory to be just fine FAPP; quantum mechanics needs rigorous foundations. However, in the MWI just fine FAPP is enough. Indeed, the MWI has rigorous foundations for (i), the "physics part" of the theory; only part (ii), the correspondence with our experience, is approximate (just fine FAPP). But "just fine FAPP" means that the theory explains our experience for any possible experiment, and this is the goal of (ii). See Butterfield 2001 and Wallace 2001b for more arguments why a FAPP definition of a world ("branch" in their language) is enough.

3.5 The Measure of Existence

There are many worlds existing in parallel in the Universe. Although all worlds are of the same physical size (this might not be true if we take quantum gravity into account), and in every world sentient beings feel as "real" as in any other world, in some sense some worlds are larger than others. I describe this property as the measure of existence of a world.[5] The measure of existence of a world quantifies its ability to interfere with other worlds in a gedanken experiment, see Vaidman 1998 (p. 256), and is the basis for introducing probability in the MWI. The measure of existence makes precise what is meant by the probability measure discussed in Everett 1957 and pictorially described in Lockwood 1989 (p. 230). Given the decomposition (2), the measure of existence of the world i is \mu_i = |\alpha_i|^2. It can also be expressed as the expectation value of \mathrm{P}_i, the projection operator on the space of quantum states corresponding to the actual values of all physical variables describing the world i:

\mu_i \equiv \left\langle \Psi_{UNIVERSE} \right| \mathrm{P}_i \left| \Psi_{UNIVERSE} \right\rangle \qquad (3)

"I" also have a measure of existence. It is the sum of the measures of existence of all the different worlds in which I exist; equally, it can be defined as the measure of existence of my perception world. Note that I do not experience directly the measure of my existence. I feel the same weight, see the same brightness, etc. irrespective of how tiny my measure of existence might be.
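As a schematic numerical check of (2) and (3) (the three-world "universe" and its amplitudes below are chosen arbitrarily; this is a toy illustration, not part of the formal theory), the two expressions for the measure of existence can be verified to agree:

    import numpy as np

    # A toy "universe" decomposed into three orthogonal world states,
    # as in equation (2); the amplitudes are arbitrary but normalized.
    alphas = np.array([0.8, 0.36, 0.48])       # sum of |alpha_i|^2 = 1
    worlds = np.eye(3)                         # |Psi_WORLD i> as an orthonormal basis
    psi_universe = worlds @ alphas

    # Measure of existence, first directly as |alpha_i|^2 ...
    mu_direct = np.abs(alphas) ** 2

    # ... and then as the expectation value of the projector P_i, equation (3).
    mu_projection = [psi_universe @ np.outer(w, w) @ psi_universe
                     for w in worlds]

    print(mu_direct)       # [0.64, 0.1296, 0.2304]
    print(mu_projection)   # the same three numbers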
4. Probability in the MWI

There is a serious difficulty with the concept of probability in the context of the MWI. In a deterministic theory, such as the MWI, the only possible meaning for probability is an ignorance probability, but there is no relevant information that an observer who is going to perform a quantum experiment is ignorant about. The quantum state of the Universe at one time specifies the quantum state at all times. If I am going to perform a quantum experiment with two possible outcomes such that standard quantum mechanics predicts probability 1/3 for outcome A and 2/3 for outcome B, then, according to the MWI, both the world with outcome A and the world with outcome B will exist. It is senseless to ask: "What is the probability that I will get A instead of B?" because I will correspond to both "Lev"s: the one who observes A and the other one who observes B.[6]

To solve this difficulty, Albert and Loewer 1988 proposed the Many Minds interpretation (in which the different worlds are only in the minds of sentient beings). In addition to the quantum wave of the Universe, Albert and Loewer postulate that every sentient being has a continuum of minds. Whenever the quantum wave of the Universe develops into a superposition containing states of a sentient being corresponding to different perceptions, the minds of this sentient being evolve randomly and independently to mental states corresponding to these different states of perception (with probabilities equal to the quantum probabilities for these states). In particular, whenever a measurement is performed by an observer, the observer's minds develop mental states that correspond to perceptions of the different outcomes, i.e. corresponding to the worlds A or B in our example. Since there is a continuum of minds, there will always be an infinity of minds in any sentient being and the procedure can continue indefinitely. This resolves the difficulty: each "I" corresponds to one mind and it ends up in a state corresponding to a world with a particular outcome. However, this solution comes at the price of introducing additional structure into the theory, including a genuinely random process.

Vaidman 1998 (p. 254) resolves the problem by constructing an ignorance probability in the framework of the MWI. It seems senseless to ask: "What is the probability that Lev in the world A will observe A?" This probability is trivially equal to 1. The task is to define the probability in such a way that we can reconstruct the prediction of the standard approach: probability 1/3 for A. It is indeed senseless for you to ask what is the probability that Lev in the world A will observe A, but this might be a meaningful question for Lev in the world of the outcome A. Under normal circumstances, the world A is created (i.e. measuring devices and objects which interact with measuring devices become localized according to the outcome A) before Lev is aware of the result A. Then, it is sensible to ask this Lev about his probability of being in world A. There is a matter of fact about which outcome this Lev will see, but he is ignorant about this fact at the time of the question. In order to make this point vivid, Vaidman proposed an experiment in which the experimenter is given a sleeping pill before the experiment. Then, while asleep, he is moved to room A or to room B depending on the result of the experiment. When the experimenter has woken up (in one of the rooms), but before he has opened his eyes, he is asked "In which room are you?"
Certainly, there is a matter of fact about which room he is in (he can learn about it by opening his eyes), but he is ignorant about this fact at the time of the question. This construction provides the ignorance interpretation of probability, but the value of the probability has to be postulated (see Section 6.3 below for attempts to derive it):

Probability Postulate: The probability of an outcome of a quantum experiment is proportional to the total measure of existence of all worlds with that outcome.[7]

The question of the probability of obtaining A also makes sense for the Lev in world B before he becomes aware of the outcome. Both "Lev"s have the same information on the basis of which they should give their answer. According to the probability postulate they will give the same answer: 1/3 (the relative measure of existence of the world A). Since Lev before the measurement is associated with the two "Lev"s after the measurement, who have identical ignorance-probability concepts for the outcome of the experiment, I can define the probability of the outcome of the experiment to be performed as the ignorance probability of the successors of Lev for being in a world with a particular outcome.

The "sleeping pill" argument does not reduce the probability of an outcome of a quantum experiment to a familiar concept of probability in the classical context. The quantum situation is genuinely different. Since all outcomes of a quantum experiment are actualized, there is no probability in the usual sense. The argument explains the Behavior Principle (see below), according to which an experimenter should behave as if there were certain probabilities for different outcomes. The justification is particularly clear in the approach to probability as the value of a rational bet on a particular result. The results of the betting of the experimenter are relevant for his successors, emerging after performing the experiment in different worlds. Since the experimenter is related to all of his successors, and they all have identical rational strategies for betting, this should also be the strategy of the experimenter before the experiment.

Several authors justify the probability postulate without relying on the sleeping pill argument. Tappenden 2000 (p. 111) adopts a different semantics according to which "I" live in all branches and have "distinct experiences" in different "superslices", and uses "weight of a superslice" instead of measure of existence. He argues that it is intelligible to associate probabilities according to the probability postulate: "Faced with an array of weighted superslices as part of myself ... what choice do I have but to assign an array of attitudes, degrees of belief, towards the experiences associated with those superslices?". Saunders 1998, exploiting a variety of ideas in decoherence theory, the relational theory of tense, and theories of identity over time, also argues for the "identification of probability with the Hilbert Space norm" (which equals the measure of existence). Page 2002 promotes an approach which he has recently named Mindless Sensationalism. The basic concept in this approach is a conscious experience. He assigns weights to different experiences depending on the quantum state of the universe, as the expectation values of presently-unknown positive operators corresponding to the experiences (similar to the measures of existence of the corresponding worlds (3)). Page writes "... experiences with greater weights exist in some sense more ..."
In all of these approaches, the postulate is justified by appeal to an analogy with treatments of time, e.g., the measure of existence of a world is analogous to the duration of a time interval. In a more ambitious work, Deutsch 1999 has claimed to derive the probability postulate from the quantum formalism and classical decision theory, but it is far from clear that he achieves this (see Barnum et al.).

5. Tests of the MWI

Despite the name "interpretation", the MWI is a variant of quantum theory that is different from others. Experimentally, the difference is relative to collapse theories; it seems that there is no experiment distinguishing the MWI from other no-collapse theories such as Bohmian mechanics or other variants of the MWI. The collapse leads to effects that are, in principle, observable; these effects do not exist if the MWI is the correct theory. To observe the collapse we would need a super technology which allows the "undoing" of a quantum experiment, including a reversal of the detection process by macroscopic devices. See Lockwood 1989 (p. 223), Vaidman 1998 (p. 257), and other proposals in Deutsch 1986. These proposals are all for gedanken experiments that cannot be performed with current or any foreseeable future technology. Indeed, in these experiments an interference of different worlds has to be observed. Worlds are different when at least one macroscopic object is in macroscopically distinguishable states. Thus, what is needed is an interference experiment with a macroscopic body. Today there are interference experiments with larger and larger objects (e.g., fullerene molecules C60), but these objects are still not large enough to be considered "macroscopic". Such experiments can only refine the constraints on the boundary where the collapse might take place. A decisive experiment should involve the interference of states which differ in a macroscopic number of degrees of freedom: an impossible task for today's technology.[8]

The collapse mechanism seems to be in contradiction with basic physical principles, such as relativistic covariance, but nevertheless some ingenious concrete proposals have been made (see Pearle 1986 and the entry on collapse theories). These proposals (and Weissman's 1999 non-linear MW idea) have additional observable effects, such as a tiny energy non-conservation, which have been tested in several experiments. The effects were not found, and some (but not all!) of these models have been ruled out.

An apparent candidate for such an experiment is a setup proposed in Englert et al. 1992 in which a Bohmian world is different from the worlds of the MWI (see also Aharonov and Vaidman 1996). In this example, the Bohmian trajectory of a particle in the past is contrary to the records of seemingly good measuring devices (such trajectories were named surrealistic). However, at present, there are no memory records that can determine unambiguously (without deduction from a particular theory) the particle trajectory in the past. Thus, this difference does not lead to an experimental way of distinguishing between the MWI and Bohmian mechanics. I believe that no other experiment can distinguish between the MWI and other no-collapse theories either, except for some perhaps exotic modifications, e.g., Bohmian mechanics with an initial particle position distribution deviating from the quantum distribution. There are other opinions about the possibility of testing the MWI. It has frequently been claimed, e.g.
by De Witt 1970, that the MWI is in principle indistinguishable from the ideal collapse theory. On the other hand, Plaga 1997 claims to have a realistic proposal for testing the MWI, and Page 2000 argues that certain cosmological observations might support the MWI.

6. Objections to the MWI

Some of the objections to the MWI follow from misinterpretations due to the multitude of various MWIs. The terminology of the MWI can be confusing: "world" is "universe" in Deutsch 1996, while "universe" is "multiverse", etc. There are two very different approaches with the same name "The Many-Minds Interpretation (MMI)". The Albert and Loewer 1988 MMI mentioned above should not be confused with Lockwood's 1996 MMI (which resembles the approach of Zeh 1981). The latter is much closer to the MWI as it is presented here; see Sec. 17 of Vaidman 1998. Further, the MWI in the Heisenberg representation (Deutsch 2001) differs significantly from the MWI presented in the Schrödinger representation (used here). The MWI presented here is very close to Everett's original proposal, but in the entry on Everett's relative state formulation of quantum mechanics, as well as in his book (Barrett 1999), Barrett uses the name "MWI" for the splitting worlds view publicized by De Witt 1970. This approach has been justly criticized: it has both some kind of collapse (an irreversible splitting of worlds in a preferred basis) and the multitude of worlds. Now I consider the main objections in detail.

6.1 Ockham's Razor

It seems that the majority of the opponents of the MWI reject it because, for them, introducing a very large number of worlds that we do not see is an extreme violation of Ockham's principle: "Entities are not to be multiplied beyond necessity". However, in judging physical theories one could reasonably argue that one should not multiply physical laws beyond necessity either (such a version of Ockham's Razor has been applied in the past), and in this respect the MWI is the most economical theory. Indeed, it has all the laws of the standard quantum theory, but without the collapse postulate, the most problematic of physical laws. The MWI is also more economical than Bohmian mechanics, which has in addition the ontology of the particle trajectories and the laws which give their evolution. Tipler 1986 (p. 208) has presented an effective analogy with the criticism of Copernican theory on the grounds of Ockham's razor. One might also consider a possible philosophical advantage of the plurality of worlds in the MWI, similar to that claimed by realists about possible worlds, such as Lewis 1986 (see the discussion of the analogy between the MWI and Lewis's theory by Skyrms 1976). However, the analogy is not complete: Lewis' theory considers all logically possible worlds, many more than all the worlds incorporated in the quantum state of the Universe.

6.2 The Problem of the Preferred Basis

A common criticism of the MWI stems from the fact that the formalism of quantum theory allows infinitely many ways to decompose the quantum state of the Universe into a superposition of orthogonal states. The question arises: "Why choose the particular decomposition (2) and not any other?" Since other decompositions might lead to a very different picture, the whole construction seems to lack predictive power. Indeed, the mathematical structure of the theory (i) does not yield a particular basis.
The basis for the decomposition into worlds follows from the common concept of a world, according to which it consists of objects in definite positions and states ("definite" on the scale of our ability to distinguish them). In the alternative approach, the basis of a centered world is defined directly by an observer. Therefore, given the nature of the observer and given her concepts for describing the world, the particular choice of the decomposition (2) follows (up to a precision which is good FAPP, as required). If we do not ask why we are what we are, and why the world we perceive is what it is, but only how to explain relations between the events we observe in our world, then the problem of the preferred basis does not arise: we and the concepts of our world define the preferred basis.

But a stronger response can be made to this criticism. Looking at the details of the physical world, the structure of the Hamiltonian, the value of the Planck constant, etc., one can argue why the sentient beings we know are of a particular type and why they have the particular concepts they do for describing their worlds. The main argument is that the locality of interactions yields the stability of worlds in which objects are well localized. The small value of the Planck constant allows macroscopic objects to be well localized for a considerable period of time. Thus, such worlds (corresponding to quantum states \left|\Psi_i\right\rangle) can maintain their macroscopic description long enough to be perceived by sentient beings. By contrast, a "world" with macroscopic objects in a superposition of macroscopically distinguishable states (corresponding to a quantum state \frac{1}{\sqrt{2}}\left(\left|\Psi_1\right\rangle + \left|\Psi_2\right\rangle\right)) evolves during an extremely short time, much shorter than the perception time of any feasible sentient being, into a mixture with the other "world" \frac{1}{\sqrt{2}}\left(\left|\Psi_1\right\rangle - \left|\Psi_2\right\rangle\right) (see Zurek 1998). This is a good argument why sentient beings perceive localized objects and not superpositions, but one cannot rely on the decoherence argument alone in order to single out the proper basis. (See some technical difficulties in Barvinsky and Kamenshchik 1995.) The fact that we can perceive only well localized objects in definite macroscopic states might not be just a physics issue: chemistry, biology, and even psychology might be needed to account for our evolution. See various attempts to construct a theory of the evolution of sentient beings based on the MWI or its variants in Albert 1992, Chalmers 1996, Deutsch 1996, Donald 1990, Gell-Mann and Hartle 1990, Lehner 1997, Lockwood 1989, Page 2002, Penrose 1994, Saunders 1994, and Zeh 1981.

6.3 Derivation of the Probability Postulate from the Formalism of the MWI

Besides the question of the interpretation of the probability measure, which we have treated above, there is a separate issue about probabilities in the MWI, namely the claim that has sometimes been made, e.g. by De Witt 1970, that the probability postulate, i.e. the postulate that the probability measure is proportional to the measure of existence, can be derived from the formalism of the MWI. Several authors, e.g., Kent 1990, criticize the MWI on the grounds that this claim fails. As a matter of fact, the MWI has no advantage over other interpretations with regard to this issue. What is true instead is that one can derive the Probability Postulate from a weaker postulate according to which the probability is a function of the measure of existence. The derivation can be based on Gleason's 1957 theorem about the uniqueness of the probability measure.
Similar results can be achieved by the analysis of the frequency operator originated by Hartle 1968 and from more general arguments by Deutsch 1999. All these results can be derived in the framework of various interpretations, and thus the success or failure of these proofs cannot be an argument in favor of or against the MWI. The MWI, like all other interpretations, requires a probability postulate.

Another idea for obtaining a probability law out of the formalism is to state, by analogy to the frequency interpretation of classical probability, that the probability of an outcome is proportional to the number of worlds with this outcome. This proposal immediately yields predictions that are different from what we observe in experiments. Some authors, arguing that counting is the only sensible way to introduce probability, consider this to be a fatal difficulty for the MWI, e.g., Ballentine 1975. Graham 1973 suggested that the counting of worlds does yield correct probabilities if one takes into account the detailed splitting of the worlds in realistic experiments, but other authors have criticized the MWI because of the failure of Graham's claim. Weissman 1999 has proposed a modification of quantum theory with additional non-linear decoherence (and hence even more worlds than the standard MWI), which can lead asymptotically to worlds of equal mean measure for different outcomes. Although this, like other MWIs, avoids random processes, the price in the complication of the mathematical theory seems to be too high for the simplification it offers in explaining probability. I believe that assigning equal probability to every world is unjustified. The formalism of quantum theory includes different amplitudes for the quantum states corresponding to different worlds. It is a positive feature of the theory that the differences in the mathematical descriptions of worlds (different absolute values of amplitudes) are manifest in our experience. See Saunders 1998 for a detailed analysis of this issue.

From the weak probability postulate (the probability is a function of the measure of existence) it follows that, in case all the worlds in which a particular experiment took place have equal measures of existence, the probability of an outcome is proportional to the number of worlds with this outcome. If the measures of existence of these worlds are not equal, the experimenters in all the worlds can perform additional auxiliary measurements of some variables such that all the new worlds will have equal measures of existence. The experimenters should be completely indifferent to the results of these auxiliary measurements: their only purpose is to split the worlds into "equal-weight" worlds. This procedure reconstructs the standard quantum probability rule from the world-counting approach; see Deutsch 1999 for details.
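A toy calculation (using the 1/3 vs. 2/3 example of Section 4; the splitting procedure shown is schematic, not a description of any concrete experiment) illustrates how the auxiliary measurements reduce the Born rule to world counting:

    from fractions import Fraction

    # Worlds after the experiment of Section 4: measure 1/3 for A, 2/3 for B.
    worlds = [("A", Fraction(1, 3)), ("B", Fraction(2, 3))]

    def split_to_equal_weights(worlds, unit):
        # Auxiliary measurements split each world into sub-worlds of equal
        # measure `unit` (this toy assumes each measure is a multiple of `unit`).
        equal = []
        for outcome, measure in worlds:
            n = int(measure / unit)
            equal.extend([(outcome, unit)] * n)
        return equal

    equal_worlds = split_to_equal_weights(worlds, Fraction(1, 3))

    # Plain counting of the equal-weight worlds now reproduces the Born rule:
    for outcome in ("A", "B"):
        count = sum(1 for o, _ in equal_worlds if o == outcome)
        print(outcome, Fraction(count, len(equal_worlds)))   # A 1/3, B 2/3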
6.4 Social Behavior of a Believer in the MWI

There are claims that a believer in the MWI will behave in an irrational way. One claim is based on the naive argument described in the previous section: a believer who assigns equal probabilities to all different worlds will place equal bets on the outcomes of quantum experiments that have unequal probabilities. Another claim, recently discussed by Lewis 2000, is related to the strategy of a believer in the MWI who is offered the chance to play a quantum Russian roulette game. The argument is that I, who would not accept an offer to play classical Russian roulette, should agree to play the roulette any number of times if the triggering occurs according to the outcome of a quantum experiment. Indeed, at the end there will be one world in which Lev is a multi-millionaire, and in all other worlds there will be no Lev Vaidman alive. Thus, in the future, Lev will be a rich and presumably happy man. However, adopting the Probability Postulate leads all believers in the MWI to behave according to the following principle:

Behavior Principle: We care about all our successive worlds in proportion to their measures of existence.

With this principle our behavior will be similar to the behavior of a believer in the collapse theory who cares about possible future worlds according to the probability of their occurrence. I should not agree to play quantum Russian roulette because the measure of existence of the worlds with Lev dead will be much larger than the measure of existence of the worlds with rich Lev alive.

7. Why the MWI?

The MWI exhibits some kind of nonlocality: "world" is a nonlocal concept. But it avoids action at a distance and, therefore, it is not in conflict with relativistic quantum mechanics; see discussions of nonlocality in Vaidman 1994, Tipler 2000, Bacciagaluppi 2002, and Hemmo and Pitowsky 2001. Although the issues of (non)locality are most transparent in the Schrödinger representation, an additional insight can be gained through recent analysis in the framework of the Heisenberg representation; see Deutsch and Hayden 2000, Rubin 2001, and Deutsch 2001. The most celebrated example of nonlocality was given by Bell 1964 in the context of the Einstein-Podolsky-Rosen argument. However, in the framework of the MWI, Bell's argument cannot get off the ground because it requires a predetermined single outcome of a quantum experiment.

Another example of a kind of action at a distance in a quantum theory with collapse is the interaction-free measurement of Elitzur and Vaidman 1993. Consider a super-sensitive bomb which explodes when any single particle arrives at its location. It seems that it is impossible to see this bomb, because any photon that arrives at the location of the bomb will cause an explosion. Nevertheless, using the Elitzur and Vaidman method, it is possible, at least sometimes, to find the location of the bomb without exploding it. In the case of success, a paradoxical situation arises: we obtain information about some region of space without any particle being there. Indeed, we know that no particle was in the region of the bomb because there was no explosion. The paradox disappears in the framework of the MWI. The situation is paradoxical because it contradicts physical intuition: the bomb causes an observable change in a remote region without sending or reflecting any particle. Physics is the theory of the Universe, and therefore the paradox is real only if this story is correct in the whole physical Universe. But it is not. There was no photon in the region of the bomb in a particular world, but there are other worlds in which a photon reaches the bomb and causes it to explode. Since the Universe incorporates all the worlds, it is not true that in the Universe no photon arrived at the location of the bomb. It is not surprising that our physical intuition leads to a paradox when we limit ourselves to a particular world: physical laws are applicable only when applied to the physical universe that incorporates all of the worlds.
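The probabilities involved are easy to compute in a Mach-Zehnder realization of the bomb test (a schematic sketch; the 50-50 beam-splitter convention used here is an illustrative assumption, not taken from the original papers):

    import numpy as np

    BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50-50 beam splitter
    photon = np.array([1.0, 0.0])                    # photon enters arm 0

    # Without the bomb, the two beam splitters interfere: every photon
    # exits at port 1, and port 0 is completely dark.
    print(np.abs(BS @ BS @ photon) ** 2)             # [0., 1.]

    # With the bomb in arm 1, the bomb acts as a which-path measurement.
    mid = BS @ photon
    p_explosion = np.abs(mid[1]) ** 2                # 1/2: photon took arm 1
    survivor = np.array([mid[0], 0.0])               # unnormalized no-explosion branch
    out = BS @ survivor
    p_dark_click = np.abs(out[0]) ** 2               # 1/4: click at the dark port
    print(p_explosion, p_dark_click)

A click at the otherwise dark port (probability 1/4) reveals the bomb even though, in that world, no photon was ever at its location.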
The MWI is not the most accepted interpretation of quantum theory among physicists, but it is becoming increasingly popular (see Tegmark 1998). The strongest proponents of the MWI can be found in the communities of quantum cosmology and quantum computing. In quantum cosmology the MWI makes it possible to discuss the whole Universe, thereby avoiding the difficulty of the standard interpretation, which requires an external observer. In quantum computing, the key issue is the parallel processing performed on the same computer; this is very similar to the basic picture of the MWI.[9]

Many physicists and philosophers believe that the most serious weakness of the MWI (and especially of its version presented here) is that it "gives up trying to explain things". In the words of Steane 1999, "It is no use to say that the [Schrödinger] cat is 'really' both alive and dead when every experimental test yields unambiguously the result that the cat is either alive or dead." (Steane dismisses the interference experiment which could reveal the presence of the superposition as unfeasible.) Indeed, if there is nothing in physics except the wave-function of the Universe, evolving according to the Schrödinger equation, then there are questions whose answers require help from other sciences. However, the advantage of the MWI is that it allows us to view quantum mechanics as a complete and consistent physical theory which agrees with all experimental results obtained to date.

Related Entries: quantum mechanics | quantum mechanics: Everett's relative-state formulation of | quantum theory: measurement in

I am thankful to everybody who has borne with me through endless discussions of the MWI (in this and other worlds) and, in particular, to Yakir Aharonov, David Albert, Guido Bacciagaluppi, Jeremy Butterfield, Rob Clifton, David Deutsch, Simon Saunders, Philip Pearle, and David Wallace. I acknowledge partial support by grant 62/01 of the Israel Science Foundation and the EPSRC grant GR/N33058.

Copyright © 2002 by Lev Vaidman
First published: March 24, 2002
Quantum quartic oscillators with two coupled degrees of freedom

The numerical integration of the non-stationary two-dimensional Schrödinger equation is carried out. The dynamical properties (dynamical averages, frequency spectra, uncertainty relations, autocorrelators) of two quantum oscillators coupled by the Pullen-Edmonds potential are studied. As a result of the numerical simulations, it was established that the quantum system is sensitive to small changes in the Hamiltonian caused by the Pullen-Edmonds coupling potential. In the weak-coupling regime, high-frequency oscillations are generated, and the number of spectral components increases as the coupling is amplified.
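For reference, the Pullen-Edmonds potential couples two harmonic oscillators through a quartic term, V(x,y) = \frac{1}{2}(x^2+y^2) + \alpha x^2 y^2. Below is a minimal split-operator sketch of this kind of numerical integration (the grid size, coupling \alpha, initial packet, and time step are illustrative choices, not the parameters used in the paper):

    import numpy as np

    n, L, alpha, dt = 128, 10.0, 0.05, 0.01

    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    KX, KY = np.meshgrid(k, k, indexing="ij")

    V = 0.5 * (X**2 + Y**2) + alpha * X**2 * Y**2    # Pullen-Edmonds potential
    T = 0.5 * (KX**2 + KY**2)                        # kinetic energy in k-space

    # Displaced Gaussian wave packet, normalized on the grid (hbar = m = 1).
    psi = np.exp(-((X - 1.0) ** 2 + Y**2))
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n) ** 2)

    def step(psi):
        # Second-order split-operator step: half potential kick,
        # full kinetic drift in Fourier space, half potential kick.
        psi = np.exp(-0.5j * dt * V) * psi
        psi = np.fft.ifft2(np.exp(-1j * dt * T) * np.fft.fft2(psi))
        return np.exp(-0.5j * dt * V) * psi

    for _ in range(1000):
        psi = step(psi)

    # A dynamical average, <x>(t), of the kind studied in the paper.
    print(np.sum(X * np.abs(psi) ** 2) * (L / n) ** 2)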
God Is Not Dead

By Amit Goswami, Ph.D.

Apart from the simplistic and very diverse pictures of God that all religions give for popular appeal, at the esoteric core all religions agree that, apart from material interactions, there is another agent of causation in the world; and this is what they call God. Religions also agree that apart from the material level of reality, which we experience outside of us, there are other subtle levels of reality that we experience when we look inside. Religions also agree about a third very important aspect of divinity: We must try to manifest divine qualities—love, beauty, justice, truth, and good, for example—in our lives.

When, not so long ago, the philosopher Nietzsche declared, "God is dead," he was lamenting that the popular religious renditions of God are so simplistic that they can no longer guide people to move toward Godliness. This is true. Yet to this day, many scientists beat a dead horse by trying to disprove the popular pictures of God. This is beating around the bush and not at all useful. The real questions, and these are all questions of science, are: (1) Is there causation in the world apart from material interactions? (2) Are there subtle non-material levels of reality? And (3) is there any scientific justification of ethics, which compels us to pursue Godliness in our lives?

Most scientists today squarely say "No" in answer to these questions, because the questions contradict their metaphysics of scientific materialism, according to which there is only matter and its interactions; nothing else is real. In my book, God Is Not Dead, I also give answers, and they are all in the affirmative. Yes, there is God. Because (1) there is an agent of causation apart from material interaction; (2) what we experience internally are subtle non-material worlds; and (3) not only should we pursue Godliness in our lives, our evolution is taking us toward better and better manifestations of Godliness. In my book I back up these assertions with both scientific theory and empirical evidence.

Believe it or not, one of the most well known mathematical equations of science proves the existence of God if examined within the new context that we have set. It is called the Schrödinger equation, named after its discoverer, and it is the fundamental equation of quantum physics. Physicists apply this equation to the study of many objects and many events; under these circumstances, the equation predicts (statistically) deterministic results, and so most physicists miss God in the equation. The right question to ask is how this equation applies to a single object in a single event, as it must. You see, the problem is that the Schrödinger equation depicts objects not as "determined things" of Newtonian vintage but as waves of possibility for consciousness to choose from. How do we know this? Because whenever we look at a quantum object, an electron for example, we don't see possibilities—an electron in different places all at once—but an electron in one actual place, an actuality. So we must be choosing where the electron actualizes!

Let's go deeper. If we (our consciousness) are able to convert possibility into actuality, our consciousness cannot be a brain product or any other material object, since all material objects obey quantum physics and must be possibilities only. So consciousness as a nonmaterial agent of choice is a causal agent! Have we discovered God? No, say the scientists, and they are right up to a point.
The above raises the paradox of dualism if we think of the choosing consciousness, or God, as an agent separate from us, as popular religions do. To see this, ask the simple question: how does a nonmaterial God interact with the material world? It can't without a mediator. But a mediator signal requires energy. And the energy of the physical world is a constant; energy never passes from the material world to a God world and vice versa.

In the esoteric core, the masters of the various religions understood the situation perfectly. God is not separate from the material world, they declare at various places, times, and cultures. God is both transcendent and immanent. But what do they mean? Until recently, scientists and ordinary people alike have not been able to penetrate the wisdom of these words. So scientists ignore them, and ordinary people go on thinking about God as a dual agent of causation.

Proper understanding of quantum physics resolves the logjam. The quantum concept that is truly radical and that is changing our world view is called nonlocality—signal-less interaction. Matter consists of waves of possibility within consciousness, which is the ground of all being. Consciousness chooses one facet out of the multifaceted quantum possibility wave and converts possibility into the actuality of that chosen facet, but there is no dualism because consciousness does the choosing nonlocally, without signal. It is choosing from itself. Is it like Waiting for Godot: we have been looking for God and it is us? It is each of us who chooses his or her own reality.

Alas! This, too, is too simplistic, which is why your wishful thinking about manifesting a BMW for yourself does not usually work. There is a paradox here. Suppose you and your friend are approaching, from perpendicular directions, a "quantum" traffic light with two possible facets, red and green. Being busy people, you both want green, but who gets to choose? If you both get to choose, obviously there would be pandemonium. Or perhaps you are like the Hollywood woman who meets a friend on Sunset Boulevard and takes her to a coffee house to "catch up." Over coffee, she starts talking and after an hour says, "Oh my God, I have been talking about myself all this time. Let's now talk about you. What do you think of me?" To this woman, the only consciousness in the world is hers, and she is always the chooser. Such people are called solipsistic. But solipsism is obviously not the answer to our paradox. It has merely shifted the question, "Who gets to choose?" to, "Who gets to be the solipsistic head honcho of the situation?" No more than that. The paradox remains.

The authentic solution is this: The choosing nonlocal consciousness is not us in our ordinary ego, but a "transcendent" consciousness that is both us and beyond us, both transcendent and immanent. Makes sense, doesn't it? And more. This nonlocality of our choosing consciousness is an experimentally verifiable idea. In fact, this nonlocality has been verified by five different experiments by five different groups at five different laboratories, all showing the direct transfer (without signals) of electrical activity from one subject's brain to another when the subjects are correlated through meditative intention. This is reported in God Is Not Dead. So the scientific evidence for God and God's causal efficacy is already here. The evidence is definitive because nonlocality can never be simulated by material interactions, which always occur via the intermediary of signals.
This is not the only evidence. God's choice is creative and manifests in our creative experience through discontinuous quantum leaps, akin to an electron's leap from one atomic orbit to another without going through the intervening space. Creative experiences are subjective, you say. Not when such leaps heal a person from a life-threatening disease, a phenomenon called quantum healing for which plenty of evidence exists. Objective evidence for such creative quantum leaps also shows up in biological evolution and explains the puzzling phenomenon of the fossil gaps (or missing links) which Darwinism cannot explain.

How about subtle bodies? If matter consists of waves of possibility for consciousness to choose from, and conscious choice leads to our experience of sensing, then it makes sense to posit that our internal experiences are also due to conscious choice from subtle domains of quantum possibilities. As the psychologist Carl Jung first codified, we have four kinds of experiences: sensing, feeling, thinking, and intuiting. In this way there must be four different compartments of conscious possibilities: the physical we sense, the vital energies we feel, the mental meaning we think, and the supramental archetypes—love etc.—we intuit. The empirical evidence for subtle bodies abounds in health and healing, in dreams, in the phenomenon of biological morphogenesis, and in survival after death and reincarnation, just to name a few.

Again, scientific evidence for God is already here, so what should we do about it? For one thing, we should take the religious masters seriously and pay attention to ethics. The values—love, beauty, justice, truth, and goodness—that ethics talks about are what we intuit. And plenty of evidence exists (for example, in the phenomena of dreams, creativity, and reincarnation) for the importance and validity of ethics, as discussed in God Is Not Dead.

And more. When we recognize that Darwin's theory of continuous evolution is incomplete and complement it with creative discontinuous quantum leaps, we discover an astounding thing. Biological evolution's direction from simple to complex organisms can be explained. We evolve from simplicity to complexity to be able to manifest our experiences of the subtle domains of possibilities better and better. In particular, right now we are evolving toward manifesting better and better Godly qualities. Someday, said the Jesuit philosopher Teilhard de Chardin, "we shall harness . . . the energies of love." Teilhard was right. That day is not very far away.

In his private life, Goswami is a practitioner of spirituality and transformation. His forthcoming book, The Everything Answer Book: How Quantum Science Explains Love, Death and the Meaning of Life, will be published by Hampton Roads Publishing Company in April 2017.
Monday, July 29, 2019

Now, the package has our return address on it, so you would think that in a situation like that Fedex would simply return the package to us. But you would be wrong. Very, very wrong.

Fedex has an on-line tracking system, so we knew that the package had not been delivered (though at the time we didn't know why). We kept an eye on it for about a week waiting for it to be either delivered or returned. After a week, we called Fedex to ask what was going on. We were told that they had been unable to obtain the required signature. We asked why the package had not been returned. It turns out that return shipping is not included in the original price. Because we had paid cash for the original shipment, they could not return it: they didn't have a credit card on file.

OK, we said, that's kind of annoying to find out at this stage in the game, but no problem, we'll give you a credit card. Oh no, they said, that won't work. We can't accept a credit card payment at this point, you need a Fedex account number. Why can't they accept a credit card now? No one knew.

So we opened a Fedex account. Problem solved, no?

It turns out that there is more than one kind of Fedex account, and we had opened the wrong kind. So we opened another Fedex account.

By this point a month had gone by, and the package had been shipped from Colorado to Mississippi! Why they didn't just send it back to California I have no idea, and neither did they.

By now we have spoken to no fewer than ten different people at Fedex over a period of two months. No one can tell us why the package has not been returned to us. At one point we were told it was going through some kind of security screening (there is nothing in the package but paper). As if the situation were not already ironic enough, the latest delay (we are told) has something to do with Fedex wanting to give us a refund for the original shipping charge in order to make up for the inconvenience, though, of course, they are still going to charge us for the return shipment.

If by some bizarre chance anyone at Fedex upper management is seeing this, you have a very serious problem in your processes. This is beyond unacceptable. For an organization whose entire business model is based on getting things to their destinations on time, you should be hanging your head in shame that it has taken you over two months to return a package from Colorado to California with no end in sight. I really don't like to engage in public shaming, but if ever there was a situation that warranted it, this is it.

Monday, July 08, 2019

The Trouble with Many Worlds

Ten years ago I wrote an essay entitled "The Trouble with Shadow Photons" describing a problem with the dramatic narrative of what is commonly called the "many-worlds" interpretation of quantum mechanics (but which was originally and IMHO more appropriately called the "relative state" interpretation) as presented by David Deutsch in his (otherwise excellent) book, "The Fabric of Reality." At the end of that essay I noted in an update:

Deutsch just referred me to this paper which is the more formal formulation of his multiple-worlds theory. I must confess that on a cursory read it seems to be a compelling argument. So I may have to rethink this whole thing.

That paper is entitled "The Structure of the Multiverse" and its abstract is delightfully succinct. I quote it here in its entirety:

The structure of the multiverse is determined by information flow.
Those of you who have been following my quantum adventures know that I am a big fan of information theory, so I was well primed to resonate with Deutsch's theory. And I did resonate with it (and still do). Deutsch's argument was compelling (and still is). Nonetheless, I never wrote a followup, for two reasons. First, something was still bothering me about the argument, though I couldn't really put my finger on it. Yes, Deutsch's argument was compelling, but on the other hand, so was my argument (at least to me). The difference seemed to me (as many things in QM interpretations do) a matter of taste, so it seemed pointless to elaborate. And second, I didn't think anyone reading this blog would really care. So I tabled it.

But last May the comment thread in the original post was awakened from its slumber by a fellow named Elliot Temple. The subsequent exchange led me to this paper, of which I was previously unaware. Its abstract claims, in essence, that the probabilistic predictions of quantum theory can be derived from the theory's remaining, non-probabilistic axioms.

The "special probabilistic axiom" to which Deutsch refers is called the Born rule (named after Max Born). The "remaining, non-probabilistic axioms of quantum theory" comprise mainly the Schrödinger equation. (To condense things a bit I'll occasionally refer to these as the BR and the SE.)

The process of applying quantum mechanics to real-world situations consists of two steps: first you solve the SE. The result is something called a "wave function". Then you apply the BR to the wave function and what pops out is a set of probabilities for the various possible results of the experiment you're doing. Following this procedure yields astonishingly accurate results: no experiment has ever been done whose outcome is at odds with its predictions. The details don't matter. What matters is: there's this procedure. It yields incredibly accurate predictions. It consists of two parts. One part is deterministic, the other part isn't.
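Here's what the two-step procedure looks like for the simplest possible system, a single qubit. (This is a toy sketch of mine; the Hamiltonian is an arbitrary choice.)

    import numpy as np
    from scipy.linalg import expm

    # Step 1: solve the SE. For a time-independent Hamiltonian H the
    # solution is psi(t) = exp(-iHt) psi(0) (hbar = 1). Deterministic.
    H = np.array([[1.0, 0.5], [0.5, -1.0]])   # arbitrary 2x2 Hamiltonian
    psi0 = np.array([1.0, 0.0])               # start in state |0>
    psi_t = expm(-1j * H * 0.7) @ psi0        # the wave function at t = 0.7

    # Step 2: apply the BR. What pops out is probabilities, not outcomes.
    probs = np.abs(psi_t) ** 2
    print(probs, probs.sum())                 # two probabilities summing to 1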
This naturally raises the question of why this procedure works as well as it does. In particular, why does the procedure have two parts? And why does it only yield probabilities? Answering these questions is the business of "interpretations" of quantum mechanics. Wikipedia lists almost twenty of these. The fact that after nearly 100 years no consensus has emerged as to which one is correct gives you some idea of the thorniness of this problem.

So the paper that Elliot referred me to was potentially a Big Deal. It is hard to overstate the magnitude of the breakthrough this would be. It would show that there are not in fact two disparate parts to the theory; there is only one: the SE. Such a unification would be of the same order of magnitude as the discovery of relativity. It would be headline news. David Deutsch would be a Nobel Laureate, on a par with Newton and Einstein. But the fact that there is still an active debate over the issue shows that Deutsch's claim has not been universally accepted. So there would seem to be only two possibilities: either Deutsch is wrong, or he's right and the rest of the physics community has failed to recognize it.

Normally when a claim of a major result like this fails to be recognized by the community it's because the claim is wrong. In fact, more than 99% of the time it's because the claimant is a crackpot. But Deutsch is no crackpot. He's a foundational figure in quantum computing. He discovered the first quantum algorithm. Even if he got something wrong he very likely got it wrong in a very interesting way.

So I decided to do a deep dive into this. It led me down quite the little rabbit hole. There are a number of published critiques of Deutsch's work, and counter-critiques critiquing the critiques, and counter-counter-critiques. They're all quite technical. It took me a couple of months of steady effort to sort it all out, and that only with the kind help of a couple of people who understand all this stuff much better than I do. (Many thanks to Tim Maudlin, David Wallace, and especially the patient, knowledgeable, and splendidly-pseudonymed /u/ididnoteatyourcat on Reddit.)

In the rest of this post I'm going to try to describe the result of going down that rabbit hole in a way that is accessible to what I think is the majority of the audience of this blog. The TL;DR is that Deutsch's argument depends on at least one assumption that is open to legitimate doubt. Figuring out what that assumption is isn't easy, and whether or not the assumption is actually untrue is arguable. That's the reason that Deutsch hasn't won his Nobel yet.

I have to start with a review of the rhetoric of the many-worlds interpretation of quantum mechanics (MWI). The rhetoric says that when you do a quantum measurement it is simply not the case that it has a single outcome. Instead, what happens is that the universe "splits" into multiple parts when a measurement is performed, and so all of the possible outcomes of an experiment actually happen as a matter of physical fact. The reason you only perceive a single outcome is that you yourself split into multiple copies. Each copy of you perceives a single outcome, but the sum total of all the "you's" that have been created collectively perceive all the possible outcomes.

I used the word "rhetoric" above because, as we shall see, there is a disconnect between what I have just written and the math. To be fair to Deutsch, his rhetoric is different from what I have written above, and it more closely matches the math. Instead of "splitting", on Deutsch's view the universe "peels apart" (that's my terminology) in "waves of differentiation" (that is Deutsch's terminology) rather than "splitting" (that is everyone else's terminology), but this is a detail. The point is that at the end of a process that involves you doing a quantum measurement with N possible outcomes, there are, again in point of actual physical fact, N "copies" of you (Deutsch uses the word "doppelgänger").

Again, to be fair to Deutsch, he acknowledges that this is not quite correct:

Universes, histories, particles and their instances are not referred to by quantum theory at all – any more than are planets, and human beings and their lives and loves. Those are all approximate, emergent phenomena in the multiverse. [The Beginning of Infinity, p292, emphasis added.]

All of the difficulty, it will turn out, hinges on the fidelity of the approximation. But let us ignore this for now and look at Deutsch's argument. Deutsch attempts to capture the idea of probability in a deterministic theory using game theory, that is, by looking at how a rational agent should act, applying a few reasonable-looking assumptions about the utility function, and showing that a rational agent operating under the MWI would act exactly as if they were using the Born rule. The argument is long and technical, but it can be summarized very simply.
[Note to nit-pickers: this simplified argument is in fact a straw man, because it is based on the assumption that branch counting is a legitimate rational strategy, which is actually false on the Deutsch-Wallace view.  But since the conclusion I am going to reach is the same as Deutsch's, I consider this legitimate rhetorical and literary license, because the target audience here is mainly non-technical.]

For simplicity, let's consider only the case of doing an experiment with two possible outcomes (let's call them A and B).  The game-theoretical setup is this: you are going to place a bet on either A or B and then do the experiment.  If the outcome matches your choice, you win $1; otherwise you lose $1.

If the experiment is set up in such a way that the quantum-mechanical odds of each outcome are the same (i.e. 50-50), then there is no conflict between the orthodox Born-rule-based approach and the MWI: in both cases, the agent has no reason to prefer betting on one outcome over the other.  The only difference is the rationale that each agent would offer: one would say, "The Born rule says the odds are even, so I don't care which I choose," and the other would say, "I am going to split into two, and one of me is going to experience one outcome (and win $1) and the other of me is going to experience the other outcome (and lose $1), and that will be the situation no matter whether I choose A or B, so I don't care which I choose."

[Aside: Deutsch goes through a much more complicated argument to prove this result, and then puts in a great deal more effort to extend it to an experiment with N possible outcomes, all of which have equal probabilities under the Born rule.  He has to do this because my argument is based on a tacit assumption that he rejects.  We'll get to that.  My goal at this point is not to reproduce Deutsch's reasoning, only to convince you that this intermediate result is plausibly true.]

Deutsch's argument is based on an assumption called branching indifference.  Deutsch himself did not make this explicit in his original paper; it was made explicit by David Wallace in a follow-up paper.  Branching indifference says that a rational agent doesn't care about branching per se.  In other words, if an agent does a quantum experiment that doesn't have a wager associated with it, then the agent has no reason to care whether or not the experiment is performed.

The reasoning then proceeds as follows: suppose the experiment had instead been set up with quantum-mechanical weights of 2/3 for A and 1/3 for B, and that the many-worlder who bet on A and ends up on the A branch does a follow-up experiment with two outcomes and even odds, but without placing a bet.  Now there are three copies of him, two of which have won $1 and one of which has lost $1.  But (and this is the crucial point) all of these copies are now on branches that have equal probabilities.  Because of branching indifference, this situation is effectively equivalent to one where there was a single experiment with three outcomes, each with equal probability, but two of which result in winning $1, and where the agent had the opportunity to place the bet on both winning branches.

So that sounds like a reasonable argument.  In fact, it is a correct argument, i.e. the conclusions really do follow from the premises.

But are the premises reasonable?  Well, many many-worlders think so.  But I don't.  In particular, I cast a very jaundiced eye on branching indifference.  There are two reasons for this.
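Before we get to those reasons, here is the equal-weight bookkeeping from the argument above in toy numerical form.  This is my own illustration, with made-up numbers, and note that it takes for granted that branch weights are given by |amplitude|², which is, of course, part of what is in dispute.

```python
import numpy as np

# Two-outcome experiment with quantum weights 2/3 (A) and 1/3 (B);
# the agent bets $1 on A.
amp_A, amp_B = np.sqrt(2/3), np.sqrt(1/3)
eu_born = abs(amp_A)**2 * (+1) + abs(amp_B)**2 * (-1)  # Born-rule expectation

# The agent on the A branch now runs an even-odds follow-up experiment
# with no bet attached, peeling the A branch into two equal sub-branches.
branch_amps    = [amp_A / np.sqrt(2), amp_A / np.sqrt(2), amp_B]
branch_payoffs = [+1, +1, -1]
weights = [abs(a)**2 for a in branch_amps]             # all equal: 1/3 each

# Equal-weight average over the three now-equal branches:
eu_branches = sum(w * p for w, p in zip(weights, branch_payoffs))

print(weights)               # [0.333..., 0.333..., 0.333...]
print(eu_born, eu_branches)  # both 1/3: the two accountings agree
```

The point of the sketch is only that, once the follow-up experiment has run, the three branches carry equal weight, so branching indifference lets the two accountings be identified.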
But first, let's look at Wallace's argument for why branching indifference is reasonable:

"Solution continuity and branching indifference — and indeed problem continuity — can be understood in the same way, in terms of the limitations of any physically realisable agent. Any discontinuous preference order would require an agent to make arbitrarily precise distinctions between different acts, something which is not physically possible. Any preference order which could not be extended to allow for arbitrarily small changes in the acts being considered would have the same requirement. And a preference order which is not indifferent to branching per se would in practice be impossible to act on: branching is uncontrollable and ever-present in an Everettian universe."

If that didn't make sense to you, don't worry, I'll explain it.  But first I want to take a brief diversion.  Trust me, I'll come back to this.

Remember how I said earlier that my simplified argument for Deutsch's conclusion was based on a premise that Deutsch would reject?  That premise is called branch counting.  It is the idea that the number of copies of me that exist matters.  This seems like an odd premise to dispute.  How could it possibly not matter if there is one of me winning $1 or a million of me each winning $1?  The latter situation might not have a utility that is a million times higher than the former, but if I'm supposed to care about "copies of me" at all, how can it not matter how many there are?

Here is Wallace's answer:

"Why it is irrational: The first thing to note about branch counting is that it can't actually be motivated or even defined given the structure of quantum mechanics. There is no such thing as 'branch count': as I noted earlier, the branching structure emergent from unitary quantum mechanics does not provide us with a well-defined notion of how many branches there are."

Wait, what???  There is no "well defined notion of how many branches there are"?

No, there isn't.  Wallace reiterates this over and over:

"...the precise fineness of the grain of the decomposition is underspecified..."

"There is no 'real' branching structure beyond a certain fineness of grain..."

"...agents branch all the time (trillions of times per second at least, though really any count is arbitrary)..."

"[In] the actual physics there is no such thing as a well-defined branch number."

Remember how earlier I told you that there was a disconnect between the rhetoric and the math?  That the idea of "splitting" or "peeling apart" or whatever you want to call it was an approximation?  Well, this is where the rubber meets the road on that approximation.  Branching indifference is necessary because branching is not a well-defined concept.

So what about the rhetoric of MWI, that when you do an experiment with N possible outcomes you split/peel-apart/whatever-you-want-to-call-it into N copies of yourself?  That is an approximation to the truth, but like classical reality itself, it is not the truth.  The actual truth is much more complex and subtle, and it hinges on what the word "you" means.

If by "you" you mean your body, which is to say, all the atoms that make up your arms and legs and eyes and brain etc., then it's true that there is no such thing as a well-defined branch count.
This is because every atom — indeed, every electron and every other sub-atomic particle — in your body is constantly "splitting" by virtue of its interactions with other nearby particles, including photons that are emitted by the sun and your smart phone and all the other objects that surround you.  These "splits" propagate out at the speed of light and create what Deutsch calls "waves of differentiation", what I call the "peeling apart" of different "worlds".  (If you are a regular reader you will have heard me refer to this phenomenon as creating "large systems of mutually entangled particles".  Same thing.)  This process is a continuous one.  There is never a well-defined "point in time" where the entire universe splits into two, and no point in time where you (meaning your body) splits into two.  There is a constant and continuous process of "peeling apart".  Actually many, many (many!) peelings-apart, all of which are happening continuously.  To call it mind-boggling would be quite the understatement.

On the other hand, if by "you" you mean "the entity that has subjective experiences and makes decisions based on those experiences", then things are much less clear.  I don't know about you, but my subjective experience is that there is exactly one of me at all times.  I consider this aspect of my subjective experience to be an essential component of what it means to be me.  I might even go so far as to say that my subjective experience of being a single unified whole defines what it is to be "me".  So the only way that there could be a "copy of me" is if there is another entity whose subjective experience is bound to the same past as my own, but whose present subjective experience is somehow different from my own, e.g. my experiment came out A and theirs came out B.  An entity whose subjective experience is indistinguishable from my own isn't a copy of me, it's me.

The mathematical account of universes "peeling apart" has nothing to say about when the peeling process has progressed far enough to be considered a fully-fledged universe in its own right, and so it has nothing to say about when I have "peeled apart" sufficiently to be considered a copy.  That is why branch count is not a coherent concept.

And yet, if I am going to apply the notion of branching to myself (which is to say, to the entity having the subjective experience of being a coherent and unified whole) then branch count must be a coherent concept.  It might not be possible to know the branch count, but at any point in time, whatever underlying physical processes are really going on, they either qualify as me branching or they don't.  There is no middle ground.

So we are faced with this stark choice: we can either believe the math, or we can believe our subjective experiences, but we can't do both, at least not at the same time.  We can take a "God's-eye view" and look at the universal wave function, or we can take a "mortal's-eye view" and see our unified subjective experience as real.  But we can't do both simultaneously.  It's like a Necker cube.  You can see it one way or the other, but not both at the same time.

Interestingly, this is all predicted by the math!  In fact, the math tells us why there is this dichotomy.  Subjective experience is necessarily classical because it requires copying information.  In order to be conscious, you have to be conscious of something.  In order to make decisions, you have to obtain information about your environment and take actions that affect your environment.
All of these things require copying information into and out of your brain.  But quantum information cannot be copied.  Only classical information can be copied.  And the only way to create copyable classical information out of a quantum system is to ignore part of the quantum system.  Classical behavior emerges from quantum systems (mathematically) when you trace over parts of the system.  Specifically, it emerges when you consider a subset of an entangled system in isolation from the rest of the system.  When you do that, the mathematical description of the system switches from being a pure state to being a mixed state.  Nothing physical has changed.  It's purely a question of the point of view you choose to take.  You can either look at the whole system (in which case you see quantum behavior) or you can look at part of the system (in which case you see classical behavior), but you can't do both at the same time.

As a practical matter, in our day-to-day lives we have no choice but to "look" only at "part" of the system, because "the system" is the entire universe.  (In fact, it's an interesting puzzle how we can observe quantum behavior at all.  Every photon has to be emitted by, and hence be entangled with, something.  So why does the two-slit experiment work?)  We can take a "God's-eye view" only in the abstract.  We can never actually know the true state of the universe.  And, in fact, neither can God.

Classical reality is what you get when you slice-and-dice the wave function in a particular way.  It turns out that there is more than one way to do the slicing-and-dicing, and so if you take a God's-eye view you get more than one classical universe.  An arbitrary number, in fact, because the slicing-and-dicing is somewhat arbitrary.  (It is only "somewhat" arbitrary because there are only certain ways to do the slicing-and-dicing that yield coherent classical universes.  But even with that constraint there are an infinite number of possibilities, hence "no well-defined branch count".)  But the only way you can be you, the only way to become aware of your own existence, indeed the only way to become aware of anything, is to descend from Olympus, ignore parts of the wave function, and become classical.  That leaves open the question of which parts to ignore.  To me, the answer is obvious: I ignore all of it except the parts that measurably affect the "branch" that "I" am on.  To me, that is the only possible rational choice.
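A concrete footnote on the trace-over claim above.  This is the standard textbook computation (nothing specific to Deutsch or Wallace): take a Bell pair, ignore one half of it, and watch the description of the other half switch from a pure state to a maximally mixed one.

```python
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho_AB = np.outer(bell, bell.conj())         # pure state of the whole system

# "Ignore" qubit B: partial trace over its indices.
rho_A = rho_AB.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(np.allclose(rho_AB @ rho_AB, rho_AB))  # True: the whole is pure
print(rho_A)                                 # 0.5 * identity: maximally mixed
print(np.allclose(rho_A @ rho_A, rho_A))     # False: the part is mixed
```

Nothing happened to the Bell pair between the first print and the last; all that changed is how much of the system we chose to look at.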
Splitting Methods (Winter Semester 2016/17)

Lecture: Tuesday 15:45-17:15, SR 3.61
Problem class: Thursday 15:45-17:15, SR 2.67

Lecturer: JProf. Dr. Katharina Schratz (office hours by appointment, Kollegiengebäude Mathematik, 20.30)
Problem classes: Dr. Patrick Krämer (office hours by appointment, Room 3.025, Kollegiengebäude Mathematik, 20.30; Email: patrick.kraemer3@kit.edu)

Description of the lecture: Due to their computational advantage, splitting methods are nowadays omnipresent in scientific computing. The idea is to break down a complicated problem into a series of simpler subproblems. In the context of time integration, a common approach is to split up the right-hand side and to decompose the given evolution equation into a sequence of subproblems, which in many situations can be solved far more efficiently or even exactly. The exact solution of the full problem is then approximated by the composition of the flows associated with the simpler subproblems.

First we will investigate the error behavior of splitting methods for ordinary differential equations. The analysis will be based on the Baker-Campbell-Hausdorff formula and the calculus of Lie derivatives. In particular, we will discuss splitting methods applied to Hamiltonian systems of ODEs and analyze to what extent geometric properties (such as the energy of the system) are preserved by this numerical approach. We will then analyze splitting approaches for certain partial differential equations, such as linear Schrödinger equations, Schrödinger equations with a polynomial nonlinearity, as well as the so-called dimension splitting for parabolic evolution equations. In the exercises we will deepen some theoretical results and carry out practical implementations.

Prerequisites: One should be familiar with basic concepts of the numerical time integration of ODEs and PDEs and with functional analysis. A basic knowledge of the theory of semigroups is helpful.

References: Will be given in the lecture.
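As a flavor of the practical implementations mentioned above, here is a minimal sketch, illustrative only and not official course material, of a Strang splitting step for a 1D linear Schrödinger equation with a potential, i u_t = -u_xx + V(x) u, using the split-step Fourier method (the grid size, potential, and initial data below are arbitrary choices):

```python
import numpy as np

N, L, dt = 256, 2 * np.pi, 1e-3
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi    # angular wavenumbers
V = np.cos(x)                                  # a smooth sample potential
u = np.exp(-10 * (x - np.pi) ** 2).astype(complex)  # initial wave packet
u /= np.linalg.norm(u)

for _ in range(1000):
    # Strang order V/2 -> T -> V/2 is second-order accurate in dt.
    u *= np.exp(-0.5j * dt * V)                # half step: potential flow
    u = np.fft.ifft(np.exp(-1j * dt * k**2) * np.fft.fft(u))  # kinetic step,
    #   solved exactly in Fourier space
    u *= np.exp(-0.5j * dt * V)                # half step: potential flow

print(np.linalg.norm(u))   # ~1.0: each substep is unitary, so the norm is conserved
```

Each subflow is solved exactly (the kinetic part diagonalizes in Fourier space, the potential part pointwise), so the only error is the splitting error, which the BCH formula quantifies.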
In the multi-configurational time-dependent Hartree (MCTDH) method, the equation of motion is derived from the time-dependent Schrödinger equation by substituting a wavefunction ansatz expanded in a single-particle function (SPF) basis. The wavepacket dynamics is then obtained by solving a series of equations of motion in the SPF basis as follows:

$$ i\dot\phi^{(k)}=(1-\hat{P}^{(k)})(\boldsymbol{\rho}^{(k)})^{-1}\boldsymbol{H}^{(k)}\phi^{(k)}. $$

For the definition of the quantities, one could refer to the linked article. I wonder if someone knows how to derive this equation above from the time-dependent Schrödinger equation?

• Generally, a review paper won't include derivations, but it will likely reference a paper that did. Even if the whole derivation isn't there, it will likely be easier to answer "what is the missing step in this derivation" than "derive this equation". – Tyberius Jul 6, 2020 at 14:24
• I answered the question. I basically followed Tyberius's advice, which was to look at the original MCTDH paper from 1990. Jul 6, 2020 at 18:43

2 Answers

I will outline the way it was derived in the original 1990 paper. We start with an ansatz for the time-dependent wavefunction:

\begin{equation} \tag{1} \psi(x_1,\ldots,x_n;t) = \sum_{j_1=1}^{m_1}\cdots \sum_{j_n=1}^{m_n}a_{j_1\cdots j_n}\phi_{j_1}^{(1)}(x_1,t)\cdots \phi_{j_n}^{(n)}(x_n,t), \end{equation}

with single-particle functions (SPFs) satisfying (the second constraint is imposed to make MCTDH simpler):

\begin{equation} \tag{2}\label{ortho} \langle \phi_i^{(k)} | \phi_j^{(k)}\rangle =\delta_{ij} ~,~ \langle \phi_i^{(k)} | \dot\phi_j^{(k)}\rangle =0. \end{equation}

Now we will use the Dirac-Frenkel variational principle (DFVP) to optimize the parameters:

\begin{equation} \tag{3}\label{DiracFrenkel} \langle \delta \psi |(H-\mathrm{i}\frac{\partial}{\partial t})|\psi\rangle =0. \end{equation}

Making use of all 4 equations so far leads to this (you may need some practice with using the DFVP):

\begin{equation} \tag{4} \mathrm{i}\dot a_{j_1\ldots j_n}=\langle \phi_{j_1}^{(1)}\cdots\phi_{j_n}^{(n)}|H|\psi\rangle . \end{equation}

If we define the following:

\begin{align} J &\equiv (j_1,j_2,\ldots ,j_{k-1},j_{k+1},\ldots ,j_n)\tag{5}\\ \mathbf{A}^{(k)} &\equiv a_{j_1\ldots j_{k-1}\,j\,j_{k+1}\ldots j_n}^{(k)} \equiv A_{Jj}^{(k)} \tag{6}\\ \mathbf{B}^{(k)} &\equiv \left(\mathbf{A}^{(k)\dagger}\mathbf{A}^{(k)} \right)^{-1}\mathbf{A}^{(k)\dagger}\tag{7}\\ \hat{H}^{(k)}_{IJ} &\equiv \langle \phi_I^{(k)} |H|\phi_J^{(k)}\rangle \tag{8}\\ \hat{P}^{(k)}&\equiv \sum_{j=1}^{m_k}|\phi_j^{(k)}\rangle\langle \phi_j^{(k)}|\tag{9}, \end{align}

we can instead write:

\begin{equation} \tag{10} \mathrm{i}|\dot\phi_i^{(k)}\rangle = (1 - \hat{P}^{(k)})\sum_{IJj}B_{iI}^{(k)}\hat{H}_{IJ}^{(k)}A_{Jj}^{(k)}|\phi_j^{(k)}\rangle. \end{equation}

These are the original working equations for MCTDH, and they are almost exactly what you have written, except with $B$ instead of $\rho$.

This is enough to get you started. A full derivation of the working MCTDH equation typically takes more than 60 lines, assuming you already have available some handy DFVP expressions.

• Thanks for answering my question. I have briefly read the original paper and corrected some typos in your answer. I agree that your derivation follows the original paper, which claims that the DFVP is applied to derive equation 4.
However, as for me, equation 4 is derived simply by projecting the time-dependent Schrödinger equation onto the SPF basis. – Paulie Bao Jul 7, 2020 at 7:41
• You're right. My version would have had an $H|\dot \psi \rangle$, which would have been wrong. Jul 7, 2020 at 14:21

This response comes late, but I hope you or other readers will find it useful. Regarding your question on how to derive the MCTDH equations (directly) from the time-dependent Schrödinger equation (SE), I would add the following comments.

The SE is exact in the full Hilbert space, but for numerical simulations you need to choose a finite (usually small) working subspace, which is imposed by the form of your ansatz. Since the solution of the SE lies in the full Hilbert space, you cannot generally solve the SE in this working subspace. This means that the coupled MCTDH equations are not equivalent to the SE, and cannot be derived unambiguously from the SE.

Alternatively, you can minimize the error of your approximate solution with respect to the parameters used in your ansatz. The time-dependent variational principle (TDVP) is the tool that does that, by projecting the SE error onto the correct tangent space. Therefore, one must distinguish between the exact SE and its approximate version coming out of the TDVP, and I believe the answer to your question is in the application of the TDVP.

The projection done by the TDVP is equivalently achieved using a linear projection of the SE only when the parameterization is "linear", as is the case for the time-dependent parameters $a_{j_1\cdots j_n}$ (the expansion coefficients in the basis of the configurations). Hence your comment on eq. 4 being "derived simply by projection". However, the projection is more complicated for a "non-linear" parameterization, and MCTDH employs such a non-linear parameterization in the expression of the time-dependent SPFs. Hence, the resulting equations of motion for the SPFs are not simple projections of the SE but are obtained using the TDVP.

Thus, the answer to your question would be that the MCTDH equations cannot be derived directly from the SE. You need to use the SE in combination with the TDVP to correctly project onto the tangent space corresponding to your ansatz.

I hope this answer helps you, and if you are interested in more details, I suggest the following documents:

P. Kramer, M. Saraceno: Geometry of the Time-Dependent Variational Principle in Quantum Mechanics (1981)
A. Raab: Chem. Phys. Lett. 319, 674 (2000)
L. Hackl, T. Guaita, T. Shi, J. Haegeman, E. Demler et al.: arXiv:2004.01015 (2020)
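As a small numerical footnote to the working equations quoted above (my own sketch, not part of either answer): the projector $\hat{P}^{(k)}$ of eq. (9), and the complement $(1-\hat{P}^{(k)})$ appearing in eq. (10), simply keep the update of the SPFs orthogonal to the space the SPFs already span. With made-up data:

```python
import numpy as np

n_grid, n_spf = 16, 3
rng = np.random.default_rng(0)

# Orthonormalize random vectors to stand in for the SPFs phi_j^(k):
phi, _ = np.linalg.qr(rng.standard_normal((n_grid, n_spf)))

P = phi @ phi.conj().T          # projector onto span{phi_j}, eq. (9)
Q = np.eye(n_grid) - P          # the (1 - P) appearing in eq. (10)

v = rng.standard_normal(n_grid)  # an arbitrary "update" vector
w = Q @ v                        # its component outside the SPF space

print(np.allclose(phi.conj().T @ w, 0))  # True: w is orthogonal to every SPF
print(np.allclose(P @ P, P))             # True: P is idempotent
```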
Gleason's theorem

In mathematical physics, Gleason's theorem shows that the rule one uses to calculate probabilities in quantum physics, the Born rule, can be derived from the usual mathematical representation of measurements in quantum physics together with the assumption of non-contextuality. Andrew M. Gleason first proved the theorem in 1957,[1] answering a question posed by George W. Mackey, an accomplishment that was historically significant for the role it played in showing that wide classes of hidden-variable theories are inconsistent with quantum physics. Multiple variations have been proven in the years since. Gleason's theorem is of particular importance for the field of quantum logic and its attempt to find a minimal set of mathematical axioms for quantum theory.

Statement of the theorem

Conceptual background

In quantum mechanics, each physical system is associated with a Hilbert space. For the purposes of this overview, the Hilbert space is assumed to be finite-dimensional. In the approach codified by John von Neumann, a measurement upon a physical system is represented by a self-adjoint operator on that Hilbert space, sometimes termed an "observable". The eigenvectors of such an operator form an orthonormal basis for the Hilbert space, and each possible outcome of that measurement corresponds to one of the vectors comprising the basis. A density operator is a positive-semidefinite operator on the Hilbert space whose trace is equal to 1. In the language of von Weizsäcker, a density operator is a "catalogue of probabilities": for each measurement that can be defined, the probability distribution over the outcomes of that measurement can be computed from the density operator.[2] The procedure for doing so is the Born rule, which states that

$$P(x_i) = \operatorname{Tr}(\Pi_i \rho),$$

where $\rho$ is the density operator, and $\Pi_i$ is the projection operator onto the basis vector corresponding to the measurement outcome $x_i$.

The Born rule associates a probability with each unit vector in the Hilbert space, in such a way that these probabilities sum to 1 for any set of unit vectors comprising an orthonormal basis. Moreover, the probability associated with a unit vector is a function of the density operator and the unit vector, and not of additional information like a choice of basis for that vector to be embedded in. Gleason's theorem establishes the converse: all assignments of probabilities to unit vectors (or, equivalently, to the operators that project onto them) that satisfy these conditions take the form of applying the Born rule to some density operator.
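The content of these conditions is easy to check numerically. The following sketch is only an illustration of the statement, not part of the theorem or its proof; the density operator and the measurement bases are randomly generated:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

# Random density operator: positive semidefinite with unit trace.
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = A @ A.conj().T
rho /= np.real(np.trace(rho))

for _ in range(3):   # three different orthonormal measurement bases
    B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    basis, _ = np.linalg.qr(B)   # columns form an orthonormal basis
    probs = [np.real(np.trace(np.outer(b, b.conj()) @ rho)) for b in basis.T]
    # f(Pi) = Tr(Pi rho) sums to 1 over every basis, whichever context
    # the individual outcome is embedded in:
    print(np.isclose(sum(probs), 1.0))   # True for every basis
```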
Gleason's theorem holds if the dimension of the Hilbert space is 3 or greater; counterexamples exist for dimension 2.

Deriving the state space and the Born rule

The probability of any outcome of a measurement upon a quantum system must be a real number between 0 and 1 inclusive, and in order to be consistent, for any individual measurement the probabilities of the different possible outcomes must add up to 1. Gleason's theorem shows that any function that assigns probabilities to measurement outcomes, as identified by projection operators, must be expressible in terms of a density operator and the Born rule. This gives not only the rule for calculating probabilities, but also determines the set of possible quantum states.

Let $f$ be a function from projection operators to the unit interval with the property that, if a set $\{\Pi_i\}$ of projection operators sum to the identity matrix (that is, if they correspond to an orthonormal basis), then

$$\sum_i f(\Pi_i) = 1.$$

Such a function expresses an assignment of probability values to the outcomes of measurements, an assignment that is "noncontextual" in the sense that the probability for an outcome does not depend upon which measurement that outcome is embedded within, but only upon the mathematical representation of that specific outcome, i.e., its projection operator.[3][4]: §1.3 [5]: §2.1 [6] Gleason's theorem states that for any such function $f$, there exists a positive-semidefinite operator $\rho$ with unit trace such that

$$f(\Pi_i) = \operatorname{Tr}(\Pi_i \rho).$$

Both the Born rule and the fact that "catalogues of probability" are positive-semidefinite operators of unit trace follow from the assumptions that measurements are represented by orthonormal bases, and that probability assignments are "noncontextual". In order for Gleason's theorem to be applicable, the space on which measurements are defined must be a real or complex Hilbert space, or a quaternionic module.[a] (Gleason's argument is inapplicable if, for example, one tries to construct an analogue of quantum mechanics using p-adic numbers.)

History and outline of Gleason's proof

[Image: Gleason in 1959]

In 1932, John von Neumann also managed to derive the Born rule in his textbook Mathematische Grundlagen der Quantenmechanik [Mathematical Foundations of Quantum Mechanics]. However, the assumptions on which von Neumann built his proof were rather strong and eventually came to be regarded as not well-motivated.[14] Specifically, von Neumann assumed that the probability function must be linear on all observables, commuting or non-commuting. His proof was derided by John Bell as "not merely false but foolish!".[15][16] Gleason, on the other hand, did not assume linearity, but merely additivity for commuting projectors together with noncontextuality, assumptions seen as better motivated and more physically meaningful.[16][17]

By the late 1940s, George Mackey had grown interested in the mathematical foundations of quantum physics, wondering in particular whether the Born rule was the only possible rule for calculating probabilities in a theory that represented measurements as orthonormal bases on a Hilbert space.[18][19] Mackey discussed this problem with Irving Segal at the University of Chicago, who in turn raised it with Richard Kadison, then a graduate student. Kadison showed that for 2-dimensional Hilbert spaces there exists a probability measure that does not correspond to quantum states and the Born rule.
Gleason's result implies that this only happens in dimension 2.[19]

Gleason's original proof proceeds in three stages.[20]: §2  In Gleason's terminology, a frame function is a real-valued function $f$ on the unit sphere of a Hilbert space such that

$$\sum_i f(x_i) = 1$$

whenever the vectors $x_i$ comprise an orthonormal basis. A noncontextual probability assignment as defined in the previous section is equivalent to a frame function.[b] Any such measure that can be written in the standard way, that is, by applying the Born rule to a quantum state, is termed a regular frame function. Gleason derives a sequence of lemmas concerning when a frame function is necessarily regular, culminating in the final theorem. First, he establishes that every continuous frame function on the Hilbert space $\mathbb{R}^3$ is regular. This step makes use of the theory of spherical harmonics. Then, he proves that frame functions on $\mathbb{R}^3$ have to be continuous, which establishes the theorem for the special case of $\mathbb{R}^3$. This step is regarded as the most difficult of the proof.[21][22] Finally, he shows that the general problem can be reduced to this special case. Gleason credits one lemma used in this last stage of the proof to his doctoral student Richard Palais.[1]: fn 3  Robin Lyth Hudson described Gleason's theorem as "celebrated and notoriously difficult".[23] Cooke, Keane and Moran later produced a proof that is longer than Gleason's but requires fewer prerequisites.[21]

Implications

Gleason's theorem highlights a number of fundamental issues in quantum measurement theory. As Fuchs argues, the theorem "is an extremely powerful result", because "it indicates the extent to which the Born probability rule and even the state-space structure of density operators are dependent upon the theory's other postulates". In consequence, quantum theory is "a tighter package than one might have first thought".[24]: 94–95  Various approaches to rederiving the quantum formalism from alternative axioms have, accordingly, employed Gleason's theorem as a key step, bridging the gap between the structure of Hilbert space and the Born rule.[3][12]: §2 [25][26]: §1.4 

Hidden variables

Moreover, the theorem is historically significant for the role it played in ruling out the possibility of hidden variables in quantum mechanics. A hidden-variable theory that is deterministic implies that the probability of a given outcome is always either 0 or 1. For example, a Stern–Gerlach measurement on a spin-1 atom will report that the atom's angular momentum along the chosen axis is one of three possible values, which can be designated $-$, $0$ and $+$. In a deterministic hidden-variable theory, there exists an underlying physical property that fixes the result found in the measurement. Conditional on the value of the underlying physical property, any given outcome (for example, a result of $+$) must be either impossible or guaranteed. But Gleason's theorem implies that there can be no such deterministic probability measure. The mapping $u \mapsto \langle \rho u, u \rangle$ is continuous on the unit sphere of the Hilbert space for any density operator $\rho$.
Since this unit sphere is connected, no continuous probability measure on it can be deterministic.[26]: §1.3  Gleason's theorem therefore suggests that quantum theory represents a deep and fundamental departure from the classical intuition that uncertainty is due to ignorance about hidden degrees of freedom.[27] More specifically, Gleason's theorem rules out hidden-variable models that are "noncontextual". Any hidden-variable model for quantum mechanics must, in order to avoid the implications of Gleason's theorem, involve hidden variables that are not properties belonging to the measured system alone but also dependent upon the external context in which the measurement is made. This type of dependence is often seen as contrived or undesirable; in some settings, it is inconsistent with special relativity.[27][28]

[Image: In the Bloch sphere representation of a qubit, each point on the unit sphere stands for a pure state. All other density matrices correspond to points in the interior.]

To construct a counterexample for 2-dimensional Hilbert space, known as a qubit, let the hidden variable be a unit vector $\vec{\lambda}$ in 3-dimensional Euclidean space. Using the Bloch sphere, each possible measurement on a qubit can be represented as a pair of antipodal points on the unit sphere. Defining the probability of a measurement outcome to be 1 if the point representing that outcome lies in the same hemisphere as $\vec{\lambda}$ and 0 otherwise yields an assignment of probabilities to measurement outcomes that obeys Gleason's assumptions. However, this probability assignment does not correspond to any valid density operator. By introducing a probability distribution over the possible values of $\vec{\lambda}$, a hidden-variable model for a qubit that reproduces the predictions of quantum theory can be constructed.[27][29]

Gleason's theorem motivated later work by John Bell, Ernst Specker and Simon Kochen that led to the result often called the Kochen–Specker theorem, which likewise shows that noncontextual hidden-variable models are incompatible with quantum mechanics. As noted above, Gleason's theorem shows that there is no probability measure over the rays of a Hilbert space that only takes the values 0 and 1 (as long as the dimension of that space exceeds 2). The Kochen–Specker theorem refines this statement by constructing a specific finite subset of rays on which no such probability measure can be defined.[27][30] The fact that such a finite subset of rays must exist follows from Gleason's theorem by way of a logical compactness argument, but this method does not construct the desired set explicitly.[20]: §1  In the related no-hidden-variables result known as Bell's theorem, the assumption that the hidden-variable theory is noncontextual instead is replaced by the assumption that it is local. The same sets of rays used in Kochen–Specker constructions can also be employed to derive Bell-type proofs.[27][31][32]

Pitowsky uses Gleason's theorem to argue that quantum mechanics represents a new theory of probability, one in which the structure of the space of possible events is modified from the classical, Boolean algebra thereof.
He regards this as analogous to the way that special relativity modifies the kinematics of Newtonian mechanics.[4][5] The Gleason and Kochen–Specker theorems have been cited in support of various philosophies, including perspectivism, constructive empiricism and agential realism.[33][34][35]

Quantum logic

Gleason's theorem finds application in quantum logic, which makes heavy use of lattice theory. Quantum logic treats the outcome of a quantum measurement as a logical proposition and studies the relationships and structures formed by these logical propositions. They are organized into a lattice, in which the distributive law, valid in classical logic, is weakened, to reflect the fact that in quantum physics not all pairs of quantities can be measured simultaneously.[36] The representation theorem in quantum logic shows that such a lattice is isomorphic to the lattice of subspaces of a vector space with a scalar product.[5]: §2  Using Solèr's theorem, the (skew) field K over which the vector space is defined can be proven, with additional hypotheses, to be either the real numbers, complex numbers, or the quaternions, as is needed for Gleason's theorem to hold.[12]: §3 [37][38] By invoking Gleason's theorem, the form of a probability function on lattice elements can be restricted. Assuming that the mapping from lattice elements to probabilities is noncontextual, Gleason's theorem establishes that it must be expressible with the Born rule.

Generalizations

Gleason originally proved the theorem assuming that the measurements applied to the system are of the von Neumann type, i.e., that each possible measurement corresponds to an orthonormal basis of the Hilbert space. Later, Busch[39] and independently Caves et al.[24]: 116 [40] proved an analogous result for a more general class of measurements, known as positive-operator-valued measures (POVMs). The set of all POVMs includes the set of von Neumann measurements, and so the assumptions of this theorem are significantly stronger than Gleason's. This made the proof of this result simpler than Gleason's, and the conclusions stronger. Unlike the original theorem of Gleason, the generalized version using POVMs also applies to the case of a single qubit.[41][42] Assuming noncontextuality for POVMs is, however, controversial, as POVMs are not fundamental, and some authors defend that noncontextuality should be assumed only for the underlying von Neumann measurements.[43]

Gleason's theorem, in its original version, does not hold if the Hilbert space is defined over the rational numbers, i.e., if the components of vectors in the Hilbert space are restricted to be rational numbers, or complex numbers with rational parts. However, when the set of allowed measurements is the set of all POVMs, the theorem holds.[40]: §3.D 

The original proof by Gleason was not constructive: one of the ideas on which it depends is the fact that every continuous function defined on a compact space attains its minimum. Because one cannot in all cases explicitly show where the minimum occurs, a proof that relies upon this principle will not be a constructive proof. However, the theorem can be reformulated in such a way that a constructive proof can be found.[20][44]

Gleason's theorem can be extended to some cases where the observables of the theory form a von Neumann algebra.
Specifically, an analogue of Gleason's result can be shown to hold if the algebra of observables has no direct summand that is representable as the algebra of 2×2 matrices over a commutative von Neumann algebra (i.e., no direct summand of type I2). In essence, the only barrier to proving the theorem is the fact that Gleason's original result does not hold when the Hilbert space is that of a qubit.[45]

Notes

^ For additional discussion on this point, see Piron,[7]: §6  Drisch,[8] Horwitz and Biedenharn,[9] Razon and Horwitz,[10] Varadarajan,[11]: 83  Cassinelli and Lahti,[12]: §2  and Moretti and Oppio.[13]
^ Gleason allows for the possibility that a frame function is normalized to a constant other than 1, but focusing on the case of "unit weight" as done here does not result in any loss of generality.

References

^ Gleason, Andrew M. (1957). "Measures on the closed subspaces of a Hilbert space". Indiana University Mathematics Journal. 6 (4): 885–893. doi:10.1512/iumj.1957.6.56050. MR 0096113.
^ Drieschner, M.; Görnitz, Th.; von Weizsäcker, C. F. (1988-03-01). "Reconstruction of abstract quantum theory". International Journal of Theoretical Physics. 27 (3): 289–306. Bibcode:1988IJTP...27..289D. doi:10.1007/bf00668895. ISSN 0020-7748. S2CID 122866239.
^ Barnum, H.; Caves, C. M.; Finkelstein, J.; Fuchs, C. A.; Schack, R. (2000-05-08). "Quantum probability from decision theory?". Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. 456 (1997): 1175–1182. arXiv:quant-ph/9907024. Bibcode:2000RSPSA.456.1175B. doi:10.1098/rspa.2000.0557. ISSN 1364-5021. S2CID 11563591.
^ Pitowsky, Itamar (2003). "Betting on the outcomes of measurements: a Bayesian theory of quantum probability". Studies in History and Philosophy of Modern Physics. 34 (3): 395–414. arXiv:quant-ph/0208121. Bibcode:2003SHPMP..34..395P. doi:10.1016/S1355-2198(03)00035-2.
^ Pitowsky, Itamar (2006). "Quantum mechanics as a theory of probability". In Demopoulos, William; Pitowsky, Itamar (eds.). Physical Theory and its Interpretation: Essays in Honor of Jeffrey Bub. Springer. p. 213. arXiv:quant-ph/0510095. ISBN 9781402048760. OCLC 917845122.
^ Kunjwal, Ravi; Spekkens, Rob W. (2015-09-09). "From the Kochen–Specker theorem to noncontextuality inequalities without assuming determinism". Physical Review Letters. 115 (11): 110403. arXiv:1506.04150. Bibcode:2015PhRvL.115k0403K. doi:10.1103/PhysRevLett.115.110403. PMID 26406812. S2CID 10308680.
^ Piron, C. (1972-10-01). "Survey of general quantum physics". Foundations of Physics. 2 (4): 287–314. Bibcode:1972FoPh....2..287P. doi:10.1007/bf00708413. ISSN 0015-9018. S2CID 123364715.
^ Drisch, Thomas (1979-04-01). "Generalization of Gleason's theorem". International Journal of Theoretical Physics. 18 (4): 239–243. Bibcode:1979IJTP...18..239D. doi:10.1007/bf00671760. ISSN 0020-7748. S2CID 121825926.
^ Horwitz, L. P.; Biedenharn, L. C. (1984). "Quaternion quantum mechanics: Second quantization and gauge fields". Annals of Physics. 157 (2): 432–488. Bibcode:1984AnPhy.157..432H. doi:10.1016/0003-4916(84)90068-x.
^ Razon, Aharon; Horwitz, L. P. (1991-08-01). "Projection operators and states in the tensor product of quaternion Hilbert modules". Acta Applicandae Mathematicae. 24 (2): 179–194. doi:10.1007/bf00046891. ISSN 0167-8019. S2CID 119666741.
^ Varadarajan, Veeravalli S. (2007). Geometry of Quantum Theory (2nd ed.). Springer Science+Business Media. ISBN 978-0-387-96124-8.
OCLC 764647569.
^ Cassinelli, G.; Lahti, P. (2017-11-13). "Quantum mechanics: why complex Hilbert space?". Philosophical Transactions of the Royal Society A. 375 (2106): 20160393. Bibcode:2017RSPTA.37560393C. doi:10.1098/rsta.2016.0393. ISSN 1364-503X. PMID 28971945.
^ Moretti, Valter; Oppio, Marco (2018-10-16). "The Correct Formulation of Gleason's Theorem in Quaternionic Hilbert Spaces". Annales Henri Poincaré. 19 (11): 3321–3355. arXiv:1803.06882. Bibcode:2018AnHP...19.3321M. doi:10.1007/s00023-018-0729-8. S2CID 53630146.
^ John Bell (1966). "On the Problem of Hidden Variables in Quantum Mechanics". Reviews of Modern Physics. 38 (3): 447. doi:10.1103/RevModPhys.38.447. OSTI 1444158.
^ Jeffrey Bub (2010). "Von Neumann's 'No Hidden Variables' Proof: A Re-Appraisal". Foundations of Physics. 40 (9–10): 1333–1340. arXiv:1006.0499. doi:10.1007/s10701-010-9480-9. S2CID 118595119.
^ Mermin, N. David; Schack, Rüdiger (2018). "Homer nodded: von Neumann's surprising oversight". Foundations of Physics. 48 (9): 1007–1020. arXiv:1805.10311. Bibcode:2018FoPh...48.1007M. doi:10.1007/s10701-018-0197-5. S2CID 118951033.
^ Peres, Asher (1992). "An experimental test for Gleason's theorem". Physics Letters A. 163 (4): 243–245. doi:10.1016/0375-9601(92)91005-C.
^ Mackey, George W. (1957). "Quantum Mechanics and Hilbert Space". The American Mathematical Monthly. 64 (8P2): 45–57. doi:10.1080/00029890.1957.11989120. JSTOR 2308516.
^ Chernoff, Paul R. "Andy Gleason and Quantum Mechanics" (PDF). Notices of the AMS. 56 (10): 1253–1259.
^ Hrushovski, Ehud; Pitowsky, Itamar (2004-06-01). "Generalizations of Kochen and Specker's theorem and the effectiveness of Gleason's theorem". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 35 (2): 177–194. arXiv:quant-ph/0307139. Bibcode:2004SHPMP..35..177H. doi:10.1016/j.shpsb.2003.10.002. S2CID 15265001.
^ Cooke, Roger; Keane, Michael; Moran, William (1985). "An elementary proof of Gleason's theorem". Mathematical Proceedings of the Cambridge Philosophical Society. 98 (1): 117–128. doi:10.1017/S0305004100063313.
^ Pitowsky, Itamar (1998). "Infinite and finite Gleason's theorems and the logic of indeterminacy". Journal of Mathematical Physics. 39 (1): 218–228. Bibcode:1998JMP....39..218P. doi:10.1063/1.532334.
^ Hudson, Robin Lyth (1986). "Geometry of quantum theory". The Mathematical Gazette. 70 (454): 332–333. doi:10.2307/3616230. JSTOR 3616230.
^ Fuchs, Christopher A. (2011). Coming of Age with Quantum Information: Notes on a Paulian Idea. Cambridge: Cambridge University Press. ISBN 978-0-521-19926-1. OCLC 535491156.
^ Stairs, Allen (2015). "Quantum Logic and Quantum Reconstruction". Foundations of Physics. 45 (10): 1351–1361. arXiv:1501.05492. Bibcode:2015FoPh...45.1351S. doi:10.1007/s10701-015-9879-4. S2CID 126435.
^ Wilce, A. (2017). "Quantum Logic and Probability Theory". In The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.).
^ Mermin, N. David (1993-07-01). "Hidden variables and the two theorems of John Bell". Reviews of Modern Physics. 65 (3): 803–815. arXiv:1802.10119. Bibcode:1993RvMP...65..803M. doi:10.1103/RevModPhys.65.803. S2CID 119546199.
^ Shimony, Abner (1984). "Contextual Hidden Variable Theories and Bell's Inequalities". British Journal for the Philosophy of Science. 35 (1): 25–45. doi:10.1093/bjps/35.1.25.
^ Harrigan, Nicholas; Spekkens, Robert W. (2010). "Einstein, incompleteness, and the epistemic view of quantum states". Foundations of Physics. 40 (2): 125–157. arXiv:0706.2661. doi:10.1007/s10701-009-9347-0. S2CID 32755624.
^ Peres, Asher (1991). "Two simple proofs of the Kochen-Specker theorem". Journal of Physics A: Mathematical and General. 24 (4): L175–L178. Bibcode:1991JPhA...24L.175P. doi:10.1088/0305-4470/24/4/003. ISSN 0305-4470.
^ Stairs, Allen (1983). "Quantum Logic, Realism, and Value Definiteness". Philosophy of Science. 50 (4): 578–602. doi:10.1086/289140. S2CID 122885859.
^ Heywood, Peter; Redhead, Michael L. G. (1983). "Nonlocality and the Kochen–Specker paradox". Foundations of Physics. 13 (5): 481–499. doi:10.1007/BF00729511. S2CID 120340929.
^ Edwards, David (1979). "The Mathematical Foundations of Quantum Mechanics". Synthese. 42: 1–70. doi:10.1007/BF00413704. S2CID 46969028.
^ van Fraassen, Bas (1991). Quantum Mechanics: An Empiricist View. Clarendon Press. ISBN 9780198239802. OCLC 1005285550.
^ Barad, Karen (2007). Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Duke University Press. ISBN 9780822339175. OCLC 894219980.
^ Dvurecenskij, Anatolij (1992). Gleason's Theorem and Its Applications. Mathematics and its Applications, Vol. 60. Dordrecht: Kluwer Acad. Publ. p. 348. ISBN 978-0-7923-1990-0. OCLC 751579618.
^ Baez, John C. (2010-12-01). "Solèr's Theorem". The n-Category Café. Retrieved 2017-04-24.
^ Moretti, Valter; Oppio, Marco (2019-06-01). "Quantum theory in quaternionic Hilbert space: How Poincaré symmetry reduces the theory to the standard complex one". Reviews in Mathematical Physics. 31 (4): 1950013–502. arXiv:1709.09246. Bibcode:2019RvMaP..3150013M. doi:10.1142/S0129055X19500132. S2CID 119733863.
^ Busch, Paul (2003). "Quantum States and Generalized Observables: A Simple Proof of Gleason's Theorem". Physical Review Letters. 91 (12): 120403. arXiv:quant-ph/9909073. Bibcode:2003PhRvL..91l0403B. doi:10.1103/PhysRevLett.91.120403. PMID 14525351. S2CID 2168715.
^ Caves, Carlton M.; Fuchs, Christopher A.; Manne, Kiran K.; Renes, Joseph M. (2004). "Gleason-Type Derivations of the Quantum Probability Rule for Generalized Measurements". Foundations of Physics. 34 (2): 193–209. arXiv:quant-ph/0306179. Bibcode:2004FoPh...34..193C. doi:10.1023/B:FOOP.0000019581.00318.a5. S2CID 18132256.
^ Robert W. Spekkens (2014). "The Status of Determinism in Proofs of the Impossibility of a Noncontextual Model of Quantum Theory". Foundations of Physics. 44 (11): 1125–1155. arXiv:1312.3667. doi:10.1007/s10701-014-9833-x. S2CID 118469528.
^ Wright, Victoria J.; Weigert, Stephan (2019). "A Gleason-type theorem for qubits based on mixtures of projective measurements". Journal of Physics A. 52 (5): 055301. arXiv:1808.08091. doi:10.1088/1751-8121/aaf93d. S2CID 119309162.
^ Andrzej Grudka; Paweł Kurzyński (2008). "Is There Contextuality for a Single Qubit?". Physical Review Letters. 100 (16): 160401. arXiv:0705.0181. doi:10.1103/PhysRevLett.100.160401. PMID 18518167. S2CID 13251108.
^ Richman, Fred; Bridges, Douglas (1999-03-10). "A Constructive Proof of Gleason's Theorem". Journal of Functional Analysis. 162 (2): 287–312. doi:10.1006/jfan.1998.3372.
^ Hamhalter, Jan (2003-10-31). Quantum Measure Theory. Springer Science & Business Media. ISBN 9781402017148. MR 2015280. OCLC 928681664. Zbl 1038.81003.
Waves, Kakeya sets, and Diophantine equations

MSRI, June 2017
Ciprian Demeter

A central problem in Physics is to understand the complicated ways in which waves can interact with one another. The field of Harmonic Analysis grew out of the observation that any natural signal that is periodic in time can be built by superimposing simple waves with whole number frequencies. In geometric optics, quantum mechanics, and the study of water waves, we must understand the interactions of waves in space, where waves have not just a frequency and an amplitude, but also a direction.

In the late 1960s, a systematic study of the interactions of spatial waves led to the development of Restriction Theory, which is concerned with the interactions of waves whose frequencies and directions are highly constrained. This investigation has revealed deep and surprising connections with a mysterious class of geometric objects, known as "Kakeya sets." A Kakeya set is a collection of length one line segments (we think of needles having thickness zero) that point in all possible directions. The simplest Kakeya set is the interior of a circle of radius 1 (the unit disk), since for each direction there is a corresponding line segment joining the center to the boundary of the circle. The disk has positive area and can easily be drawn and visualized, but there are examples of Kakeya sets that are invisible not only to the human eye, but to arbitrarily large orders of magnification. Even stranger, there exist sets of arbitrarily small area such that one may continuously rotate a needle through all possible directions, while staying inside the set.

[Image: Kakeya needle set, Wikimedia Commons]

The "Kakeya Set Conjecture" predicts that Kakeya sets become visible under a certain very special mathematical microscope called Hausdorff dimension. While this problem has only been fully solved in two spatial dimensions, attempts to solve it in higher dimensions have led to spectacular developments on related problems. As an example, approximately ten years ago a simplified, discrete model for the Kakeya Set Conjecture was solved. This version has connections with theoretical computer science, and the tool developed to solve it involves careful counting arguments using polynomials. A few years later, the same method was expanded to count the minimum number of distances determined by a fixed number of points in the plane.

More recently, the polynomial method has been finely tuned to estimate more efficiently the cancellations between different waves. The idea is that we can use a relatively small collection of relatively simple polynomials to divide space into small cells, in such a way that the wave interactions inside each cell are easier to understand. The implementation of this approach required the development of connections between areas of mathematics that were hitherto thought to be unrelated.

In the last three years we have discovered a new method, called "decoupling," to quantitatively study wave interactions. Once again, the connection with the Kakeya phenomenon proved instrumental. This discovery was initially motivated by the study of the Schrödinger equation, which models the energy of quantum mechanical systems, but decouplings proved to have a host of other far-reaching consequences in an unexpected area of mathematics: Diophantine equations.
Diophantine equations are potentially complicated systems of equations involving whole numbers, and mathematicians are interested in counting the number of solutions to such systems. Unlike waves, numbers do not oscillate, at least not in an obvious manner, but we can think of numbers as frequencies, and thus associate them to waves. In this way, problems related to counting the number of solutions to Diophantine systems can be rephrased in the language of quantifying wave interferences. An example of a Diophantine problem where the decoupling approach has been useful is in counting the number of ways a given whole number can be written as a sum of perfect cubes; another involves subtle issues around the intricate behavior of prime numbers. It is expected that the decoupling phenomenon will find new applications in the future, and the Spring 2017 MSRI program has articulated a few new exciting directions.
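To make the counting problem concrete, here is a small brute-force illustration (my own, not from the article) of the first example mentioned above: representations of a whole number as a sum of perfect cubes, here for two cubes. Decoupling is about estimating such counts asymptotically, not computing them directly.

```python
def two_cube_representations(n):
    """Count ordered pairs (x, y) of positive integers with x**3 + y**3 == n."""
    count = 0
    x = 1
    while x**3 < n:
        y = round((n - x**3) ** (1 / 3))
        # guard against floating-point rounding in the cube root
        if any(n - x**3 == t**3 for t in (y - 1, y, y + 1)):
            count += 1
        x += 1
    return count

print(two_cube_representations(1729))  # 4: (1,12), (12,1), (9,10), (10,9)
```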
Abstract

Ionization of laser-dressed atomic helium is investigated with focus on photoelectron angular distributions stemming from two-color multi-photon excited states. The experiment combines extreme ultraviolet (XUV) with infrared (IR) radiation, while the relative polarization and the temporal delay between the pulses can be varied. By means of an XUV photon energy scan over several electronvolts, we get access to excited states in the dressed atom exhibiting various binding energies, angular momenta, and magnetic quantum numbers. Furthermore, varying the relative polarization is employed as a handle to switch on and off the population of certain states that are only accessible by two-photon excitation. In this way, photoemission can be suppressed for specific XUV photon energies. Additionally, we investigate the dependence of the photoelectron angular distributions on the IR laser intensity. At our higher IR intensities, we start leaving the simple multi-photon ionization regime. The interpretation of the experimental results is supported by numerically solving the time-dependent Schrödinger equation in a single-active-electron approximation.
Arrow of time

From Wikipedia, the free encyclopedia

This article is an overview of the subject. For a more technical discussion and for information related to current research, see Entropy (arrow of time).

[Image: Arthur Stanley Eddington]

The Arrow of Time, or Time's Arrow, is a concept developed in 1927 by the British astronomer Arthur Eddington involving the "one-way direction" or "asymmetry" of time. It is an unsolved general physics question. This direction, according to Eddington, can be determined by studying the organization of atoms, molecules, and bodies, and might be drawn upon a four-dimensional relativistic map of the world ("a solid block of paper").[1]

Physical processes at the microscopic level are believed to be either entirely or mostly time-symmetric: if the direction of time were to reverse, the theoretical statements that describe them would remain true. Yet at the macroscopic level it often appears that this is not the case: there is an obvious direction (or flow) of time. In the 1928 book The Nature of the Physical World, which helped to popularize the concept, Eddington stated:

"Let us draw an arrow arbitrarily. If as we follow the arrow we find more and more of the random element in the state of the world, then the arrow is pointing towards the future; if the random element decreases the arrow points towards the past. That is the only distinction known to physics. This follows at once if our fundamental contention is admitted that the introduction of randomness is the only thing which cannot be undone. I shall use the phrase 'time's arrow' to express this one-way property of time which has no analogue in space."

Eddington then gives three points to note about this arrow:

1. It is vividly recognized by consciousness.
2. It is equally insisted on by our reasoning faculty, which tells us that a reversal of the arrow would render the external world nonsensical.
3. It makes no appearance in physical science except in the study of organization of a number of individuals.

According to Eddington the arrow indicates the direction of progressive increase of the random element. Following a lengthy argument upon the nature of thermodynamics, he concludes that, so far as physics is concerned, time's arrow is a property of entropy alone.

The symmetry of time (T-symmetry) can be understood by a simple analogy: if time were perfectly symmetrical, a video of real events would seem realistic whether played forwards or backwards.[2] An obvious objection to this notion is gravity: things fall down, not up. Yet a ball that is tossed up, slows to a stop, and falls into the hand is a case where recordings would look equally realistic forwards and backwards. The system is T-symmetrical, but while going "forward" kinetic energy is dissipated and entropy is increased. Entropy may be one of the few processes that is not time-reversible. According to the statistical notion of increasing entropy, the "arrow" of time is identified with a decrease of free energy.[3]

The thermodynamic arrow of time

The thermodynamic arrow of time is provided by the Second Law of Thermodynamics, which says that in an isolated system, entropy tends to increase with time. Entropy can be thought of as a measure of microscopic disorder; thus the Second Law implies that time is asymmetrical with respect to the amount of order in an isolated system: as a system advances through time, it will statistically become more disordered (the toy simulation below illustrates this tendency).
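The following is a small illustration of that statistical tendency (my own sketch, not from the article): the classic Ehrenfest urn model, in which N balls sit in two urns and each step moves one uniformly chosen ball to the other urn. Started from the highly ordered state, the occupancy relaxes toward the disordered 50/50 split and stays near it.

```python
import random

random.seed(0)
N = 100
in_A = N                 # start fully ordered: every ball in urn A

for step in range(1, 2001):
    if random.randrange(N) < in_A:   # the chosen ball happens to be in A
        in_A -= 1                    # move it to urn B
    else:
        in_A += 1                    # move it to urn A
    if step % 500 == 0:
        print(step, in_A)  # drifts toward N/2 and then fluctuates around it
```

The microscopic rule is reversible, yet the overwhelming majority of trajectories run from order toward disorder, which is the statistical content of the arrow.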
This asymmetry can be used empirically to distinguish between future and past, though measuring entropy does not amount to an accurate measurement of time; moreover, in an open system entropy can decrease with time. British physicist Sir Alfred Brian Pippard wrote: "There is thus no justification for the view, often glibly repeated, that the Second Law of Thermodynamics is only statistically true, in the sense that microscopic violations repeatedly occur, but never violations of any serious magnitude. On the contrary, no evidence has ever been presented that the Second Law breaks down under any circumstances."[4] However, there are a number of paradoxes regarding violation of the Second Law of Thermodynamics, one of them due to the Poincaré recurrence theorem.

This arrow of time seems to be related to all the other arrows of time and arguably underlies some of them, with the exception of the weak arrow of time.

Harold Blum's 1951 book Time's Arrow and Evolution[5] "explored the relationship between time's arrow (the second law of thermodynamics) and organic evolution." This influential text explores "irreversibility and direction in evolution and order, negentropy, and evolution."[6] Blum argues that evolution followed specific patterns predetermined by the inorganic nature of the earth and its thermodynamic processes.[7]

The cosmological arrow of time

The cosmological arrow of time points in the direction of the universe's expansion. It may be linked to the thermodynamic arrow, with the universe heading towards a heat death (Big Chill) as the amount of usable energy becomes negligible. Alternatively, it may be an artifact of our place in the universe's evolution (see the anthropic bias), with this arrow reversing as gravity pulls everything back into a Big Crunch. If this arrow of time is related to the other arrows of time, then the future is by definition the direction towards which the universe becomes bigger. Thus, the universe expands - rather than shrinks - by definition.

The thermodynamic arrow of time and the second law of thermodynamics are thought to be a consequence of the initial conditions in the early universe.[8] Therefore they ultimately result from the cosmological set-up.

The radiative arrow of time

Waves, from radio waves to sound waves to those on a pond from throwing a stone, expand outward from their source, even though the wave equations allow for solutions of convergent waves as well as radiative ones. This arrow has been reversed in carefully worked experiments which have created convergent waves,[9] so this arrow probably follows from the thermodynamic arrow, in that meeting the conditions to produce a convergent wave requires more order than the conditions for a radiative wave. Put differently, the probability for initial conditions that produce a convergent wave is much lower than the probability for initial conditions that produce a radiative wave. In fact, normally a radiative wave increases entropy, while a convergent wave decreases it, making the latter contradictory to the Second Law of Thermodynamics in usual circumstances.

The causal arrow of time

A cause precedes its effect: the causal event occurs before the event it affects. Birth, for example, follows a successful conception and not vice versa. Thus causality is intimately bound up with time's arrow. An epistemological problem with using causality as an arrow of time is that, as David Hume maintained, the causal relation per se cannot be perceived; one only perceives sequences of events.
Furthermore, it is surprisingly difficult to provide a clear explanation of what the terms cause and effect really mean, or to define the events to which they refer. However, it does seem evident that dropping a cup of water is a cause while the cup subsequently shattering and spilling the water is the effect. Physically speaking, the perception of cause and effect in the dropped-cup example is a phenomenon of the thermodynamic arrow of time, a consequence of the second law of thermodynamics.[10] Controlling the future, or causing something to happen, creates correlations between the doer and the effect,[11] and these can only be created as we move forwards in time, not backwards.

The particle physics (weak) arrow of time

Certain subatomic interactions involving the weak nuclear force violate the conservation of both parity and charge conjugation, but only very rarely. An example is the kaon decay.[12] According to the CPT theorem, this means they should also be time irreversible, and so establish an arrow of time. Such processes should be responsible for matter creation in the early universe. That the combination of parity and charge conjugation is broken so rarely means that this arrow only "barely" points in one direction, setting it apart from the other arrows whose direction is much more obvious. This arrow had not been linked to any large-scale temporal behaviour until the work of Joan Vaccaro, who showed that T violation could be responsible for conservation laws and dynamics.[13]

The quantum arrow of time

Unsolved problem in physics: what links the quantum arrow of time to the thermodynamic arrow?

According to the Copenhagen interpretation of quantum mechanics, quantum evolution is governed by the Schrödinger equation, which is time-symmetric, and by wave function collapse, which is time irreversible. As the mechanism of wave function collapse is philosophically obscure, it is not completely clear how this arrow links to the others. Although the post-measurement state is entirely stochastic in formulations of quantum mechanics, a link to the thermodynamic arrow has been proposed, noting that the second law of thermodynamics amounts to an observation that nature shows a bias for collapsing wave functions into higher entropy states versus lower ones, and that the claim that this is merely due to more possible states being high entropy runs afoul of Loschmidt's paradox. According to one physical view of wave function collapse, the theory of quantum decoherence, the quantum arrow of time is a consequence of the thermodynamic arrow of time.

The quantum source of time

Physicists say that quantum uncertainty gives rise to entanglement, the putative source of the arrow of time. The idea that entanglement might explain the arrow of time was proposed by Seth Lloyd in the 1980s. Lloyd argues that quantum uncertainty, and the way it spreads as particles become increasingly entangled, could replace human uncertainty in the old classical proofs as the true source of the arrow of time. According to Lloyd, "The arrow of time is an arrow of increasing correlations."[14]

The psychological/perceptual arrow of time

A related mental arrow arises because one has the sense that one's perception is a continuous movement from the known (past) to the unknown (future).
Anticipating the unknown forms the psychological future, which always seems to be something one is moving towards; but, like a projection in a mirror, it makes what is actually already a part of memory, such as desires, dreams, and hopes, seem ahead of the observer.

The association of "behind ⇔ past" and "ahead ⇔ future" is itself culturally determined. For example, the Aymara language associates "ahead ⇔ past" and "behind ⇔ future".[15] Similarly, the Chinese term for "the day after tomorrow" 後天 ("hòu tiān") literally means "after (or behind) day", whereas "the day before yesterday" 前天 ("qián tiān") is literally "preceding (or in front) day."[16] The words "yesterday" and "tomorrow" both translate to the same word in Hindi: कल ("kal"),[17] meaning "[one] day remote from today";[18] the ambiguity is resolved by verb tense. परसों ("parsoⁿ") is used for both "day before yesterday" and "day after tomorrow", or "two days from today".[19] नरसों ("narsoⁿ") is used for "three days from today."[20]

The other side of the psychological passage of time is in the realm of volition and action. We plan and often execute actions intended to affect the course of events in the future. From the Rubaiyat:

The Moving Finger writes; and, having writ,
  Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
  Nor all thy Tears wash out a Word of it.
- Omar Khayyám (translation by Edward FitzGerald)

References
1. Weinert, Friedel (2005). The Scientist as Philosopher: Philosophical Consequences of Great Scientific Discoveries. Springer. Chapter 4, p. 143. ISBN 3-540-21374-0.
2. David Albert on Time and Chance.
3. Tuisku, P.; Pernu, T.K.; Annila, A. (2009). "In the light of time". Proceedings of the Royal Society A 465: 1173-1198. Bibcode:2009RSPSA.465.1173T. doi:10.1098/rspa.2008.0494.
4. Pippard, A.B. (1966). Elements of Chemical Thermodynamics for Advanced Students of Physics, p. 100.
5. Blum, Harold F. (1951). Time's Arrow and Evolution (First ed.).
6. Morowitz, Harold J. (September 1969). "Book review: Time's Arrow and Evolution: Third Edition". Icarus 11 (2): 278-279. Bibcode:1969Icar...11..278M. doi:10.1016/0019-1035(69)90059-1.
7. McN., W.P. (November 1951). "Book reviews: Time's Arrow and Evolution". Yale Journal of Biology and Medicine 24 (2): 164. PMC 2599115.
8. Susskind, Leonard. "Boltzmann and the Arrow of Time: A Recent Perspective". Cornell University. Retrieved June 1, 2016.
9. Fink, Mathias (30 November 1999). "Time-Reversed Acoustics" (PDF). Archived from the original on 31 December 2005. Retrieved 27 May 2016.
10. Physical Origins of Time Asymmetry, chapter 6.
11. Physical Origins of Time Asymmetry, pp. 109-111.
12. ^
13. Vaccaro, Joan (2016). "Quantum asymmetry between time and space". Proceedings of the Royal Society A 472: 20150670. arXiv:1502.04012. Bibcode:2016RSPSA.47250670V. doi:10.1098/rspa.2015.0670. PMC 4786044. PMID 26997899.
14. ^
15. For Andes tribe, it's back to the future — accessed 2006-09-26.
16. Chinese-English Dictionary — accessed 2017-01-11.
17. Bahri, Hardev (1989). Learners' Hindi-English Dictionary. Delhi: Rajpal & Sons. p. 95. ISBN 81-7028-002-8.
18. Alexiadou, Artemis (1997). Adverb Placement: A Case Study in Antisymmetric Syntax. Amsterdam: Benjamins. p. 108. ISBN 978-90-272-2739-3.
19. Hindi English Dictionary परसों — accessed 2017-01-11.
20. Hindi English Dictionary नरसों — accessed 2017-01-11.
External links
• The Ritz-Einstein Agreement to Disagree, a review of historical perspectives of the subject, prior to the development of quantum field theory.
• The Thermodynamic Arrow: Puzzles and Pseudo-Puzzles. Huw Price on Time's Arrow.
• Arrow of time in a discrete toy model.
• The Arrow of Time.
Chapter 9: Fundamental Physics
Section 16: Quantum Phenomena

History [of quantum theory]

In classical physics quantities like energy were always assumed to correspond to continuous variables. But in 1900 Max Planck noticed that fits to the measured spectrum of electromagnetic radiation produced by hot objects could be explained if there were discrete quanta of electromagnetic energy. And by 1910 work by Albert Einstein, notably on the photoelectric effect and on heat capacities of solids, had given evidence for discrete quanta of energy in both light and matter.

In 1913 Niels Bohr then made the suggestion that the discrete spectrum of light emitted by hydrogen atoms could be explained as being produced by electrons making transitions between orbits with discrete quantized angular momenta. By 1920 ideas from celestial mechanics had been used to develop a formalism for quantized orbits which successfully explained various features of atoms and chemical elements. But it was not clear how to extend the formalism say to a problem like propagation of light through a crystal.

In 1925, however, Werner Heisenberg suggested a new and more general formalism that became known as matrix mechanics. The original idea was to imagine describing the state of an atom in terms of an array of amplitudes for virtual oscillators with each possible frequency. Particular conditions amounting to quantization were then imposed on matrices of transitions between these, and the idea was introduced that only certain kinds of amplitude combinations could ever be observed.

In 1923 Louis de Broglie had suggested that just as light—which in optics was traditionally described in terms of waves—seemed in some respects to act like discrete particles, so conversely particles like electrons might in some respects act like waves. In 1926 Erwin Schrödinger then suggested a partial differential equation for the wave functions of particles like electrons. And when effectively restricted to a finite region, this equation allowed only certain modes, corresponding to discrete quantum states—whose properties turned out to be exactly the same as implied by matrix mechanics. In the late 1920s Paul Dirac developed a more abstract operator-based formalism. And by the end of the 1920s basic practical quantum mechanics was established in more or less the form it appears in textbooks today.

In the period since, increasing computational capabilities have allowed coupled Schrödinger equations for progressively more particles to be solved (reasonably accurate solutions for hundreds of particles can now be found), allowing ever larger studies in atomic, molecular, nuclear and solid-state physics. A notable theoretical interest starting in the 1980s was so-called quantum chaos, in which it was found that modes (wave functions) in regions like stadiums that did not yield simple analytical solutions tended to show complicated and seemingly random forms.

Basic quantum mechanics is set up to describe how fixed numbers of particles behave—say in externally applied electromagnetic or other fields. But to describe things like fields one must allow particles to be created and destroyed. In the mid-1920s there was already discussion of how to set up a formalism for this, with an underlying idea again being to think in terms of virtual oscillators—but now one for each possible state of each possible one of any number of particles.
At first this was just applied to a pure electromagnetic field of non-interacting photons, but by the end of the 1920s there was a version of quantum electrodynamics (QED) for interacting photons and electrons that is essentially the same as today. To find predictions from this theory a so-called perturbation expansion was made, with successive terms representing progressively more interactions, and each having a higher power of the so-called coupling constant α ≃ 1/137. It was immediately noticed, however, that self-interactions of particles would give rise to infinities, much as in classical electromagnetism. At first attempts were made to avoid this by modifying the basic theory (see page 1044). But by the mid-1940s detailed calculations were being done in which infinite parts were just being dropped—and the results were being found to agree rather precisely with experiments. In the late 1940s this procedure was then essentially justified by the idea of renormalization: that since in all possible QED processes only three different infinities can ever appear, these can in effect systematically be factored out from all predictions of the theory. Then in 1949 Feynman diagrams were introduced (see note below) to represent terms in the QED perturbation expansion—and the rules for these rapidly became what defined QED in essentially all practical applications.

Evaluating Feynman diagrams involved extensive algebra, and indeed stimulated the development of computer algebra (including my own interest in the field). But by the 1970s the dozen or so standard processes discussed in QED had been calculated to order α^2—and by the mid-1980s the anomalous magnetic moment of the electron had been calculated to order α^4, an accuracy of nearly one part in a trillion (see note below).

But despite the success of perturbation theory in QED it did not at first seem applicable to other issues in particle physics. The weak interactions involved in radioactive beta decay seemed too weak for anything beyond lowest order to be relevant—and in any case not renormalizable. And the strong interactions responsible for holding nuclei together (and associated for example with exchange of pions and other mesons) seemed too strong for it to make sense to do an expansion with larger numbers of individual interactions treated as less important. So this led in the 1960s to attempts to base theories just on setting up simple mathematical constraints on the overall so-called S matrix defining the mapping from incoming to outgoing quantum states. But by the end of the 1960s theoretical progress seemed blocked by basic questions about functions of several complex variables, and predictions that were made did not seem to work well.

By the early 1970s, however, there was increasing interest in so-called gauge or Yang-Mills theories formed in essence by generalizing QED to operate not just with a scalar charge, but with charges viewed as elements of non-Abelian groups. In 1972 it was shown that spontaneously broken gauge theories of the kind needed to describe weak interactions were renormalizable—allowing meaningful use of perturbation theory and Feynman diagrams. And then in 1973 it was discovered that QCD—the gauge theory for quarks and gluons with SU(3) color charges—was asymptotically free (it was known to be renormalizable), so that for processes probing sufficiently small distances, its effective coupling was small enough for perturbation theory.
By the early 1980s first-order calculations of most basic QCD processes had been done—and by the 1990s second-order corrections were also known. Schemes for adding up all Feynman diagrams with certain very simple repetitive or other structures were developed. But despite a few results about large-distance analogs of renormalizability, the question of what QCD might imply for processes at larger distances could not really be addressed by such methods.

In 1941 Richard Feynman pointed out that amplitudes in quantum theory could be worked out by using path integrals that sum with appropriate weights contributions from all possible histories of a system. (The Schrödinger equation is like a diffusion equation in imaginary time, so the path integral for it can be thought of as like an enumeration of random walks. The idea of describing random walks with path integrals was discussed from the early 1900s.) At first the path integral was viewed mostly as a curiosity, but by the late 1970s it was emerging as the standard way to define a quantum field theory. Attempts were made to see if the path integral for QCD (and later for quantum gravity) could be approximated with a few exact solutions (such as instantons) to classical field equations. By the early 1980s there was then extensive work on lattice gauge theories in which the path integral (in Euclidean space) was approximated by randomly sampling discretized field configurations. But—I suspect for reasons that I discuss in the note below—such methods were never extremely successful.

And the result is that beyond perturbation theory there is still no real example of a definitive success from standard relativistic quantum field theory. (In addition, even efforts in the context of so-called axiomatic field theory to set up mathematically rigorous formulations have run into many difficulties—with the only examples satisfying all proposed axioms typically in the end being field theories without any real interactions. In condensed matter physics there are nevertheless cases like the Kondo model where exact solutions have been found, and where the effective energy function for electrons happens to be roughly the same as in a relativistic theory.)

As mentioned on page 1044, ordinary quantum field theory in effect deals only with point particles. And indeed a recurring issue in it has been difficulty with constraints and redundant degrees of freedom—such as those associated with extended objects. (A typical goal is to find variables in which one can carry out what is known as canonical quantization: essentially applying the same straightforward transformation of equations that happens to work in ordinary elementary quantum mechanics.) One feature of string theory and its generalizations is that they define presumably consistent quantum field theories for excitations of extended objects—though an analog of quantum field theory in which whole strings can be created and destroyed has not yet been developed.

When the formalism of quantum mechanics was developed in the mid-1920s there were immediately questions about its interpretation. But it was quickly suggested that given a wave function ψ from the Schrödinger equation Abs[ψ]^2 should represent probability—and essentially all practical applications have been based on this ever since. From a conceptual point of view it has however often seemed peculiar that a supposedly fundamental theory should talk only about probabilities.
Following the introduction of the uncertainty principle and related formalism in the 1920s one idea that arose was that—in rough analogy to relativity theory—it might just be that there are only certain quantities that are observable in definite ways. But this was not enough, and by the 1930s it was being suggested that the validity of quantum mechanics might be a sign that whole new general frameworks for philosophy or logic were needed—a notion supported by the apparent need to bring consciousness into discussions about measurement in quantum mechanics (see page 1063). The peculiar character of quantum mechanics was again emphasized by the idealized experiment of Albert Einstein, Boris Podolsky and Nathan Rosen in 1935. But among most physicists the apparent lack of an ordinary mechanistic way to think about quantum mechanics ended up just being seen as another piece of evidence for the fundamental role of mathematical formalism in physics.

One way for probabilities to appear even in deterministic systems is for there to be hidden variables whose values are unknown. But following mathematical work in the early 1930s it was usually assumed that this could not be what was going on in quantum mechanics. In 1952 David Bohm did however manage to construct a somewhat elaborate model based on hidden variables that gave the same results as ordinary quantum mechanics—though it involved infinitely fast propagation of information. In the early 1960s John Bell then showed that in any hidden variables theory of a certain general type there are specific inequalities that combinations of probabilities must satisfy (see page 1064). And by the early 1980s experiments had shown that such inequalities were indeed violated in practice—so that there were in fact correlations of the kind suggested by quantum mechanics. At first these just seemed like isolated esoteric effects, but by the mid-1990s they were being codified in the field of quantum information theory, and led to constructions with names like quantum cryptography and quantum teleportation.

Particularly when viewed in terms of path integrals the standard formalism of quantum theory tends to suggest that quantum systems somehow do more computation in their evolution than classical ones. And after occasional discussion as early as the 1950s, this led by the late 1980s to extensive investigation of systems that could be viewed as quantum analogs of idealized computers. In the mid-1990s efficient procedures for integer factoring and a few other problems were suggested for such systems, and by the late 1990s small experiments on these were beginning to be done in various types of physical systems. But it is becoming increasingly unclear just how the idealizations in the underlying model really work, and to what extent quantum mechanics is actually in the end even required—as opposed, say, just to classical wave phenomena. (See page 1147.)

Partly as a result of discussions about measurement there began to be questions in the 1980s about whether ordinary quantum mechanics can describe systems containing very large numbers of particles. Experiments in the 1980s and 1990s on such phenomena as macroscopic superposition and Bose-Einstein condensation nevertheless showed that standard quantum effects still occur with trillions of atoms. But inevitably the kinds of general phenomena that I discuss in this book will also occur—leading to all sorts of behavior that at least cannot readily be foreseen just from the basic rules of quantum mechanics.
From Stephen Wolfram, A New Kind of Science.
Symmetry, Integrability and Geometry: Methods and Applications (SIGMA)
SIGMA 13 (2017), 053, 14 pages. arXiv:1704.00043
Contribution to the Special Issue on Symmetries and Integrability of Difference Equations

Symmetries of the Hirota Difference Equation
Andrei K. Pogrebkov
a) Steklov Mathematical Institute of Russian Academy of Science, Moscow, Russia
b) National Research University Higher School of Economics, Moscow, Russia
Received March 31, 2017, in final form July 02, 2017; Published online July 07, 2017

Continuous symmetries of the Hirota difference equation, commuting with shifts of independent variables, are derived by means of the dressing procedure. The action of these symmetries on the dependent variables of the equation is presented. Commutativity of these symmetries enables interpretation of their parameters as "times" of nonlinear integrable partial differential-difference and differential equations. Examples of equations resulting from this procedure, together with their Lax pairs, are given. Besides these ordinary symmetries, additional ones are introduced, and their action on the scattering data is presented.

Key words: Hirota difference equation; symmetries; integrable differential-difference and differential equations; additional symmetries.
In our discussion of manifolds, it became clear that there were various notions we could talk about as soon as the manifold was defined; we could define functions, take their derivatives, consider parameterized paths, set up tensors, and so on. Other concepts, such as the volume of a region or the length of a path, required some additional piece of structure, namely the introduction of a metric. It would be natural to think of the notion of "curvature", which we have already used informally, as something that depends on the metric. Actually this turns out to be not quite true, or at least incomplete. In fact there is one additional structure we need to introduce - a "connection" - which is characterized by the curvature. We will show how the existence of a metric implies a certain connection, whose curvature may be thought of as that of the metric.

The connection becomes necessary when we attempt to address the problem of the partial derivative not being a good tensor operator. What we would like is a covariant derivative; that is, an operator which reduces to the partial derivative in flat space with Cartesian coordinates, but transforms as a tensor on an arbitrary manifold. It is conventional to spend a certain amount of time motivating the introduction of a covariant derivative, but in fact the need is obvious; equations such as \partial_\mu T^{\mu\nu} = 0 are going to have to be generalized to curved space somehow. So let's agree that a covariant derivative would be a good thing to have, and go about setting it up.

In flat space in Cartesian coordinates, the partial derivative operator \partial_\mu is a map from (k, l) tensor fields to (k, l+1) tensor fields, which acts linearly on its arguments and obeys the Leibniz rule on tensor products. All of this continues to be true in the more general situation we would now like to consider, but the map provided by the partial derivative depends on the coordinate system used. We would therefore like to define a covariant derivative operator \nabla to perform the functions of the partial derivative, but in a way independent of coordinates. We therefore require that \nabla be a map from (k, l) tensor fields to (k, l+1) tensor fields which has these two properties:

1. linearity: \nabla(T + S) = \nabla T + \nabla S ;
2. Leibniz (product) rule: \nabla(T \otimes S) = (\nabla T) \otimes S + T \otimes (\nabla S) .

If \nabla is going to obey the Leibniz rule, it can always be written as the partial derivative plus some linear transformation. That is, to take the covariant derivative we first take the partial derivative, and then apply a correction to make the result covariant. (We aren't going to prove this reasonable-sounding statement, but Wald goes into detail if you are interested.) Let's consider what this means for the covariant derivative of a vector V^\nu. It means that, for each direction \mu, the covariant derivative \nabla_\mu will be given by the partial derivative \partial_\mu plus a correction specified by a matrix (\Gamma_\mu)^\rho{}_\sigma (an n \times n matrix, where n is the dimensionality of the manifold, for each \mu). In fact the parentheses are usually dropped and we write these matrices, known as the connection coefficients, with haphazard index placement as \Gamma^\rho_{\mu\sigma}.
We therefore have

\nabla_\mu V^\nu = \partial_\mu V^\nu + \Gamma^\nu_{\mu\lambda} V^\lambda .   (3.1)

Notice that in the second term the index originally on V has moved to the \Gamma, and a new index is summed over. If this is the expression for the covariant derivative of a vector in terms of the partial derivative, we should be able to determine the transformation properties of \Gamma^\nu_{\mu\lambda} by demanding that the left hand side be a (1, 1) tensor. That is, we want the transformation law to be

\nabla_{\mu'} V^{\nu'} = \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^{\nu'}}{\partial x^\nu} \nabla_\mu V^\nu .   (3.2)

Let's look at the left side first; we can expand it using (3.1) and then transform the parts that we understand:

\nabla_{\mu'} V^{\nu'} = \partial_{\mu'} V^{\nu'} + \Gamma^{\nu'}_{\mu'\lambda'} V^{\lambda'}
= \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^{\nu'}}{\partial x^\nu} \partial_\mu V^\nu + \frac{\partial x^\mu}{\partial x^{\mu'}} V^\nu \partial_\mu \frac{\partial x^{\nu'}}{\partial x^\nu} + \Gamma^{\nu'}_{\mu'\lambda'} \frac{\partial x^{\lambda'}}{\partial x^\lambda} V^\lambda .   (3.3)

The right side, meanwhile, can likewise be expanded:

\frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^{\nu'}}{\partial x^\nu} \nabla_\mu V^\nu = \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^{\nu'}}{\partial x^\nu} \partial_\mu V^\nu + \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^{\nu'}}{\partial x^\nu} \Gamma^\nu_{\mu\lambda} V^\lambda .   (3.4)

These last two expressions are to be equated; the first terms in each are identical and therefore cancel, so we have

\Gamma^{\nu'}_{\mu'\lambda'} \frac{\partial x^{\lambda'}}{\partial x^\lambda} V^\lambda + \frac{\partial x^\mu}{\partial x^{\mu'}} V^\lambda \partial_\mu \frac{\partial x^{\nu'}}{\partial x^\lambda} = \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^{\nu'}}{\partial x^\nu} \Gamma^\nu_{\mu\lambda} V^\lambda ,   (3.5)

where we have changed a dummy index from \nu to \lambda. This equation must be true for any vector V^\lambda, so we can eliminate that on both sides. Then the connection coefficients in the primed coordinates may be isolated by multiplying by \partial x^\lambda / \partial x^{\lambda'}. The result is

\Gamma^{\nu'}_{\mu'\lambda'} = \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^\lambda}{\partial x^{\lambda'}} \frac{\partial x^{\nu'}}{\partial x^\nu} \Gamma^\nu_{\mu\lambda} - \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^\lambda}{\partial x^{\lambda'}} \frac{\partial^2 x^{\nu'}}{\partial x^\mu \partial x^\lambda} .   (3.6)

This is not, of course, the tensor transformation law; the second term on the right spoils it. That's okay, because the connection coefficients are not the components of a tensor. They are purposefully constructed to be non-tensorial, but in such a way that the combination (3.1) transforms as a tensor - the extra terms in the transformation of the partials and the \Gamma's exactly cancel. This is why we are not so careful about index placement on the connection coefficients; they are not a tensor, and therefore you should try not to raise and lower their indices.

What about the covariant derivatives of other sorts of tensors? By similar reasoning to that used for vectors, the covariant derivative of a one-form can also be expressed as a partial derivative plus some linear transformation. But there is no reason as yet that the matrices representing this transformation should be related to the coefficients \Gamma^\nu_{\mu\lambda}. In general we could write something like

\nabla_\mu \omega_\nu = \partial_\mu \omega_\nu + \widetilde{\Gamma}^\lambda_{\mu\nu} \omega_\lambda ,   (3.7)

where \widetilde{\Gamma}^\lambda_{\mu\nu} is a new set of matrices for each \mu. (Pay attention to where all of the various indices go.) It is straightforward to derive that the transformation properties of \widetilde{\Gamma} must be the same as those of \Gamma, but otherwise no relationship has been established. To do so, we need to introduce two new properties that we would like our covariant derivative to have (in addition to the two above):

3. commutes with contractions: \nabla_\mu (T^\lambda{}_{\lambda\rho}) = (\nabla T)_\mu{}^\lambda{}_{\lambda\rho} ;
4. reduces to the partial derivative on scalars: \nabla_\mu \phi = \partial_\mu \phi .

There is no way to "derive" these properties; we are simply demanding that they be true as part of the definition of a covariant derivative. Let's see what these new properties imply.
Given some one-form field \omega_\mu and vector field V^\mu, we can take the covariant derivative of the scalar defined by \omega_\lambda V^\lambda to get

\nabla_\mu (\omega_\lambda V^\lambda) = (\partial_\mu \omega_\lambda) V^\lambda + \widetilde{\Gamma}^\sigma_{\mu\lambda} \omega_\sigma V^\lambda + \omega_\lambda (\partial_\mu V^\lambda) + \omega_\lambda \Gamma^\lambda_{\mu\rho} V^\rho .   (3.8)

But since \omega_\lambda V^\lambda is a scalar, this must also be given by the partial derivative:

\nabla_\mu (\omega_\lambda V^\lambda) = \partial_\mu (\omega_\lambda V^\lambda) = (\partial_\mu \omega_\lambda) V^\lambda + \omega_\lambda (\partial_\mu V^\lambda) .   (3.9)

This can only be true if the terms in (3.8) with connection coefficients cancel each other; that is, rearranging dummy indices, we must have

\widetilde{\Gamma}^\sigma_{\mu\lambda} \omega_\sigma V^\lambda = - \Gamma^\sigma_{\mu\lambda} \omega_\sigma V^\lambda .   (3.10)

But both \omega_\sigma and V^\lambda are completely arbitrary, so

\widetilde{\Gamma}^\sigma_{\mu\lambda} = - \Gamma^\sigma_{\mu\lambda} .   (3.11)

The two extra conditions we have imposed therefore allow us to express the covariant derivative of a one-form using the same connection coefficients as were used for the vector, but now with a minus sign (and indices matched up somewhat differently):

\nabla_\mu \omega_\nu = \partial_\mu \omega_\nu - \Gamma^\lambda_{\mu\nu} \omega_\lambda .   (3.12)

It should come as no surprise that the connection coefficients encode all of the information necessary to take the covariant derivative of a tensor of arbitrary rank. The formula is quite straightforward; for each upper index you introduce a term with a single +\Gamma, and for each lower index a term with a single -\Gamma:

\nabla_\sigma T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} = \partial_\sigma T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} + \Gamma^{\mu_1}_{\sigma\lambda} T^{\lambda \mu_2 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} + \Gamma^{\mu_2}_{\sigma\lambda} T^{\mu_1 \lambda \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} + \cdots - \Gamma^\lambda_{\sigma\nu_1} T^{\mu_1 \cdots \mu_k}{}_{\lambda \nu_2 \cdots \nu_l} - \Gamma^\lambda_{\sigma\nu_2} T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \lambda \cdots \nu_l} - \cdots .   (3.13)

This is the general expression for the covariant derivative. You can check it yourself; it comes from the set of axioms we have established, and the usual requirements that tensors of various sorts be coordinate-independent entities. Sometimes an alternative notation is used; just as commas are used for partial derivatives, semicolons are used for covariant ones:

\nabla_\sigma T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} \equiv T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l ; \sigma} .   (3.14)

Once again, I'm not a big fan of this notation.

To define a covariant derivative, then, we need to put a "connection" on our manifold, which is specified in some coordinate system by a set of coefficients \Gamma^\lambda_{\mu\nu} (n^3 = 64 independent components in n = 4 dimensions) which transform according to (3.6). (The name "connection" comes from the fact that it is used to transport vectors from one tangent space to another, as we will soon see.) There are evidently a large number of connections we could define on any manifold, and each of them implies a distinct notion of covariant differentiation. In general relativity this freedom is not a big concern, because it turns out that every metric defines a unique connection, which is the one used in GR. Let's see how that works.

The first thing to notice is that the difference of two connections is a (1, 2) tensor. If we have two sets of connection coefficients, \Gamma^\lambda_{\mu\nu} and \widehat{\Gamma}^\lambda_{\mu\nu}, their difference S_{\mu\nu}{}^\lambda = \Gamma^\lambda_{\mu\nu} - \widehat{\Gamma}^\lambda_{\mu\nu} (notice index placement) transforms as

S_{\mu'\nu'}{}^{\lambda'} = \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^\nu}{\partial x^{\nu'}} \frac{\partial x^{\lambda'}}{\partial x^\lambda} S_{\mu\nu}{}^\lambda .   (3.15)

This is just the tensor transformation law, so S_{\mu\nu}{}^\lambda is indeed a tensor. This implies that any set of connections can be expressed as some fiducial connection plus a tensorial correction.

Next notice that, given a connection specified by \Gamma^\lambda_{\mu\nu}, we can immediately form another connection simply by permuting the lower indices. That is, the set of coefficients \Gamma^\lambda_{\nu\mu} will also transform according to (3.6) (since the partial derivatives appearing in the last term can be commuted), so they determine a distinct connection.
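Since the pattern of (3.1) and (3.12), one +\Gamma term per upper index and one -\Gamma term per lower index, is purely mechanical, it is easy to implement symbolically. The following sketch (my own illustration, using sympy; the helper names are made up) computes covariant derivatives of a vector and a one-form, given arbitrary coordinate symbols and connection coefficients stored as Gamma[sigma][mu][nu]:

import sympy as sp

def cov_deriv_vector(V, Gamma, coords):
    """(nabla_mu V)^nu = d_mu V^nu + Gamma^nu_{mu lam} V^lam; returns D[mu][nu]."""
    n = len(coords)
    return [[sp.simplify(sp.diff(V[nu], coords[mu])
                         + sum(Gamma[nu][mu][lam] * V[lam] for lam in range(n)))
             for nu in range(n)] for mu in range(n)]

def cov_deriv_oneform(w, Gamma, coords):
    """(nabla_mu w)_nu = d_mu w_nu - Gamma^lam_{mu nu} w_lam; returns D[mu][nu]."""
    n = len(coords)
    return [[sp.simplify(sp.diff(w[nu], coords[mu])
                         - sum(Gamma[lam][mu][nu] * w[lam] for lam in range(n)))
             for nu in range(n)] for mu in range(n)]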
There is thus a tensor we can associate with any given connection, known as the torsion tensor, defined by

T^\lambda{}_{\mu\nu} = \Gamma^\lambda_{\mu\nu} - \Gamma^\lambda_{\nu\mu} = 2 \Gamma^\lambda_{[\mu\nu]} .   (3.16)

It is clear that the torsion is antisymmetric in its lower indices, and a connection which is symmetric in its lower indices is known as "torsion-free."

We can now define a unique connection on a manifold with a metric g_{\mu\nu} by introducing two additional properties:

1. it is torsion-free: \Gamma^\lambda_{\mu\nu} = \Gamma^\lambda_{(\mu\nu)} ;
2. it is metric-compatible: \nabla_\rho g_{\mu\nu} = 0 .

A connection is metric compatible if the covariant derivative of the metric with respect to that connection is everywhere zero. This implies a couple of nice properties. First, it's easy to show that the inverse metric also has zero covariant derivative,

\nabla_\rho g^{\mu\nu} = 0 .   (3.17)

Second, a metric-compatible covariant derivative commutes with raising and lowering of indices. Thus, for some vector field V^\lambda,

g_{\mu\lambda} \nabla_\rho V^\lambda = \nabla_\rho (g_{\mu\lambda} V^\lambda) = \nabla_\rho V_\mu .   (3.18)

With non-metric-compatible connections one must be very careful about index placement when taking a covariant derivative.

Our claim is therefore that there is exactly one torsion-free connection on a given manifold which is compatible with some given metric on that manifold. We do not want to make these two requirements part of the definition of a covariant derivative; they simply single out one of the many possible ones. We can demonstrate both existence and uniqueness by deriving a manifestly unique expression for the connection coefficients in terms of the metric. To accomplish this, we expand out the equation of metric compatibility for three different permutations of the indices:

\nabla_\rho g_{\mu\nu} = \partial_\rho g_{\mu\nu} - \Gamma^\lambda_{\rho\mu} g_{\lambda\nu} - \Gamma^\lambda_{\rho\nu} g_{\mu\lambda} = 0
\nabla_\mu g_{\nu\rho} = \partial_\mu g_{\nu\rho} - \Gamma^\lambda_{\mu\nu} g_{\lambda\rho} - \Gamma^\lambda_{\mu\rho} g_{\nu\lambda} = 0
\nabla_\nu g_{\rho\mu} = \partial_\nu g_{\rho\mu} - \Gamma^\lambda_{\nu\rho} g_{\lambda\mu} - \Gamma^\lambda_{\nu\mu} g_{\rho\lambda} = 0 .   (3.19)

We subtract the second and third of these from the first, and use the symmetry of the connection to obtain

\partial_\rho g_{\mu\nu} - \partial_\mu g_{\nu\rho} - \partial_\nu g_{\rho\mu} + 2 \Gamma^\lambda_{\mu\nu} g_{\lambda\rho} = 0 .   (3.20)

It is straightforward to solve this for the connection by multiplying by g^{\sigma\rho}. The result is

\Gamma^\sigma_{\mu\nu} = \frac{1}{2} g^{\sigma\rho} \left( \partial_\mu g_{\nu\rho} + \partial_\nu g_{\rho\mu} - \partial_\rho g_{\mu\nu} \right) .   (3.21)

This is one of the most important formulas in this subject; commit it to memory. Of course, we have only proved that if a metric-compatible and torsion-free connection exists, it must be of the form (3.21); you can check for yourself (for those of you without enough tedious computation in your lives) that the right hand side of (3.21) transforms like a connection.

This connection we have derived from the metric is the one on which conventional general relativity is based (although we will keep an open mind for a while longer). It is known by different names: sometimes the Christoffel connection, sometimes the Levi-Civita connection, sometimes the Riemannian connection. The associated connection coefficients are sometimes called Christoffel symbols and written as \{ {}^{\sigma}_{\mu\nu} \}; we will sometimes call them Christoffel symbols, but we won't use the funny notation. The study of manifolds with metrics and their associated connections is called "Riemannian geometry." As far as I can tell the study of more general connections can be traced back to Cartan, but I've never heard it called "Cartanian geometry."

Before putting our covariant derivatives to work, we should mention some miscellaneous properties. First, let's emphasize again that the connection does not have to be constructed from the metric. In ordinary flat space there is an implicit connection we use all the time - the Christoffel connection constructed from the flat metric.
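Formula (3.21) is also easy to turn into code. Here is a direct sympy transcription (an illustrative sketch, not optimized), returning the coefficients in the same Gamma[sigma][mu][nu] layout used above:

import sympy as sp

def christoffel(g, coords):
    """Gamma[s][m][n] = (1/2) g^{s r} (d_m g_{n r} + d_n g_{r m} - d_r g_{m n})."""
    dim = len(coords)
    ginv = g.inv()
    return [[[sp.simplify(sum(ginv[s, r] * (sp.diff(g[n, r], coords[m])
                                            + sp.diff(g[r, m], coords[n])
                                            - sp.diff(g[m, n], coords[r]))
                              for r in range(dim)) / 2)
              for n in range(dim)] for m in range(dim)] for s in range(dim)]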
But we could, if we chose, use a different connection, while keeping the metric flat. Also notice that the coefficients of the Christoffel connection in flat space will vanish in Cartesian coordinates, but not in curvilinear coordinate systems. Consider for example the plane in polar coordinates, with metric

ds^2 = dr^2 + r^2 \, d\theta^2 .   (3.22)

The nonzero components of the inverse metric are readily found to be g^{rr} = 1 and g^{\theta\theta} = r^{-2}. (Notice that we use r and \theta as indices in an obvious notation.) We can compute a typical connection coefficient:

\Gamma^r_{rr} = \frac{1}{2} g^{r\rho} \left( \partial_r g_{r\rho} + \partial_r g_{\rho r} - \partial_\rho g_{rr} \right) = \frac{1}{2} g^{rr} \left( \partial_r g_{rr} + \partial_r g_{rr} - \partial_r g_{rr} \right) = 0 .   (3.23)

Sadly, it vanishes. But not all of them do:

\Gamma^r_{\theta\theta} = \frac{1}{2} g^{r\rho} \left( \partial_\theta g_{\theta\rho} + \partial_\theta g_{\rho\theta} - \partial_\rho g_{\theta\theta} \right) = \frac{1}{2} g^{rr} \left( - \partial_r g_{\theta\theta} \right) = -r .   (3.24)

Continuing to turn the crank, we eventually find

\Gamma^r_{r\theta} = \Gamma^r_{\theta r} = 0 , \quad \Gamma^\theta_{rr} = 0 , \quad \Gamma^\theta_{r\theta} = \Gamma^\theta_{\theta r} = \frac{1}{r} , \quad \Gamma^\theta_{\theta\theta} = 0 .   (3.25)

The existence of nonvanishing connection coefficients in curvilinear coordinate systems is the ultimate cause of the formulas for the divergence and so on that you find in books on electricity and magnetism.

Contrariwise, even in a curved space it is still possible to make the Christoffel symbols vanish at any one point. This is just because, as we saw in the last section, we can always make the first derivative of the metric vanish at a point; so by (3.21) the connection coefficients derived from this metric will also vanish. Of course this can only be established at a point, not in some neighborhood of the point.

Another useful property is that the formula for the divergence of a vector (with respect to the Christoffel connection) has a simplified form. The covariant divergence of V^\mu is given by

\nabla_\mu V^\mu = \partial_\mu V^\mu + \Gamma^\mu_{\mu\lambda} V^\lambda .   (3.26)

It's easy to show (see pp. 106-108 of Weinberg) that the Christoffel connection satisfies

\Gamma^\mu_{\mu\lambda} = \frac{1}{\sqrt{|g|}} \partial_\lambda \sqrt{|g|} ,   (3.27)

and we therefore obtain

\nabla_\mu V^\mu = \frac{1}{\sqrt{|g|}} \partial_\mu \left( \sqrt{|g|} \, V^\mu \right) .   (3.28)

There are also formulas for the divergences of higher-rank tensors, but they are generally not such a great simplification.

As the last factoid we should mention about connections, let us emphasize (once more) that the exterior derivative is a well-defined tensor in the absence of any connection. The reason this needs to be emphasized is that, if you happen to be using a symmetric (torsion-free) connection, the exterior derivative (defined to be the antisymmetrized partial derivative) happens to be equal to the antisymmetrized covariant derivative:

\nabla_{[\mu} \omega_{\nu]} = \partial_{[\mu} \omega_{\nu]} .   (3.29)

This has led some misfortunate souls to fret about the "ambiguity" of the exterior derivative in spaces with torsion, where the above simplification does not occur. There is no ambiguity: the exterior derivative does not involve the connection, no matter what connection you happen to be using, and therefore the torsion never enters the formula for the exterior derivative of anything.

Before moving on, let's review the process by which we have been adding structures to our mathematical constructs. We started with the basic notion of a set, which you were presumed to know (informally, if not rigorously). We introduced the concept of open subsets of our set; this is equivalent to introducing a topology, and promoted the set to a topological space. Then by demanding that each open set look like a region of \mathbf{R}^n (with n the same for each set) and that the coordinate charts be smoothly sewn together, the topological space became a manifold. A manifold is simultaneously a very flexible and powerful structure, and comes equipped naturally with a tangent bundle, tensor bundles of various ranks, the ability to take exterior derivatives, and so forth.
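As a check, the polar-coordinate results (3.24)-(3.25) and the divergence shortcut (3.28) can be verified with the christoffel helper sketched above (again purely illustrative):

import sympy as sp
# assumes the christoffel() helper sketched after (3.21)

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])      # ds^2 = dr^2 + r^2 dtheta^2, eq. (3.22)
Gamma = christoffel(g, coords)
print(Gamma[0][1][1])                   # -r   : Gamma^r_{theta theta}, eq. (3.24)
print(Gamma[1][0][1], Gamma[1][1][0])   # 1/r  : Gamma^theta_{r theta}, eq. (3.25)

# Divergence the long way, (3.26), versus the shortcut (3.28):
V = [sp.Function('Vr')(r, th), sp.Function('Vth')(r, th)]
div_gamma = sum(sp.diff(V[m], coords[m]) for m in range(2)) + \
            sum(Gamma[m][m][lam] * V[lam] for m in range(2) for lam in range(2))
sqrtg = r                               # sqrt|g| = r for this metric
div_short = sum(sp.diff(sqrtg * V[m], coords[m]) for m in range(2)) / sqrtg
print(sp.simplify(div_gamma - div_short))   # -> 0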
We then proceeded to put a metric on the manifold, resulting in a manifold with metric (or sometimes "Riemannian manifold"). Independently of the metric we found we could introduce a connection, allowing us to take covariant derivatives. Once we have a metric, however, there is automatically a unique torsion-free metric-compatible connection. (In principle there is nothing to stop us from introducing more than one connection, or more than one metric, on any given manifold.) The situation is thus as portrayed in the following diagram.

[Figure 3.1: the hierarchy of structures, from a set to a topological space to a manifold, and thence to a manifold with connection or with metric.]

Having set up the machinery of connections, the first thing we will do is discuss parallel transport. Recall that in flat space it was unnecessary to be very careful about the fact that vectors were elements of tangent spaces defined at individual points; it is actually very natural to compare vectors at different points (where by "compare" we mean add, subtract, take the dot product, etc.). The reason why it is natural is because it makes sense, in flat space, to "move a vector from one point to another while keeping it constant." Then once we get the vector from one point to another we can do the usual operations allowed in a vector space.

[Figure 3.2: a vector moved around flat space while kept constant.]

The concept of moving a vector along a path, keeping it constant all the while, is known as parallel transport. As we shall see, parallel transport is defined whenever we have a connection; the intuitive manipulation of vectors in flat space makes implicit use of the Christoffel connection on this space. The crucial difference between flat and curved spaces is that, in a curved space, the result of parallel transporting a vector from one point to another will depend on the path taken between the points. Without yet assembling the complete mechanism of parallel transport, we can use our intuition about the two-sphere to see that this is the case. Start with a vector on the equator, pointing along a line of constant longitude. Parallel transport it up to the north pole along a line of longitude in the obvious way. Then take the original vector, parallel transport it along the equator by an angle \theta, and then move it up to the north pole as before. It is clear that the vector, parallel transported along two paths, arrived at the same destination with two different values (rotated by \theta).

[Figure 3.3: two routes for parallel transporting a vector from the equator to the north pole of a two-sphere.]

It therefore appears as if there is no natural way to uniquely move a vector from one tangent space to another; we can always parallel transport it, but the result depends on the path, and there is no natural choice of which path to take. Unlike some of the problems we have encountered, there is no solution to this one - we simply must learn to live with the fact that two vectors can only be compared in a natural way if they are elements of the same tangent space. For example, two particles passing by each other have a well-defined relative velocity (which cannot be greater than the speed of light). But two particles at different points on a curved manifold do not have any well-defined notion of relative velocity - the concept simply makes no sense. Of course, in certain special situations it is still useful to talk as if it did make sense, but it is necessary to understand that occasional usefulness is not a substitute for rigorous definition. In cosmology, for example, the light from distant galaxies is redshifted with respect to the frequencies we would observe from a nearby stationary source.
Since this phenomenon bears such a close resemblance to the conventional Doppler effect due to relative motion, it is very tempting to say that the galaxies are "receding away from us" at a speed defined by their redshift. At a rigorous level this is nonsense, what Wittgenstein would call a "grammatical mistake" - the galaxies are not receding, since the notion of their velocity with respect to us is not well-defined. What is actually happening is that the metric of spacetime between us and the galaxies has changed (the universe has expanded) along the path of the photon from here to there, leading to an increase in the wavelength of the light. As an example of how you can go wrong, naive application of the Doppler formula to the redshift of galaxies implies that some of them are receding faster than light, in apparent contradiction with relativity. The resolution of this apparent paradox is simply that the very notion of their recession should not be taken literally.

Enough about what we cannot do; let's see what we can. Parallel transport is supposed to be the curved-space generalization of the concept of "keeping the vector constant" as we move it along a path; similarly for a tensor of arbitrary rank. Given a curve x^\mu(\lambda), the requirement of constancy of a tensor T along this curve in flat space is simply \frac{dT}{d\lambda} = \frac{dx^\mu}{d\lambda} \frac{\partial T}{\partial x^\mu} = 0. We therefore define the covariant derivative along the path to be given by an operator

\frac{D}{d\lambda} = \frac{dx^\mu}{d\lambda} \nabla_\mu .   (3.30)

We then define parallel transport of the tensor T along the path x^\mu(\lambda) to be the requirement that, along the path,

\left( \frac{D}{d\lambda} T \right)^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} \equiv \frac{dx^\sigma}{d\lambda} \nabla_\sigma T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} = 0 .   (3.31)

This is a well-defined tensor equation, since both the tangent vector dx^\mu/d\lambda and the covariant derivative \nabla T are tensors. This is known as the equation of parallel transport. For a vector it takes the form

\frac{d}{d\lambda} V^\mu + \Gamma^\mu_{\sigma\rho} \frac{dx^\sigma}{d\lambda} V^\rho = 0 .   (3.32)

We can look at the parallel transport equation as a first-order differential equation defining an initial-value problem: given a tensor at some point along the path, there will be a unique continuation of the tensor to other points along the path such that the continuation solves (3.31). We say that such a tensor is parallel transported.

The notion of parallel transport is obviously dependent on the connection, and different connections lead to different answers. If the connection is metric-compatible, the metric is always parallel transported with respect to it:

\frac{D}{d\lambda} g_{\mu\nu} = \frac{dx^\sigma}{d\lambda} \nabla_\sigma g_{\mu\nu} = 0 .   (3.33)

It follows that the inner product of two parallel-transported vectors is preserved. That is, if V^\mu and W^\nu are parallel-transported along a curve x^\sigma(\lambda), we have

\frac{D}{d\lambda} \left( g_{\mu\nu} V^\mu W^\nu \right) = 0 .   (3.34)

This means that parallel transport with respect to a metric-compatible connection preserves the norm of vectors, the sense of orthogonality, and so on.

One thing they don't usually tell you in GR books is that you can write down an explicit and general solution to the parallel transport equation, although it's somewhat formal.
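The parallel transport equation (3.32) is just a first-order ODE, so it can be integrated numerically. The sketch below (my own illustration, not from the notes) transports a vector once around a circle of constant colatitude \theta_0 on the unit two-sphere, where the nonvanishing Christoffel symbols are \Gamma^\theta_{\phi\phi} = -\sin\theta\cos\theta and \Gamma^\phi_{\theta\phi} = \Gamma^\phi_{\phi\theta} = \cot\theta. The vector should come back rotated by the enclosed solid angle 2\pi(1 - \cos\theta_0), a concrete instance of the path dependence described above.

import numpy as np
from scipy.integrate import solve_ivp

th0 = np.pi / 3                      # colatitude of the transport circle

def rhs(phi, V):
    # dV^mu/dphi = -Gamma^mu_{phi rho} V^rho along the curve theta = th0
    Vth, Vph = V
    return [np.sin(th0) * np.cos(th0) * Vph,   # from Gamma^theta_{phi phi}
            -Vth / np.tan(th0)]                # from Gamma^phi_{phi theta}

sol = solve_ivp(rhs, [0.0, 2 * np.pi], [1.0, 0.0], rtol=1e-10, atol=1e-12)
a = sol.y[0, -1]                     # orthonormal theta-component
b = np.sin(th0) * sol.y[1, -1]       # orthonormal phi-component
print("rotation:", np.arctan2(b, a) % (2 * np.pi))
print("expected:", 2 * np.pi * (1 - np.cos(th0)))

For th0 = pi/3 both numbers come out to pi, and the norm of the transported vector is preserved throughout, as (3.34) requires of a metric-compatible connection.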
First notice that for some path \gamma: \lambda \rightarrow x^\sigma(\lambda), solving the parallel transport equation for a vector V^\mu amounts to finding a matrix P^\mu{}_\rho(\lambda, \lambda_0) which relates the vector at its initial value V^\mu(\lambda_0) to its value somewhere later down the path:

V^\mu(\lambda) = P^\mu{}_\rho(\lambda, \lambda_0) V^\rho(\lambda_0) .   (3.35)

Of course the matrix P^\mu{}_\rho(\lambda, \lambda_0), known as the parallel propagator, depends on the path \gamma (although it's hard to find a notation which indicates this without making \gamma look like an index). If we define

A^\mu{}_\rho(\lambda) = - \Gamma^\mu_{\sigma\rho} \frac{dx^\sigma}{d\lambda} ,   (3.36)

where the quantities on the right hand side are evaluated at x^\nu(\lambda), then the parallel transport equation becomes

\frac{d}{d\lambda} V^\mu = A^\mu{}_\rho V^\rho .   (3.37)

Since the parallel propagator must work for any vector, substituting (3.35) into (3.37) shows that P^\mu{}_\rho(\lambda, \lambda_0) also obeys this equation:

\frac{d}{d\lambda} P^\mu{}_\rho(\lambda, \lambda_0) = A^\mu{}_\sigma(\lambda) P^\sigma{}_\rho(\lambda, \lambda_0) .   (3.38)

To solve this equation, first integrate both sides:

P^\mu{}_\rho(\lambda, \lambda_0) = \delta^\mu_\rho + \int_{\lambda_0}^{\lambda} A^\mu{}_\sigma(\eta) P^\sigma{}_\rho(\eta, \lambda_0) \, d\eta .   (3.39)

The Kronecker delta, it is easy to see, provides the correct normalization for \lambda = \lambda_0. We can solve (3.39) by iteration, taking the right hand side and plugging it into itself repeatedly, giving

P^\mu{}_\rho(\lambda, \lambda_0) = \delta^\mu_\rho + \int_{\lambda_0}^{\lambda} A^\mu{}_\rho(\eta) \, d\eta + \int_{\lambda_0}^{\lambda} \int_{\lambda_0}^{\eta_2} A^\mu{}_\sigma(\eta_2) A^\sigma{}_\rho(\eta_1) \, d\eta_1 \, d\eta_2 + \cdots .   (3.40)

The nth term in this series is an integral over an n-dimensional right triangle, or n-simplex.

[Figure 3.4: the integration regions for the first few terms of the iterated series, from an interval to a right triangle to a tetrahedron.]

It would simplify things if we could consider such an integral to be over an n-cube instead of an n-simplex; is there some way to do this? There are n! such simplices in each cube, so we would have to multiply by 1/n! to compensate for this extra volume. But we also want to get the integrand right; using matrix notation, the integrand at nth order is A(\eta_n) A(\eta_{n-1}) \cdots A(\eta_1), but with the special property that \eta_n \geq \eta_{n-1} \geq \cdots \geq \eta_1. We therefore define the path-ordering symbol, \mathcal{P}, to ensure that this condition holds. In other words, the expression

\mathcal{P} \left[ A(\eta_n) A(\eta_{n-1}) \cdots A(\eta_1) \right]   (3.41)

stands for the product of the n matrices A(\eta_i), ordered in such a way that the largest value of \eta_i is on the left, and each subsequent value of \eta_i is less than or equal to the previous one. We then can express the nth-order term in (3.40) as

\int_{\lambda_0}^{\lambda} \int_{\lambda_0}^{\eta_n} \cdots \int_{\lambda_0}^{\eta_2} A(\eta_n) A(\eta_{n-1}) \cdots A(\eta_1) \, d^n\eta = \frac{1}{n!} \int_{\lambda_0}^{\lambda} \cdots \int_{\lambda_0}^{\lambda} \mathcal{P} \left[ A(\eta_n) A(\eta_{n-1}) \cdots A(\eta_1) \right] d^n\eta .   (3.42)

This expression contains no substantive statement about the matrices A(\eta_i); it is just notation. But we can now write (3.40) in matrix form as

P(\lambda, \lambda_0) = 1 + \sum_{n=1}^{\infty} \frac{1}{n!} \int_{\lambda_0}^{\lambda} \cdots \int_{\lambda_0}^{\lambda} \mathcal{P} \left[ A(\eta_n) \cdots A(\eta_1) \right] d^n\eta .   (3.43)

This formula is just the series expression for an exponential; we therefore say that the parallel propagator is given by the path-ordered exponential

P(\lambda, \lambda_0) = \mathcal{P} \exp \left( \int_{\lambda_0}^{\lambda} A(\eta) \, d\eta \right) ,   (3.44)

where once again this is just notation; the path-ordered exponential is defined to be the right hand side of (3.43). We can write it more explicitly as

P^\mu{}_\nu(\lambda, \lambda_0) = \mathcal{P} \exp \left( - \int_{\lambda_0}^{\lambda} \Gamma^\mu_{\sigma\nu} \frac{dx^\sigma}{d\eta} \, d\eta \right) .   (3.45)

It's nice to have an explicit formula, even if it is rather abstract. The same kind of expression appears in quantum field theory as "Dyson's Formula," where it arises because the Schrödinger equation for the time-evolution operator has the same form as (3.38).

As an aside, an especially interesting example of the parallel propagator occurs when the path is a loop, starting and ending at the same point.
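Numerically, the path-ordered exponential (3.44) is most easily approximated by chopping the parameter range into small steps and multiplying short-step matrix exponentials, with later steps acting on the left. The toy sketch below (illustrative; the matrix A(\eta) is chosen arbitrarily so that its values at different \eta fail to commute, which is exactly when the ordering matters) compares this product against direct integration of (3.38):

import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

def A(eta):                          # arbitrary non-commuting family of matrices
    return np.array([[0.0, eta], [-1.0, 0.0]])

lam0, lam1, n = 0.0, 1.0, 2000
d = (lam1 - lam0) / n
P = np.eye(2)
for i in range(n):
    eta = lam0 + (i + 0.5) * d       # midpoint of the i-th step
    P = expm(A(eta) * d) @ P         # each new factor multiplies on the LEFT

# Reference: integrate dP/dlambda = A(lambda) P, eq. (3.38), directly
sol = solve_ivp(lambda t, y: (A(t) @ y.reshape(2, 2)).ravel(),
                [lam0, lam1], np.eye(2).ravel(), rtol=1e-12, atol=1e-14)
print(np.max(np.abs(P - sol.y[:, -1].reshape(2, 2))))   # small; shrinks as n grows

Reversing the order of the factors gives a visibly different matrix, which is the whole point of the \mathcal{P} symbol.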
Then if the connection is metric-compatible, the resulting matrix will just be a Lorentz transformation on the tangent space at the point. This transformation is known as the "holonomy" of the loop. If you know the holonomy of every possible loop, that turns out to be equivalent to knowing the metric. This fact has led Ashtekar and his collaborators to examine general relativity in the "loop representation," where the fundamental variables are holonomies rather than the explicit metric. They have made some progress towards quantizing the theory in this approach, although the jury is still out about how much further progress can be made.

With parallel transport understood, the next logical step is to discuss geodesics. A geodesic is the curved-space generalization of the notion of a "straight line" in Euclidean space. We all know what a straight line is: it's the path of shortest distance between two points. But there is an equally good definition - a straight line is a path which parallel transports its own tangent vector. On a manifold with an arbitrary (not necessarily Christoffel) connection, these two concepts do not quite coincide, and we should discuss them separately.

We'll take the second definition first, since it is computationally much more straightforward. The tangent vector to a path x^\mu(\lambda) is dx^\mu/d\lambda. The condition that it be parallel transported is thus

\frac{D}{d\lambda} \frac{dx^\mu}{d\lambda} = 0 ,   (3.46)

or alternatively

\frac{d^2 x^\mu}{d\lambda^2} + \Gamma^\mu_{\rho\sigma} \frac{dx^\rho}{d\lambda} \frac{dx^\sigma}{d\lambda} = 0 .   (3.47)

This is the geodesic equation, another one which you should memorize. We can easily see that it reproduces the usual notion of straight lines if the connection coefficients are the Christoffel symbols in Euclidean space; in that case we can choose Cartesian coordinates in which \Gamma^\mu_{\rho\sigma} = 0, and the geodesic equation is just d^2 x^\mu / d\lambda^2 = 0, which is the equation for a straight line.

That was embarrassingly simple; let's turn to the more nontrivial case of the shortest distance definition. As we know, there are various subtleties involved in the definition of distance in a Lorentzian spacetime; for null paths the distance is zero, for timelike paths it's more convenient to use the proper time, etc. So in the name of simplicity let's do the calculation just for a timelike path - the resulting equation will turn out to be good for any path, so we are not losing any generality. We therefore consider the proper time functional,

\tau = \int \left( - g_{\mu\nu} \frac{dx^\mu}{d\lambda} \frac{dx^\nu}{d\lambda} \right)^{1/2} d\lambda ,   (3.48)

where the integral is over the path. To search for shortest-distance paths, we will do the usual calculus of variations treatment to seek extrema of this functional. (In fact they will turn out to be curves of maximum proper time.)

We want to consider the change in the proper time under infinitesimal variations of the path,

x^\mu \rightarrow x^\mu + \delta x^\mu
g_{\mu\nu} \rightarrow g_{\mu\nu} + \delta x^\sigma \partial_\sigma g_{\mu\nu} .   (3.49)

(The second line comes from Taylor expansion in curved spacetime, which as you can see uses the partial derivative, not the covariant derivative.)
Plugging this into (3.48), we get

$$\tau + \delta\tau = \int \left[-g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda} - \partial_\sigma g_{\mu\nu}\,\delta x^\sigma\,\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda} - 2 g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{d(\delta x^\nu)}{d\lambda}\right]^{1/2} d\lambda\ . \qquad (3.50)$$

Since $\delta x^\sigma$ is assumed to be small, we can expand the square root of the expression in square brackets to find

$$\delta\tau = \int \left(-g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}\right)^{-1/2}\left(-\frac{1}{2}\partial_\sigma g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}\,\delta x^\sigma - g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{d(\delta x^\nu)}{d\lambda}\right) d\lambda\ . \qquad (3.51)$$

It is helpful at this point to change the parameterization of our curve from $\lambda$, which was arbitrary, to the proper time $\tau$ itself, using

$$\frac{d\tau}{d\lambda} = \left(-g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}\right)^{1/2}\ . \qquad (3.52)$$

We plug this into (3.51) (note: we plug it in for every appearance of $d\lambda$) to obtain

$$\delta\tau = \int \left(-\frac{1}{2}\partial_\sigma g_{\mu\nu}\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau}\,\delta x^\sigma - g_{\mu\sigma}\frac{dx^\mu}{d\tau}\frac{d(\delta x^\sigma)}{d\tau}\right) d\tau$$
$$= \int \left(-\frac{1}{2}\partial_\sigma g_{\mu\nu}\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau} + \frac{d}{d\tau}\left(g_{\mu\sigma}\frac{dx^\mu}{d\tau}\right)\right)\delta x^\sigma\, d\tau\ , \qquad (3.53)$$

where in the last line we have integrated by parts, avoiding possible boundary contributions by demanding that the variation $\delta x^\sigma$ vanish at the endpoints of the path. Since we are searching for stationary points, we want $\delta\tau$ to vanish for any variation; this implies

$$-\frac{1}{2}\partial_\sigma g_{\mu\nu}\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau} + \partial_\nu g_{\mu\sigma}\frac{dx^\nu}{d\tau}\frac{dx^\mu}{d\tau} + g_{\mu\sigma}\frac{d^2x^\mu}{d\tau^2} = 0\ , \qquad (3.54)$$

where we have used $dg_{\mu\sigma}/d\tau = (dx^\nu/d\tau)\,\partial_\nu g_{\mu\sigma}$. Some shuffling of dummy indices reveals

$$g_{\mu\sigma}\frac{d^2x^\mu}{d\tau^2} + \frac{1}{2}\left(\partial_\mu g_{\nu\sigma} + \partial_\nu g_{\sigma\mu} - \partial_\sigma g_{\mu\nu}\right)\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau} = 0\ , \qquad (3.55)$$

and multiplying by the inverse metric finally leads to

$$\frac{d^2x^\rho}{d\tau^2} + \Gamma^\rho_{\mu\nu}\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau} = 0\ . \qquad (3.56)$$

We see that this is precisely the geodesic equation (3.47), but with the specific choice of Christoffel connection (3.21). Thus, on a manifold with metric, extremals of the length functional are curves which parallel transport their tangent vector with respect to the Christoffel connection associated with that metric. It doesn't matter if there is any other connection defined on the same manifold. Of course, in GR the Christoffel connection is the only one which is used, so the two notions are the same.

The primary usefulness of geodesics in general relativity is that they are the paths followed by unaccelerated particles. In fact, the geodesic equation can be thought of as the generalization of Newton's law $\mathbf{f} = m\mathbf{a}$ for the case $\mathbf{f} = 0$. It is also possible to introduce forces by adding terms to the right hand side; in fact, looking back to the expression (1.103) for the Lorentz force in special relativity, it is tempting to guess that the equation of motion for a particle of mass m and charge q in general relativity should be

$$\frac{d^2x^\mu}{d\tau^2} + \Gamma^\mu_{\rho\sigma}\frac{dx^\rho}{d\tau}\frac{dx^\sigma}{d\tau} = \frac{q}{m}\, F^\mu{}_\nu\,\frac{dx^\nu}{d\tau}\ . \qquad (3.57)$$

We will talk about this more later, but in fact your guess would be correct.

Having boldly derived these expressions, we should say some more careful words about the parameterization of a geodesic path. When we presented the geodesic equation as the requirement that the tangent vector be parallel transported, (3.47), we parameterized our path with some parameter $\lambda$, whereas when we found the formula (3.56) for the extremal of the spacetime interval we wound up with a very specific parameterization, the proper time. Of course from the form of (3.56) it is clear that a transformation

$$\tau \rightarrow \lambda = a\tau + b\ , \qquad (3.58)$$

for some constants a and b, leaves the equation invariant. Any parameter related to the proper time in this way is called an affine parameter, and is just as good as the proper time for parameterizing a geodesic. What was hidden in our derivation of (3.47) was that the demand that the tangent vector be parallel transported actually constrains the parameterization of the curve, specifically to one related to the proper time by (3.58). In other words, if you start at some point and with some initial direction, and then construct a curve by beginning to walk in that direction and keeping your tangent vector parallel transported, you will not only define a path in the manifold but also (up to linear transformations) define the parameter along the path. Of course, there is nothing to stop you from using any other parameterization you like, but then (3.47) will not be satisfied. More generally you will satisfy an equation of the form

$$\frac{d^2x^\mu}{d\alpha^2} + \Gamma^\mu_{\rho\sigma}\frac{dx^\rho}{d\alpha}\frac{dx^\sigma}{d\alpha} = f(\alpha)\frac{dx^\mu}{d\alpha}\ , \qquad (3.59)$$

for some parameter $\alpha$ and some function $f(\alpha)$. Conversely, if (3.59) is satisfied along a curve you can always find an affine parameter $\lambda(\alpha)$ for which the geodesic equation (3.47) will be satisfied.
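Since (3.47) is just a set of coupled ODEs once the connection coefficients are known, it can be integrated numerically. As a hedged sketch (mine, not from the notes): the flat plane in polar coordinates has nonvanishing Christoffel symbols $\Gamma^r_{\phi\phi} = -r$ and $\Gamma^\phi_{r\phi} = \Gamma^\phi_{\phi r} = 1/r$ (a standard fact, not derived here), so a geodesic of this metric, integrated with a generic Runge-Kutta step, must trace out an ordinary straight line.

```python
import numpy as np

def geodesic_rhs(state):
    """Right hand side of eq. (3.47) in polar coordinates on the flat plane."""
    r, phi, vr, vphi = state
    ar = r * vphi**2              # -Gamma^r_{phi phi} vphi vphi
    aphi = -2.0 * vr * vphi / r   # -2 Gamma^phi_{r phi} vr vphi
    return np.array([vr, vphi, ar, aphi])

def rk4_step(f, y, h):
    k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
    return y + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

# Start at (r, phi) = (1, 0) moving tangentially: in Cartesian terms this is
# the straight vertical line x = 1, parameterized by arc length.
y = np.array([1.0, 0.0, 0.0, 1.0])  # (r, phi, dr/dl, dphi/dl)
for _ in range(2000):
    y = rk4_step(geodesic_rhs, y, 1e-3)

r, phi = y[0], y[1]
print(r * np.cos(phi))  # stays close to 1: the geodesic is the line x = 1
```

The point of the example is the one made in the text: the connection coefficients are nonzero in these coordinates, yet the curve that parallel transports its own tangent vector is still a straight line of the flat metric.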
An important property of geodesics in a spacetime with Lorentzian metric is that the character (timelike/null/spacelike) of the geodesic (relative to a metric-compatible connection) never changes. This is simply because parallel transport preserves inner products, and the character is determined by the inner product of the tangent vector with itself. This is why we were consistent to consider purely timelike paths when we derived (3.56); for spacelike paths we would have derived the same equation, since the only difference is an overall minus sign in the final answer. There are also null geodesics, which satisfy the same equation, except that the proper time cannot be used as a parameter (some set of allowed parameters will exist, related to each other by linear transformations). You can derive this fact either from the simple requirement that the tangent vector be parallel transported, or by extending the variation of (3.48) to include all non-spacelike paths.

Let's now explain the earlier remark that timelike geodesics are maxima of the proper time. The reason we know this is true is that, given any timelike curve (geodesic or not), we can approximate it to arbitrary accuracy by a null curve. To do this all we have to do is to consider "jagged" null curves which follow the timelike one:

[Figure 3.5]

As we increase the number of sharp corners, the null curve comes closer and closer to the timelike curve while still having zero path length. Timelike geodesics cannot therefore be curves of minimum proper time, since they are always infinitesimally close to curves of zero proper time; in fact they maximize the proper time. (This is how you can remember which twin in the twin paradox ages more -- the one who stays home is basically on a geodesic, and therefore experiences more proper time.) Of course even this is being a little cavalier; actually every time we say "maximize" or "minimize" we should add the modifier "locally." It is often the case that between two points on a manifold there is more than one geodesic. For instance, on $S^2$ we can draw a great circle through any two points, and imagine travelling between them either the short way or the long way around. One of these is obviously longer than the other, although both are stationary points of the length functional.

The final fact about geodesics before we move on to curvature proper is their use in mapping the tangent space at a point p to a local neighborhood of p. To do this we notice that any geodesic $x^\mu(\lambda)$ which passes through p can be specified by its behavior at p; let us choose the parameter value to be $\lambda(p) = 0$, and the tangent vector at p to be

$$\frac{dx^\mu}{d\lambda}(\lambda = 0) = k^\mu\ , \qquad (3.60)$$

for $k^\mu$ some vector at p (some element of $T_p$).
Then there will be a unique point on the manifold M which lies on this geodesic where the parameter has the value $\lambda = 1$. We define the exponential map at p, $\exp_p : T_p \rightarrow M$, via

$$\exp_p(k^\mu) = x^\nu(\lambda = 1)\ , \qquad (3.61)$$

where $x^\nu(\lambda)$ solves the geodesic equation subject to (3.60).

[Figure 3.6]

For some set of tangent vectors $k^\mu$ near the zero vector, this map will be well-defined, and in fact invertible. Thus in the neighborhood of p given by the range of the map on this set of tangent vectors, the tangent vectors themselves define a coordinate system on the manifold. In this coordinate system, any geodesic through p is expressed trivially as

$$x^\mu(\lambda) = \lambda k^\mu\ , \qquad (3.62)$$

for some appropriate vector $k^\mu$.

We won't go into detail about the properties of the exponential map, since in fact we won't be using it much, but it's important to emphasize that the range of the map is not necessarily the whole manifold, and the domain is not necessarily the whole tangent space. The range can fail to be all of M simply because there can be two points which are not connected by any geodesic. (In a Euclidean signature metric this is impossible, but not in a Lorentzian spacetime.) The domain can fail to be all of $T_p$ because a geodesic may run into a singularity, which we think of as "the edge of the manifold." Manifolds which have such singularities are known as geodesically incomplete. This is not merely a problem for careful mathematicians; in fact the "singularity theorems" of Hawking and Penrose state that, for reasonable matter content (no negative energies), spacetimes in general relativity are almost guaranteed to be geodesically incomplete. As examples, the two most useful spacetimes in GR -- the Schwarzschild solution describing black holes and the Friedmann-Robertson-Walker solutions describing homogeneous, isotropic cosmologies -- both feature important singularities.

Having set up the machinery of parallel transport and covariant derivatives, we are at last prepared to discuss curvature proper. The curvature is quantified by the Riemann tensor, which is derived from the connection. The idea behind this measure of curvature is that we know what we mean by "flatness" of a connection -- the conventional (and usually implicit) Christoffel connection associated with a Euclidean or Minkowskian metric has a number of properties which can be thought of as different manifestations of flatness. These include the fact that parallel transport around a closed loop leaves a vector unchanged, that covariant derivatives of tensors commute, and that initially parallel geodesics remain parallel. As we shall see, the Riemann tensor arises when we study how any of these properties are altered in more general contexts.

We have already argued, using the two-sphere as an example, that parallel transport of a vector around a closed loop in a curved space will lead to a transformation of the vector. The resulting transformation depends on the total curvature enclosed by the loop; it would be more useful to have a local description of the curvature at each point, which is what the Riemann tensor is supposed to provide. One conventional way to introduce the Riemann tensor, therefore, is to consider parallel transport around an infinitesimal loop. We are not going to do that here, but take a more direct route. (Most of the presentations in the literature are either sloppy, or correct but very difficult to follow.)
Nevertheless, even without working through the details, it is possible to see what form the answer should take. Imagine that we parallel transport a vector $V^\sigma$ around a closed loop defined by two vectors $A^\nu$ and $B^\mu$:

[Figure 3.7]

The (infinitesimal) lengths of the sides of the loop are $\delta a$ and $\delta b$, respectively. Now, we know the action of parallel transport is independent of coordinates, so there should be some tensor which tells us how the vector changes when it comes back to its starting point; it will be a linear transformation on a vector, and therefore involve one upper and one lower index. But it will also depend on the two vectors A and B which define the loop; therefore there should be two additional lower indices to contract with $A^\nu$ and $B^\mu$. Furthermore, the tensor should be antisymmetric in these two indices, since interchanging the vectors corresponds to traversing the loop in the opposite direction, and should give the inverse of the original answer. (This is consistent with the fact that the transformation should vanish if A and B are the same vector.) We therefore expect that the expression for the change $\delta V^\rho$ experienced by this vector when parallel transported around the loop should be of the form

$$\delta V^\rho = (\delta a)(\delta b)\, A^\nu B^\mu\, R^\rho{}_{\sigma\mu\nu}\, V^\sigma\ , \qquad (3.63)$$

where $R^\rho{}_{\sigma\mu\nu}$ is a (1, 3) tensor known as the Riemann tensor (or simply "curvature tensor"). It is antisymmetric in the last two indices:

$$R^\rho{}_{\sigma\mu\nu} = -R^\rho{}_{\sigma\nu\mu}\ . \qquad (3.64)$$

(Of course, if (3.63) is taken as a definition of the Riemann tensor, there is a convention that needs to be chosen for the ordering of the indices. There is no agreement at all on what this convention should be, so be careful.)

Knowing what we do about parallel transport, we could very carefully perform the necessary manipulations to see what happens to the vector under this operation, and the result would be a formula for the curvature tensor in terms of the connection coefficients. It is much quicker, however, to consider a related operation, the commutator of two covariant derivatives. The relationship between this and parallel transport around a loop should be evident; the covariant derivative of a tensor in a certain direction measures how much the tensor changes relative to what it would have been if it had been parallel transported (since the covariant derivative of a tensor in a direction along which it is parallel transported is zero). The commutator of two covariant derivatives, then, measures the difference between parallel transporting the tensor first one way and then the other, versus the opposite ordering.

[Figure 3.8]

The actual computation is very straightforward. Considering a vector field $V^\rho$, we take

$$[\nabla_\mu, \nabla_\nu]V^\rho = \nabla_\mu\nabla_\nu V^\rho - \nabla_\nu\nabla_\mu V^\rho$$
$$= \partial_\mu(\nabla_\nu V^\rho) - \Gamma^\lambda_{\mu\nu}\nabla_\lambda V^\rho + \Gamma^\rho_{\mu\sigma}\nabla_\nu V^\sigma - (\mu \leftrightarrow \nu)$$
$$= \left(\partial_\mu\Gamma^\rho_{\nu\sigma} - \partial_\nu\Gamma^\rho_{\mu\sigma} + \Gamma^\rho_{\mu\lambda}\Gamma^\lambda_{\nu\sigma} - \Gamma^\rho_{\nu\lambda}\Gamma^\lambda_{\mu\sigma}\right)V^\sigma - 2\Gamma^\lambda_{[\mu\nu]}\nabla_\lambda V^\rho\ . \qquad (3.65)$$

In the last step we have relabeled some dummy indices and eliminated some terms that cancel when antisymmetrized. We recognize that the last term is simply the torsion tensor, and that the left hand side is manifestly a tensor; therefore the expression in parentheses must be a tensor itself.
We write

$$[\nabla_\mu, \nabla_\nu]V^\rho = R^\rho{}_{\sigma\mu\nu}V^\sigma - T^\lambda{}_{\mu\nu}\nabla_\lambda V^\rho\ , \qquad (3.66)$$

where the Riemann tensor is identified as

$$R^\rho{}_{\sigma\mu\nu} = \partial_\mu\Gamma^\rho_{\nu\sigma} - \partial_\nu\Gamma^\rho_{\mu\sigma} + \Gamma^\rho_{\mu\lambda}\Gamma^\lambda_{\nu\sigma} - \Gamma^\rho_{\nu\lambda}\Gamma^\lambda_{\mu\sigma}\ . \qquad (3.67)$$

There are a number of things to notice about the derivation of this expression:

- The antisymmetry of $R^\rho{}_{\sigma\mu\nu}$ in its last two indices is immediate from this formula and its derivation.
- Although the expression is assembled from non-tensorial elements (connection coefficients and partial derivatives), you can check that the transformation laws all work out to make this particular combination a legitimate tensor.
- The curvature tensor is constructed entirely from the connection; no mention of the metric was made. The expression is therefore true for any connection, whether or not it is metric compatible or torsion free.
- It is perhaps surprising that the commutator $[\nabla_\mu, \nabla_\nu]$, which appears to be a differential operator, acts on vector fields (in the absence of torsion) as a simple multiplicative transformation: the Riemann tensor measures the part of the commutator proportional to the vector field itself, while the torsion tensor measures the part proportional to its covariant derivative; second derivatives don't enter at all.

A useful notion is that of the commutator of two vector fields X and Y, which is a third vector field with components

$$[X, Y]^\mu = X^\lambda\partial_\lambda Y^\mu - Y^\lambda\partial_\lambda X^\mu\ . \qquad (3.69)$$

Both the torsion tensor and the Riemann tensor, thought of as multilinear maps, have elegant expressions in terms of the commutator. Thinking of the torsion as a map from two vector fields to a third vector field, we have

$$T(X, Y) = \nabla_X Y - \nabla_Y X - [X, Y]\ , \qquad (3.70)$$

and thinking of the Riemann tensor as a map from three vector fields to a fourth one, we have

$$R(X, Y)Z = \nabla_X\nabla_Y Z - \nabla_Y\nabla_X Z - \nabla_{[X,Y]}Z\ . \qquad (3.71)$$

In these expressions, the notation $\nabla_X$ refers to the covariant derivative along the vector field X; in components, $\nabla_X = X^\mu\nabla_\mu$. Note that the two vectors X and Y in (3.71) correspond to the two antisymmetric indices in the component form of the Riemann tensor. The last term in (3.71), involving the commutator [X, Y], vanishes when X and Y are taken to be the coordinate basis vector fields (since $[\partial_\mu, \partial_\nu] = 0$), which is why this term did not arise when we originally took the commutator of two covariant derivatives. We will not use this notation extensively, but you might see it in the literature, so you should be able to decode it.
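Because (3.67) is purely algebraic and differential in the connection coefficients, it is straightforward to implement symbolically. Here is a minimal SymPy sketch (my own; the function name is invented) that takes any connection, metric-compatible or not, and returns the curvature; as a check, the flat plane in polar coordinates has a nontrivial connection but identically vanishing curvature.

```python
import sympy as sp

def riemann_from_gamma(Gamma, coords):
    """R^rho_{sigma mu nu} from connection coefficients, eq. (3.67).
    Gamma[rho][mu][nu] holds Gamma^rho_{mu nu} as SymPy expressions."""
    n = len(coords)
    R = [[[[None]*n for _ in range(n)] for _ in range(n)] for _ in range(n)]
    for rho in range(n):
        for sig in range(n):
            for mu in range(n):
                for nu in range(n):
                    expr = (sp.diff(Gamma[rho][nu][sig], coords[mu])
                            - sp.diff(Gamma[rho][mu][sig], coords[nu])
                            + sum(Gamma[rho][mu][l]*Gamma[l][nu][sig]
                                  - Gamma[rho][nu][l]*Gamma[l][mu][sig]
                                  for l in range(n)))
                    R[rho][sig][mu][nu] = sp.simplify(expr)
    return R

# Demo: flat plane in polar coordinates (r, phi). The connection is
# nontrivial, but the curvature it produces is identically zero.
r, phi = sp.symbols('r phi', positive=True)
Z = sp.S.Zero
Gamma = [[[Z, Z], [Z, -r]],        # Gamma^r_{mu nu}
         [[Z, 1/r], [1/r, Z]]]     # Gamma^phi_{mu nu}
print(riemann_from_gamma(Gamma, [r, phi]))  # all zeros
```

Note that no metric appears anywhere in the function, echoing the remark above that the curvature is constructed entirely from the connection.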
Having defined the curvature tensor as something which characterizes the connection, let us now admit that in GR we are most concerned with the Christoffel connection. In this case the connection is derived from the metric, and the associated curvature may be thought of as that of the metric itself. This identification allows us to finally make sense of our informal notion that spaces for which the metric looks Euclidean or Minkowskian are flat. In fact it works both ways: if the components of the metric are constant in some coordinate system, the Riemann tensor will vanish, while if the Riemann tensor vanishes we can always construct a coordinate system in which the metric components are constant.

The first of these is easy to show. If we are in some coordinate system such that $\partial_\sigma g_{\mu\nu} = 0$ (everywhere, not just at a point), then $\Gamma^\rho_{\mu\nu} = 0$ and $\partial_\sigma\Gamma^\rho_{\mu\nu} = 0$; thus $R^\rho{}_{\sigma\mu\nu} = 0$ by (3.67). But this is a tensor equation, and if it is true in one coordinate system it must be true in any coordinate system. Therefore, the statement that the Riemann tensor vanishes is a necessary condition for it to be possible to find coordinates in which the components of $g_{\mu\nu}$ are constant everywhere.

It is also a sufficient condition, although we have to work harder to show it. Start by choosing Riemann normal coordinates at some point p, so that $g_{\mu\nu} = \eta_{\mu\nu}$ at p. (Here we are using $\eta_{\mu\nu}$ in a generalized sense, as a matrix with either +1 or -1 for each diagonal element and zeroes elsewhere. The actual arrangement of the +1's and -1's depends on the canonical form of the metric, but is irrelevant for the present argument.) Denote the basis vectors at p by $\hat e_{(\mu)}$, with components $\hat e^\sigma_{(\mu)}$. Then by construction we have

$$g\left(\hat e_{(\mu)}, \hat e_{(\nu)}\right)(p) = \eta_{\mu\nu}\ . \qquad (3.72)$$

Now let us parallel transport the entire set of basis vectors from p to another point q; the vanishing of the Riemann tensor ensures that the result will be independent of the path taken between p and q. Since parallel transport with respect to a metric compatible connection preserves inner products, we must have

$$g\left(\hat e_{(\mu)}, \hat e_{(\nu)}\right)(q) = \eta_{\mu\nu}\ . \qquad (3.73)$$

We therefore have specified a set of vector fields which everywhere define a basis in which the metric components are constant. This is completely unimpressive; it can be done on any manifold, regardless of what the curvature is. What we would like to show is that this is a coordinate basis (which can only be true if the curvature vanishes). We know that if the $\hat e_{(\mu)}$'s are a coordinate basis, their commutator will vanish:

$$\left[\hat e_{(\mu)}, \hat e_{(\nu)}\right] = 0\ . \qquad (3.74)$$

What we would really like is the converse: that if the commutator vanishes we can find coordinates $y^\mu$ such that $\hat e_{(\mu)} = \frac{\partial}{\partial y^\mu}$. In fact this is a true result, known as Frobenius's Theorem. It's something of a mess to prove, involving a good deal more mathematical apparatus than we have bothered to set up. Let's just take it for granted (skeptics can consult Schutz's Geometrical Methods book). Thus, we would like to demonstrate (3.74) for the vector fields we have set up. Let's use the expression (3.70) for the torsion:

$$\left[\hat e_{(\mu)}, \hat e_{(\nu)}\right] = \nabla_{\hat e_{(\mu)}}\hat e_{(\nu)} - \nabla_{\hat e_{(\nu)}}\hat e_{(\mu)} - T\left(\hat e_{(\mu)}, \hat e_{(\nu)}\right)\ . \qquad (3.75)$$

The torsion vanishes by hypothesis. The covariant derivatives will also vanish, given the method by which we constructed our vector fields; they were made by parallel transporting along arbitrary paths. If the fields are parallel transported along arbitrary paths, they are certainly parallel transported along the vectors $\hat e_{(\mu)}$, and therefore their covariant derivatives in the direction of these vectors will vanish. Thus (3.70) implies that the commutator vanishes, and therefore that we can find a coordinate system $y^\mu$ for which these vector fields are the partial derivatives. In this coordinate system the metric will have components $\eta_{\mu\nu}$, as desired.
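Two of the claims just used -- that metric-compatible parallel transport preserves inner products, and that transport around a loop in a curved space rotates a vector (the holonomy mentioned earlier) -- can be seen numerically. The following hedged sketch (mine, not from the notes) transports a vector around a circle of constant $\theta = \theta_0$ on the unit two-sphere, using the standard sphere Christoffel symbols (the notes compute them below, eq. (3.101)); the norm stays fixed while the vector comes back rotated by $2\pi\cos\theta_0$.

```python
import numpy as np

th0 = 0.7                                   # fixed colatitude of the loop
G_th_phph = -np.sin(th0) * np.cos(th0)      # Gamma^theta_{phi phi}, unit sphere
G_ph_thph = 1.0 / np.tan(th0)               # Gamma^phi_{theta phi}

# transport equation dV^mu/dphi = -Gamma^mu_{phi rho} V^rho along theta = th0
A = np.array([[0.0, -G_th_phph],
              [-G_ph_thph, 0.0]])
g = np.diag([1.0, np.sin(th0)**2])          # metric restricted to the loop

V = np.array([1.0, 0.0])
steps = 20000
dphi = 2 * np.pi / steps
for _ in range(steps):
    # RK4 step for the linear system V' = A V
    k1 = A @ V; k2 = A @ (V + dphi*k1/2); k3 = A @ (V + dphi*k2/2); k4 = A @ (V + dphi*k3)
    V = V + dphi * (k1 + 2*k2 + 2*k3 + k4) / 6

print(V @ g @ V)                            # stays 1: inner product preserved
angle = 2 * np.pi * np.cos(th0)             # holonomy rotation angle for this loop
print(np.cos(angle), V[0])                  # the two numbers agree
```

The agreement of the last two printed numbers reflects the standard result that parallel transport once around this latitude circle rotates the vector by $2\pi\cos\theta_0$ in the orthonormal frame.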
The Riemann tensor, with four indices, naively has $n^4$ independent components in an n-dimensional space. In fact the antisymmetry property (3.64) means that there are only n(n - 1)/2 independent values these last two indices can take on, leaving us with $n^3(n-1)/2$ independent components. When we consider the Christoffel connection, however, there are a number of other symmetries that reduce the independent components further. Let's consider these now.

The simplest way to derive these additional symmetries is to examine the Riemann tensor with all lower indices,

$$R_{\rho\sigma\mu\nu} = g_{\rho\lambda}\, R^\lambda{}_{\sigma\mu\nu}\ . \qquad (3.76)$$

Let us further consider the components of this tensor in Riemann normal coordinates established at a point p. Then the Christoffel symbols themselves will vanish, although their derivatives will not. We therefore have

$$R_{\rho\sigma\mu\nu} = g_{\rho\lambda}\left(\partial_\mu\Gamma^\lambda_{\nu\sigma} - \partial_\nu\Gamma^\lambda_{\mu\sigma}\right)$$
$$= \frac{1}{2}\, g_{\rho\lambda}\, g^{\lambda\tau}\left(\partial_\mu\partial_\nu g_{\sigma\tau} + \partial_\mu\partial_\sigma g_{\tau\nu} - \partial_\mu\partial_\tau g_{\nu\sigma} - \partial_\nu\partial_\mu g_{\sigma\tau} - \partial_\nu\partial_\sigma g_{\tau\mu} + \partial_\nu\partial_\tau g_{\mu\sigma}\right)$$
$$= \frac{1}{2}\left(\partial_\mu\partial_\sigma g_{\rho\nu} - \partial_\mu\partial_\rho g_{\nu\sigma} - \partial_\nu\partial_\sigma g_{\rho\mu} + \partial_\nu\partial_\rho g_{\mu\sigma}\right)\ . \qquad (3.77)$$

In the second line we have used $\partial_\mu g_{\lambda\tau} = 0$ in RNC's, and in the third line the fact that partials commute. From this expression we can notice immediately two properties of $R_{\rho\sigma\mu\nu}$; it is antisymmetric in its first two indices,

$$R_{\rho\sigma\mu\nu} = -R_{\sigma\rho\mu\nu}\ , \qquad (3.78)$$

and it is invariant under interchange of the first pair of indices with the second:

$$R_{\rho\sigma\mu\nu} = R_{\mu\nu\rho\sigma}\ . \qquad (3.79)$$

With a little more work, which we leave to your imagination, we can see that the sum of cyclic permutations of the last three indices vanishes:

$$R_{\rho\sigma\mu\nu} + R_{\rho\mu\nu\sigma} + R_{\rho\nu\sigma\mu} = 0\ . \qquad (3.80)$$

This last property is equivalent to the vanishing of the antisymmetric part of the last three indices:

$$R_{\rho[\sigma\mu\nu]} = 0\ . \qquad (3.81)$$

All of these properties have been derived in a special coordinate system, but they are all tensor equations; therefore they will be true in any coordinates. Not all of them are independent; with some effort, you can show that (3.64), (3.78) and (3.81) together imply (3.79). The logical interdependence of the equations is usually less important than the simple fact that they are true.

Given these relationships between the different components of the Riemann tensor, how many independent quantities remain? Let's begin with the facts that $R_{\rho\sigma\mu\nu}$ is antisymmetric in the first two indices, antisymmetric in the last two indices, and symmetric under interchange of these two pairs. This means that we can think of it as a symmetric matrix $R_{[\rho\sigma][\mu\nu]}$, where the pairs $\rho\sigma$ and $\mu\nu$ are thought of as individual indices. An m × m symmetric matrix has m(m + 1)/2 independent components, while an n × n antisymmetric matrix has n(n - 1)/2 independent components. We therefore have

$$\frac{1}{2}\left[\frac{1}{2}n(n-1)\right]\left[\frac{1}{2}n(n-1) + 1\right] = \frac{1}{8}\left(n^4 - 2n^3 + 3n^2 - 2n\right) \qquad (3.82)$$

independent components. We still have to deal with the additional symmetry (3.81). An immediate consequence of (3.81) is that the totally antisymmetric part of the Riemann tensor vanishes,

$$R_{[\rho\sigma\mu\nu]} = 0\ . \qquad (3.83)$$

In fact, this equation plus the other symmetries (3.64), (3.78) and (3.79) are enough to imply (3.81), as can be easily shown by expanding (3.83) and messing with the resulting terms. Therefore imposing the additional constraint of (3.83) is equivalent to imposing (3.81), once the other symmetries have been accounted for. How many independent restrictions does this represent? Let us imagine decomposing

$$X_{\rho\sigma\mu\nu} = X_{[\rho\sigma\mu\nu]} + \widetilde{X}_{\rho\sigma\mu\nu}\ . \qquad (3.84)$$

It is easy to see that any totally antisymmetric 4-index tensor is automatically antisymmetric in its first two and last two indices, and symmetric under interchange of the two pairs. Therefore these properties are independent restrictions on $X_{\rho\sigma\mu\nu}$, unrelated to the requirement (3.83). Now a totally antisymmetric 4-index tensor has n(n - 1)(n - 2)(n - 3)/4! terms, and therefore (3.83) reduces the number of independent components by this amount. We are left with

$$\frac{1}{8}\left(n^4 - 2n^3 + 3n^2 - 2n\right) - \frac{1}{24}\, n(n-1)(n-2)(n-3) = \frac{1}{12}\, n^2(n^2 - 1) \qquad (3.85)$$

independent components of the Riemann tensor. In four dimensions, therefore, the Riemann tensor has 20 independent components. (In one dimension it has none.) These twenty functions are precisely the 20 degrees of freedom in the second derivatives of the metric which we could not set to zero by a clever choice of coordinates. This should reinforce your confidence that the Riemann tensor is an appropriate measure of curvature.
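The counting can also be verified by brute force: impose the symmetries as linear constraints on the $n^4$ components and compute the dimension of the solution space by rank-nullity. A small hedged sketch (mine; the function name is invented):

```python
import numpy as np
from itertools import product

def riemann_dof(n):
    """Count solutions of the linear constraints (3.64), (3.78),
    (3.79) and (3.80) on a 4-index object, by rank-nullity."""
    idx = {t: k for k, t in enumerate(product(range(n), repeat=4))}
    rows = []
    for (r, s, m, v) in product(range(n), repeat=4):
        relations = [
            [((r, s, m, v), 1), ((r, s, v, m), 1)],                     # (3.64)
            [((r, s, m, v), 1), ((s, r, m, v), 1)],                     # (3.78)
            [((r, s, m, v), 1), ((m, v, r, s), -1)],                    # (3.79)
            [((r, s, m, v), 1), ((r, m, v, s), 1), ((r, v, s, m), 1)],  # (3.80)
        ]
        for rel in relations:
            row = np.zeros(n**4)
            for t, c in rel:
                row[idx[t]] += c
            rows.append(row)
    return n**4 - np.linalg.matrix_rank(np.array(rows))

for n in range(1, 5):
    print(n, riemann_dof(n), n**2 * (n**2 - 1) // 12)
# prints: 1 0 0 / 2 1 1 / 3 6 6 / 4 20 20
```

Both columns agree with the closed form (3.85) for each dimension, including the 20 components in four dimensions.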
In addition to the algebraic symmetries of the Riemann tensor (which constrain the number of independent components at any point), there is a differential identity which it obeys (which constrains its relative values at different points). Consider the covariant derivative of the Riemann tensor, evaluated in Riemann normal coordinates:

$$\nabla_\lambda R_{\rho\sigma\mu\nu} = \partial_\lambda R_{\rho\sigma\mu\nu} = \frac{1}{2}\,\partial_\lambda\left(\partial_\mu\partial_\sigma g_{\rho\nu} - \partial_\mu\partial_\rho g_{\nu\sigma} - \partial_\nu\partial_\sigma g_{\rho\mu} + \partial_\nu\partial_\rho g_{\mu\sigma}\right)\ . \qquad (3.86)$$

We would like to consider the sum of cyclic permutations of the first three indices:

$$\nabla_\lambda R_{\rho\sigma\mu\nu} + \nabla_\rho R_{\sigma\lambda\mu\nu} + \nabla_\sigma R_{\lambda\rho\mu\nu} = 0\ . \qquad (3.87)$$

Once again, since this is an equation between tensors it is true in any coordinate system, even though we derived it in a particular one. We recognize by now that the antisymmetry $R_{\rho\sigma\mu\nu} = -R_{\sigma\rho\mu\nu}$ allows us to write this result as

$$\nabla_{[\lambda}R_{\rho\sigma]\mu\nu} = 0\ . \qquad (3.88)$$

This is known as the Bianchi identity. (Notice that for a general connection there would be additional terms involving the torsion tensor.) It is closely related to the Jacobi identity, since (as you can show) it basically expresses

$$\left[\left[\nabla_\lambda, \nabla_\rho\right], \nabla_\sigma\right] + \left[\left[\nabla_\rho, \nabla_\sigma\right], \nabla_\lambda\right] + \left[\left[\nabla_\sigma, \nabla_\lambda\right], \nabla_\rho\right] = 0\ . \qquad (3.89)$$

It is frequently useful to consider contractions of the Riemann tensor. Even without the metric, we can form a contraction known as the Ricci tensor:

$$R_{\mu\nu} = R^\lambda{}_{\mu\lambda\nu}\ . \qquad (3.90)$$

Notice that, for the curvature tensor formed from an arbitrary (not necessarily Christoffel) connection, there are a number of independent contractions to take. Our primary concern is with the Christoffel connection, for which (3.90) is the only independent contraction (modulo conventions for the sign, which of course change from place to place). The Ricci tensor associated with the Christoffel connection is symmetric,

$$R_{\mu\nu} = R_{\nu\mu}\ , \qquad (3.91)$$

as a consequence of the various symmetries of the Riemann tensor. Using the metric, we can take a further contraction to form the Ricci scalar:

$$R = R^\mu{}_\mu = g^{\mu\nu}R_{\mu\nu}\ . \qquad (3.92)$$

An especially useful form of the Bianchi identity comes from contracting twice on (3.87):

$$0 = g^{\nu\sigma}g^{\mu\lambda}\left(\nabla_\lambda R_{\rho\sigma\mu\nu} + \nabla_\rho R_{\sigma\lambda\mu\nu} + \nabla_\sigma R_{\lambda\rho\mu\nu}\right)$$
$$= \nabla^\mu R_{\rho\mu} - \nabla_\rho R + \nabla^\nu R_{\rho\nu}\ , \qquad (3.93)$$

or

$$\nabla^\mu R_{\rho\mu} = \frac{1}{2}\,\nabla_\rho R\ . \qquad (3.94)$$

(Notice that, unlike the partial derivative, it makes sense to raise an index on the covariant derivative, due to metric compatibility.) If we define the Einstein tensor as

$$G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2}\, R\, g_{\mu\nu}\ , \qquad (3.95)$$

then we see that the twice-contracted Bianchi identity (3.94) is equivalent to

$$\nabla^\mu G_{\mu\nu} = 0\ . \qquad (3.96)$$

The Einstein tensor, which is symmetric due to the symmetry of the Ricci tensor and the metric, will be of great importance in general relativity.
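The twice-contracted Bianchi identity (3.94) can be checked symbolically for any particular metric. Here is a hedged SymPy sketch (mine, not from the notes; helper names invented) for a two-dimensional surface of revolution with non-constant curvature, chosen so that both sides of (3.94) are genuinely nonzero functions that agree:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
coords = [th, ph]
g = sp.diag(1, (1 + th**2)**2)   # a surface with non-constant curvature
ginv = g.inv(); n = 2

# Christoffel symbols Gamma^l_{mu nu} from the metric
Gamma = [[[sp.simplify(sum(ginv[l, t]*(sp.diff(g[nu, t], coords[mu])
                                       + sp.diff(g[t, mu], coords[nu])
                                       - sp.diff(g[mu, nu], coords[t]))
                           for t in range(n))/2)
           for nu in range(n)] for mu in range(n)] for l in range(n)]

def Riem(r, s, m, v):   # R^r_{s m v} via eq. (3.67)
    e = sp.diff(Gamma[r][v][s], coords[m]) - sp.diff(Gamma[r][m][s], coords[v])
    e += sum(Gamma[r][m][l]*Gamma[l][v][s] - Gamma[r][v][l]*Gamma[l][m][s]
             for l in range(n))
    return e

Ric = [[sp.simplify(sum(Riem(l, m, l, v) for l in range(n)))
        for v in range(n)] for m in range(n)]                 # eq. (3.90)
Rsc = sp.simplify(sum(ginv[m, v]*Ric[m][v]
                      for m in range(n) for v in range(n)))   # eq. (3.92)

def div_ric(rho):   # nabla^mu R_{rho mu}
    out = 0
    for m in range(n):
        for v in range(n):
            covd = (sp.diff(Ric[rho][m], coords[v])
                    - sum(Gamma[l][v][rho]*Ric[l][m] + Gamma[l][v][m]*Ric[rho][l]
                          for l in range(n)))
            out += ginv[v, m]*covd
    return sp.simplify(out)

for rho in range(n):
    print(sp.simplify(div_ric(rho) - sp.diff(Rsc, coords[rho])/2))  # 0 and 0, eq. (3.94)
```

The same machinery immediately gives the Einstein tensor via (3.95), whose covariant divergence then vanishes by construction, as in (3.96).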
The Ricci tensor and the Ricci scalar contain information about "traces" of the Riemann tensor. It is sometimes useful to consider separately those pieces of the Riemann tensor which the Ricci tensor doesn't tell us about. We therefore invent the Weyl tensor, which is basically the Riemann tensor with all of its contractions removed. It is given in n dimensions by

$$C_{\rho\sigma\mu\nu} = R_{\rho\sigma\mu\nu} - \frac{2}{n-2}\left(g_{\rho[\mu}R_{\nu]\sigma} - g_{\sigma[\mu}R_{\nu]\rho}\right) + \frac{2}{(n-1)(n-2)}\, g_{\rho[\mu}g_{\nu]\sigma}\, R\ . \qquad (3.97)$$

This messy formula is designed so that all possible contractions of $C_{\rho\sigma\mu\nu}$ vanish, while it retains the symmetries of the Riemann tensor:

$$C_{\rho\sigma\mu\nu} = C_{[\rho\sigma][\mu\nu]}\ ,$$
$$C_{\rho\sigma\mu\nu} = C_{\mu\nu\rho\sigma}\ ,$$
$$C_{\rho[\sigma\mu\nu]} = 0\ . \qquad (3.98)$$

The Weyl tensor is only defined in three or more dimensions, and in three dimensions it vanishes identically. For $n \geq 4$ it satisfies a version of the Bianchi identity,

$$\nabla^\rho C_{\rho\sigma\mu\nu} = 2\,\frac{n-3}{n-2}\left(\nabla_{[\mu}R_{\nu]\sigma} + \frac{1}{2(n-1)}\, g_{\sigma[\mu}\nabla_{\nu]}R\right)\ . \qquad (3.99)$$

One of the most important properties of the Weyl tensor is that it is invariant under conformal transformations. This means that if you compute $C^\rho{}_{\sigma\mu\nu}$ for some metric $g_{\mu\nu}$, and then compute it again for a metric given by $\Omega^2(x)\, g_{\mu\nu}$, where $\Omega(x)$ is an arbitrary nonvanishing function of spacetime, you get the same answer. For this reason it is often known as the "conformal tensor."

After this large amount of formalism, it might be time to step back and think about what curvature means for some simple examples. First notice that, according to (3.85), in 1, 2, 3 and 4 dimensions there are 0, 1, 6 and 20 components of the curvature tensor, respectively. (Everything we say about the curvature in these examples refers to the curvature associated with the Christoffel connection, and therefore the metric.) This means that one-dimensional manifolds (such as $S^1$) are never curved; the intuition you have that tells you that a circle is curved comes from thinking of it embedded in a certain flat two-dimensional plane. (There is something called "extrinsic curvature," which characterizes the way something is embedded in a higher dimensional space. Our notion of curvature is "intrinsic," and has nothing to do with such embeddings.)

The distinction between intrinsic and extrinsic curvature is also important in two dimensions, where the curvature has one independent component. (In fact, all of the information about the curvature is contained in the single component of the Ricci scalar.) Consider a cylinder, $\mathbf{R} \times S^1$.

[Figure 3.9]

Although this looks curved from our point of view, it should be clear that we can put a metric on the cylinder whose components are constant in an appropriate coordinate system -- simply unroll it and use the induced metric from the plane. In this metric, the cylinder is flat. (There is also nothing to stop us from introducing a different metric in which the cylinder is not flat, but the point we are trying to emphasize is that it can be made flat in some metric.) The same story holds for the torus:

[Figure 3.10]

We can think of the torus as a square region of the plane with opposite sides identified (in other words, $S^1 \times S^1$), from which it is clear that it can have a flat metric even though it looks curved from the embedded point of view.

A cone is an example of a two-dimensional manifold with nonzero curvature at exactly one point. We can see this also by unrolling it; the cone is equivalent to the plane with a "deficit angle" removed and opposite sides identified:

[Figure 3.11]

In the metric inherited from this description as part of the flat plane, the cone is flat everywhere but at its vertex. This can be seen by considering parallel transport of a vector around various loops; if a loop does not enclose the vertex, there will be no overall transformation, whereas a loop that does enclose the vertex (say, just one time) will lead to a rotation by an angle which is just the deficit angle.

[Figure 3.12]

Our favorite example is of course the two-sphere, with metric

$$ds^2 = a^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)\ , \qquad (3.100)$$

where a is the radius of the sphere (thought of as embedded in $\mathbf{R}^3$).
Without going through the details, the nonzero connection coefficients are

$$\Gamma^\theta_{\phi\phi} = -\sin\theta\,\cos\theta\ ,$$
$$\Gamma^\phi_{\theta\phi} = \Gamma^\phi_{\phi\theta} = \cot\theta\ . \qquad (3.101)$$

Let's compute a promising component of the Riemann tensor:

$$R^\theta{}_{\phi\theta\phi} = \partial_\theta\Gamma^\theta_{\phi\phi} - \partial_\phi\Gamma^\theta_{\theta\phi} + \Gamma^\theta_{\theta\lambda}\Gamma^\lambda_{\phi\phi} - \Gamma^\theta_{\phi\lambda}\Gamma^\lambda_{\theta\phi}$$
$$= \left(\sin^2\theta - \cos^2\theta\right) - (0) + (0) - \left(-\sin\theta\cos\theta\right)\left(\cot\theta\right)$$
$$= \sin^2\theta\ . \qquad (3.102)$$

(The notation is obviously imperfect, since the Greek letter $\lambda$ is a dummy index which is summed over, while the Greek letters $\theta$ and $\phi$ represent specific coordinates.) Lowering an index, we have

$$R_{\theta\phi\theta\phi} = g_{\theta\lambda}\, R^\lambda{}_{\phi\theta\phi} = g_{\theta\theta}\, R^\theta{}_{\phi\theta\phi} = a^2\sin^2\theta\ . \qquad (3.103)$$

It is easy to check that all of the components of the Riemann tensor either vanish or are related to this one by symmetry. We can go on to compute the Ricci tensor via $R_{\mu\nu} = g^{\alpha\beta}R_{\alpha\mu\beta\nu}$. We obtain

$$R_{\theta\theta} = 1\ ,$$
$$R_{\theta\phi} = R_{\phi\theta} = 0\ ,$$
$$R_{\phi\phi} = \sin^2\theta\ . \qquad (3.104)$$

The Ricci scalar is similarly straightforward:

$$R = g^{\theta\theta}R_{\theta\theta} + g^{\phi\phi}R_{\phi\phi} = \frac{2}{a^2}\ . \qquad (3.105)$$

Therefore the Ricci scalar, which for a two-dimensional manifold completely characterizes the curvature, is a constant over this two-sphere. This is a reflection of the fact that the manifold is "maximally symmetric," a concept we will define more precisely later (although it means what you think it should). In any number of dimensions the curvature of a maximally symmetric space satisfies (for some constant a)

$$R_{\rho\sigma\mu\nu} = a^{-2}\left(g_{\rho\mu}g_{\sigma\nu} - g_{\rho\nu}g_{\sigma\mu}\right)\ , \qquad (3.106)$$

which you may check is satisfied by this example.

Notice that the Ricci scalar is not only constant for the two-sphere, it is manifestly positive. We say that the sphere is "positively curved" (of course a convention or two came into play, but fortunately our conventions conspired so that spaces which everyone agrees to call positively curved actually have a positive Ricci scalar). From the point of view of someone living on a manifold which is embedded in a higher-dimensional Euclidean space, if they are sitting at a point of positive curvature the space curves away from them in the same way in any direction, while in a negatively curved space it curves away in opposite directions. Negatively curved spaces are therefore saddle-like.

[Figure 3.13]

Enough fun with examples.
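All of the two-sphere results just quoted -- (3.101), (3.102), (3.104) and (3.105) -- can be reproduced mechanically. A hedged SymPy sketch (mine, with invented helper names):

```python
import sympy as sp

a_, th, ph = sp.symbols('a theta phi', positive=True)
coords = [th, ph]
g = sp.diag(a_**2, a_**2 * sp.sin(th)**2)   # the two-sphere metric (3.100)
ginv = g.inv(); n = 2

# Christoffel symbols; only Gamma^th_{ph ph} and Gamma^ph_{th ph} survive
Gamma = [[[sp.simplify(sum(ginv[l, t]*(sp.diff(g[nu, t], coords[mu])
                                       + sp.diff(g[t, mu], coords[nu])
                                       - sp.diff(g[mu, nu], coords[t]))
                           for t in range(n))/2)
           for nu in range(n)] for mu in range(n)] for l in range(n)]

def Riem(r, s, m, v):   # R^r_{s m v} via eq. (3.67)
    e = sp.diff(Gamma[r][v][s], coords[m]) - sp.diff(Gamma[r][m][s], coords[v])
    e += sum(Gamma[r][m][l]*Gamma[l][v][s] - Gamma[r][v][l]*Gamma[l][m][s]
             for l in range(n))
    return sp.simplify(e)

print(Riem(0, 1, 0, 1))            # sin(theta)**2, eq. (3.102)
Ric = [[sp.simplify(sum(Riem(l, m, l, v) for l in range(n)))
        for v in range(n)] for m in range(n)]
print(Ric)                         # [[1, 0], [0, sin(theta)**2]], eq. (3.104)
Rsc = sp.simplify(sum(ginv[m, v]*Ric[m][v] for m in range(n) for v in range(n)))
print(Rsc)                         # 2/a**2, eq. (3.105)
```

Note how the radius a drops out of the Ricci tensor components in these coordinates but reappears in the Ricci scalar, exactly as in the text.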
There is one more topic we have to cover before introducing general relativity itself: geodesic deviation. You have undoubtedly heard that the defining property of Euclidean (flat) geometry is the parallel postulate: initially parallel lines remain parallel forever. Of course in a curved space this is not true; on a sphere, certainly, initially parallel geodesics will eventually cross. We would like to quantify this behavior for an arbitrary curved space. The problem is that the notion of "parallel" does not extend naturally from flat to curved spaces. Instead what we will do is to construct a one-parameter family of geodesics, $\gamma_s(t)$. That is, for each $s \in \mathbf{R}$, $\gamma_s$ is a geodesic parameterized by the affine parameter t. The collection of these curves defines a smooth two-dimensional surface (embedded in a manifold M of arbitrary dimensionality). The coordinates on this surface may be chosen to be s and t, provided we have chosen a family of geodesics which do not cross. The entire surface is the set of points $x^\mu(s, t) \in M$. We have two natural vector fields: the tangent vectors to the geodesics,

$$T^\mu = \frac{\partial x^\mu}{\partial t}\ , \qquad (3.107)$$

and the "deviation vectors"

$$S^\mu = \frac{\partial x^\mu}{\partial s}\ . \qquad (3.108)$$

This name derives from the informal notion that $S^\mu$ points from one geodesic towards the neighboring ones.

[Figure 3.14]

The idea that $S^\mu$ points from one geodesic to the next inspires us to define the "relative velocity of geodesics,"

$$V^\mu = (\nabla_T S)^\mu = T^\rho\nabla_\rho S^\mu\ , \qquad (3.109)$$

and the "relative acceleration of geodesics,"

$$a^\mu = (\nabla_T V)^\mu = T^\rho\nabla_\rho V^\mu\ . \qquad (3.110)$$

You should take the names with a grain of salt, but these vectors are certainly well-defined.

Since S and T are basis vectors adapted to a coordinate system, their commutator vanishes: [S, T] = 0. We would like to consider the conventional case where the torsion vanishes, so from (3.70) we then have

$$S^\rho\nabla_\rho T^\mu = T^\rho\nabla_\rho S^\mu\ . \qquad (3.111)$$

With this in mind, let's compute the acceleration:

$$a^\mu = T^\rho\nabla_\rho\left(T^\sigma\nabla_\sigma S^\mu\right)$$
$$= T^\rho\nabla_\rho\left(S^\sigma\nabla_\sigma T^\mu\right)$$
$$= \left(T^\rho\nabla_\rho S^\sigma\right)\left(\nabla_\sigma T^\mu\right) + T^\rho S^\sigma\,\nabla_\rho\nabla_\sigma T^\mu$$
$$= \left(S^\rho\nabla_\rho T^\sigma\right)\left(\nabla_\sigma T^\mu\right) + T^\rho S^\sigma\left(\nabla_\sigma\nabla_\rho T^\mu + R^\mu{}_{\nu\rho\sigma}T^\nu\right)$$
$$= \left(S^\rho\nabla_\rho T^\sigma\right)\left(\nabla_\sigma T^\mu\right) + S^\sigma\nabla_\sigma\left(T^\rho\nabla_\rho T^\mu\right) - \left(S^\sigma\nabla_\sigma T^\rho\right)\left(\nabla_\rho T^\mu\right) + R^\mu{}_{\nu\rho\sigma}T^\nu T^\rho S^\sigma$$
$$= R^\mu{}_{\nu\rho\sigma}T^\nu T^\rho S^\sigma\ . \qquad (3.112)$$

Let's think about this line by line. The first line is the definition of $a^\mu$, and the second line comes directly from (3.111). The third line is simply the Leibniz rule. The fourth line replaces a double covariant derivative by the derivatives in the opposite order plus the Riemann tensor. In the fifth line we use Leibniz again (in the opposite order from usual), and then we cancel two identical terms and notice that the term involving $T^\rho\nabla_\rho T^\mu$ vanishes because $T^\mu$ is the tangent vector to a geodesic. The result,

$$a^\mu = \frac{D^2}{dt^2}S^\mu = R^\mu{}_{\nu\rho\sigma}\, T^\nu\, T^\rho\, S^\sigma\ , \qquad (3.113)$$

is known as the geodesic deviation equation. It expresses something that we might have expected: the relative acceleration between two neighboring geodesics is proportional to the curvature. Physically, of course, the acceleration of neighboring geodesics is interpreted as a manifestation of gravitational tidal forces. This reminds us that we are very close to doing physics by now.

There is one last piece of formalism which it would be nice to cover before we move on to gravitation proper. What we will do is to consider once again (although much more concisely) the formalism of connections and curvature, but this time we will use sets of basis vectors in the tangent space which are not derived from any coordinate system. It will turn out that this slight change in emphasis reveals a different point of view on the connection and curvature, one in which the relationship to gauge theories in particle physics is much more transparent. In fact the concepts to be introduced are very straightforward, but the subject is a notational nightmare, so it looks more difficult than it really is.

Up until now we have been taking advantage of the fact that a natural basis for the tangent space $T_p$ at a point p is given by the partial derivatives with respect to the coordinates at that point, $\hat e_{(\mu)} = \partial_\mu$. Similarly, a basis for the cotangent space $T^*_p$ is given by the gradients of the coordinate functions, $\hat\theta^{(\mu)} = dx^\mu$. There is nothing to stop us, however, from setting up any bases we like. Let us therefore imagine that at each point in the manifold we introduce a set of basis vectors $\hat e_{(a)}$ (indexed by a Latin letter rather than Greek, to remind us that they are not related to any coordinate system). We will choose these basis vectors to be "orthonormal", in a sense which is appropriate to the signature of the manifold we are working on. That is, if the canonical form of the metric is written $\eta_{ab}$, we demand that the inner product of our basis vectors be

$$g\left(\hat e_{(a)}, \hat e_{(b)}\right) = \eta_{ab}\ , \qquad (3.114)$$

where g( , ) is the usual metric tensor. Thus, in a Lorentzian spacetime $\eta_{ab}$ represents the Minkowski metric, while in a space with positive-definite metric it would represent the Euclidean metric.
The set of vectors comprising an orthonormal basis is sometimes known as a tetrad (from Greek tetras, "a group of four") or vielbein (from the German for "many legs"). In different numbers of dimensions it occasionally becomes a vierbein (four), dreibein (three), zweibein (two), and so on. (Just as we cannot in general find coordinate charts which cover the entire manifold, we will often not be able to find a single set of smooth basis vector fields which are defined everywhere. As usual, we can overcome this problem by working in different patches and making sure things are well-behaved on the overlaps.)

The point of having a basis is that any vector can be expressed as a linear combination of basis vectors. Specifically, we can express our old basis vectors $\hat e_{(\mu)} = \partial_\mu$ in terms of the new ones:

$$\hat e_{(\mu)} = e^a{}_\mu\, \hat e_{(a)}\ . \qquad (3.115)$$

The components $e^a{}_\mu$ form an n × n invertible matrix. (In accord with our usual practice of blurring the distinction between objects and their components, we will refer to the $e^a{}_\mu$ as the tetrad or vielbein, and often in the plural as "vielbeins.") We denote their inverse by switching indices to obtain $e^\mu{}_a$, which satisfy

$$e^\mu{}_a\, e^a{}_\nu = \delta^\mu_\nu\ , \qquad e^a{}_\mu\, e^\mu{}_b = \delta^a_b\ . \qquad (3.116)$$

These serve as the components of the vectors $\hat e_{(a)}$ in the coordinate basis:

$$\hat e_{(a)} = e^\mu{}_a\, \hat e_{(\mu)}\ . \qquad (3.117)$$

In terms of the inverse vielbeins, (3.114) becomes

$$g_{\mu\nu}\, e^\mu{}_a\, e^\nu{}_b = \eta_{ab}\ , \qquad (3.118)$$

or equivalently

$$g_{\mu\nu} = e^a{}_\mu\, e^b{}_\nu\, \eta_{ab}\ . \qquad (3.119)$$

This last equation sometimes leads people to say that the vielbeins are the "square root" of the metric.

We can similarly set up an orthonormal basis of one-forms in $T^*_p$, which we denote $\hat\theta^{(a)}$. They may be chosen to be compatible with the basis vectors, in the sense that

$$\hat\theta^{(a)}\left(\hat e_{(b)}\right) = \delta^a_b\ . \qquad (3.120)$$

It is an immediate consequence of this that the orthonormal one-forms are related to their coordinate-based cousins $\hat\theta^{(\mu)} = dx^\mu$ by

$$\hat\theta^{(\mu)} = e^\mu{}_a\, \hat\theta^{(a)} \qquad (3.121)$$

and

$$\hat\theta^{(a)} = e^a{}_\mu\, \hat\theta^{(\mu)}\ . \qquad (3.122)$$

The vielbeins $e^a{}_\mu$ thus serve double duty as the components of the coordinate basis vectors in terms of the orthonormal basis vectors, and as components of the orthonormal basis one-forms in terms of the coordinate basis one-forms; while the inverse vielbeins serve as the components of the orthonormal basis vectors in terms of the coordinate basis, and as components of the coordinate basis one-forms in terms of the orthonormal basis.

Any other vector can be expressed in terms of its components in the orthonormal basis. If a vector V is written in the coordinate basis as $V^\mu\hat e_{(\mu)}$ and in the orthonormal basis as $V^a\hat e_{(a)}$, the sets of components will be related by

$$V^a = e^a{}_\mu\, V^\mu\ . \qquad (3.123)$$

So the vielbeins allow us to "switch from Latin to Greek indices and back." The nice property of tensors, that there is usually only one sensible thing to do based on index placement, is of great help here. We can go on to refer to multi-index tensors in either basis, or even in terms of mixed components:

$$V^a{}_b = e^a{}_\mu\, e^\nu{}_b\, V^\mu{}_\nu = e^a{}_\mu\, V^\mu{}_b = e^\nu{}_b\, V^a{}_\nu\ . \qquad (3.124)$$

Looking back at (3.118), we see that the components of the metric tensor in the orthonormal basis are just those of the flat metric, $\eta_{ab}$. (For this reason the Greek indices are sometimes referred to as "curved" and the Latin ones as "flat.") In fact we can go so far as to raise and lower the Latin indices using the flat metric and its inverse $\eta^{ab}$.
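The "square root" property (3.119) is easy to see in a concrete case. For the two-sphere metric (3.100), an obvious orthonormal choice is $e^{\hat 1}{}_\theta = a$ and $e^{\hat 2}{}_\phi = a\sin\theta$; the following sketch (my own illustration, not from the notes) reassembles the metric from the vielbein and checks (3.118) with the inverse vielbein.

```python
import sympy as sp

a_, th = sp.symbols('a theta', positive=True)

# vielbein e^a_mu for the two-sphere metric (3.100); rows = Latin a, cols = Greek mu
e = sp.Matrix([[a_, 0],
               [0, a_ * sp.sin(th)]])
eta = sp.eye(2)                 # Euclidean-signature canonical form

g = e.T * eta * e               # eq. (3.119): g_{mu nu} = e^a_mu e^b_nu eta_ab
print(g)                        # diag(a**2, a**2*sin(theta)**2)

e_inv = e.inv()                 # inverse vielbein e^mu_a
print(sp.simplify(e_inv.T * g * e_inv))  # eq. (3.118): recovers eta
```

The diagonal form of the vielbein here is a convenience of these coordinates, not a general feature; any local rotation of it, in the sense of the transformations discussed next, works equally well.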
You can check for yourself that everything works okay (e.g., that lowering an index with the metric commutes with changing from orthonormal to coordinate bases).

By introducing a new set of basis vectors and one-forms, we necessitate a return to our favorite topic of transformation properties. We've been careful all along to emphasize that the tensor transformation law was only an indirect outcome of a coordinate transformation; the real issue was a change of basis. Now that we have non-coordinate bases, these bases can be changed independently of the coordinates. The only restriction is that the orthonormality property (3.114) be preserved. But we know what kind of transformations preserve the flat metric -- in a Euclidean signature metric they are orthogonal transformations, while in a Lorentzian signature metric they are Lorentz transformations. We therefore consider changes of basis of the form

$$\hat e_{(a)} \rightarrow \hat e_{(a')} = \Lambda^a{}_{a'}(x)\, \hat e_{(a)}\ , \qquad (3.125)$$

where the matrices $\Lambda^a{}_{a'}(x)$ represent position-dependent transformations which (at each point) leave the canonical form of the metric unaltered:

$$\Lambda^a{}_{a'}\,\Lambda^b{}_{b'}\,\eta_{ab} = \eta_{a'b'}\ . \qquad (3.126)$$

In fact these matrices correspond to what in flat space we called the inverse Lorentz transformations (which operate on basis vectors); as before we also have ordinary Lorentz transformations $\Lambda^{a'}{}_a$, which transform the basis one-forms. As far as components are concerned, as before we transform upper indices with $\Lambda^{a'}{}_a$ and lower indices with $\Lambda^a{}_{a'}$.

So we now have the freedom to perform a Lorentz transformation (or an ordinary Euclidean rotation, depending on the signature) at every point in space. These transformations are therefore called local Lorentz transformations, or LLT's. We still have our usual freedom to make changes in coordinates, which are called general coordinate transformations, or GCT's. Both can happen at the same time, resulting in a mixed tensor transformation law:

$$T^{a'\mu'}{}_{b'\nu'} = \Lambda^{a'}{}_a\,\frac{\partial x^{\mu'}}{\partial x^\mu}\,\Lambda^b{}_{b'}\,\frac{\partial x^\nu}{\partial x^{\nu'}}\, T^{a\mu}{}_{b\nu}\ . \qquad (3.127)$$

Translating what we know about tensors into non-coordinate bases is for the most part merely a matter of sticking vielbeins in the right places. The crucial exception comes when we begin to differentiate things. In our ordinary formalism, the covariant derivative of a tensor is given by its partial derivative plus correction terms, one for each index, involving the tensor and the connection coefficients. The same procedure will continue to be true for the non-coordinate basis, but we replace the ordinary connection coefficients $\Gamma^\lambda_{\mu\nu}$ by the spin connection, denoted $\omega_\mu{}^a{}_b$. Each Latin index gets a factor of the spin connection in the usual way:

$$\nabla_\mu X^a{}_b = \partial_\mu X^a{}_b + \omega_\mu{}^a{}_c\, X^c{}_b - \omega_\mu{}^c{}_b\, X^a{}_c\ . \qquad (3.128)$$

(The name "spin connection" comes from the fact that this can be used to take covariant derivatives of spinors, which is actually impossible using the conventional connection coefficients.) In the presence of mixed Latin and Greek indices we get terms of both kinds.

The usual demand that a tensor be independent of the way it is written allows us to derive a relationship between the spin connection, the vielbeins, and the $\Gamma^\nu_{\mu\lambda}$'s.
Consider the covariant derivative of a vector X, first in a purely coordinate basis:

$$\nabla X = \left(\nabla_\mu X^\nu\right) dx^\mu \otimes \partial_\nu = \left(\partial_\mu X^\nu + \Gamma^\nu_{\mu\lambda}X^\lambda\right) dx^\mu \otimes \partial_\nu\ . \qquad (3.129)$$

Now find the same object in a mixed basis, and convert into the coordinate basis:

$$\nabla X = \left(\nabla_\mu X^a\right) dx^\mu \otimes \hat e_{(a)}$$
$$= \left(\partial_\mu X^a + \omega_\mu{}^a{}_b\, X^b\right) dx^\mu \otimes \hat e_{(a)}$$
$$= \left(\partial_\mu\left(e^a{}_\nu X^\nu\right) + \omega_\mu{}^a{}_b\, e^b{}_\lambda X^\lambda\right) e^\sigma{}_a\, dx^\mu \otimes \partial_\sigma$$
$$= \left(\partial_\mu X^\nu + e^\nu{}_a\, X^\lambda\,\partial_\mu e^a{}_\lambda + e^\nu{}_a\, e^b{}_\lambda\,\omega_\mu{}^a{}_b\, X^\lambda\right) dx^\mu \otimes \partial_\nu\ . \qquad (3.130)$$

Comparison with (3.129) reveals

$$\Gamma^\nu_{\mu\lambda} = e^\nu{}_a\,\partial_\mu e^a{}_\lambda + e^\nu{}_a\, e^b{}_\lambda\,\omega_\mu{}^a{}_b\ , \qquad (3.131)$$

or equivalently

$$\omega_\mu{}^a{}_b = e^a{}_\nu\, e^\lambda{}_b\,\Gamma^\nu_{\mu\lambda} - e^\lambda{}_b\,\partial_\mu e^a{}_\lambda\ . \qquad (3.132)$$

A bit of manipulation allows us to write this relation as the vanishing of the covariant derivative of the vielbein,

$$\nabla_\mu e^a{}_\nu = \partial_\mu e^a{}_\nu - \Gamma^\lambda_{\mu\nu}\, e^a{}_\lambda + \omega_\mu{}^a{}_b\, e^b{}_\nu = 0\ , \qquad (3.133)$$

which is sometimes known as the "tetrad postulate." Note that this is always true; we did not need to assume anything about the connection in order to derive it. Specifically, we did not need to assume that the connection was metric compatible or torsion free.

Since the connection may be thought of as something we need to fix up the transformation law of the covariant derivative, it should come as no surprise that the spin connection does not itself obey the tensor transformation law. Actually, under GCT's the one lower Greek index does transform in the right way, as a one-form. But under LLT's the spin connection transforms inhomogeneously, as

$$\omega_\mu{}^{a'}{}_{b'} = \Lambda^{a'}{}_a\,\Lambda^b{}_{b'}\,\omega_\mu{}^a{}_b - \Lambda^c{}_{b'}\,\partial_\mu\Lambda^{a'}{}_c\ . \qquad (3.134)$$

You are encouraged to check for yourself that this results in the proper transformation of the covariant derivative.

So far we have done nothing but empty formalism, translating things we already knew into a new notation. But the work we are doing does buy us two things. The first, which we already alluded to, is the ability to describe spinor fields on spacetime and take their covariant derivatives; we won't explore this further right now. The second is a change in viewpoint, in which we can think of various tensors as tensor-valued differential forms. For example, an object like $X_\mu{}^a$, which we think of as a (1, 1) tensor written with mixed indices, can also be thought of as a "vector-valued one-form." It has one lower Greek index, so we think of it as a one-form, but for each value of the lower index it is a vector. Similarly a tensor $A_{\mu\nu}{}^a{}_b$, antisymmetric in $\mu$ and $\nu$, can be thought of as a "(1, 1)-tensor-valued two-form." Thus, any tensor with some number of antisymmetric lower Greek indices and some number of Latin indices can be thought of as a differential form, but taking values in the tensor bundle. (Ordinary differential forms are simply scalar-valued forms.) The usefulness of this viewpoint comes when we consider exterior derivatives. If we want to think of $X_\mu{}^a$ as a vector-valued one-form, we are tempted to take its exterior derivative:

$$(dX)_{\mu\nu}{}^a = \partial_\mu X_\nu{}^a - \partial_\nu X_\mu{}^a\ . \qquad (3.135)$$

It is easy to check that this object transforms like a two-form (that is, according to the transformation law for (0, 2) tensors) under GCT's, but not as a vector under LLT's (the Lorentz transformations depend on position, which introduces an inhomogeneous term into the transformation law). But we can fix this by judicious use of the spin connection, which can be thought of as a one-form. (Not a tensor-valued one-form, due to the nontensorial transformation law (3.134).) Thus, the object

$$(dX)_{\mu\nu}{}^a + (\omega \wedge X)_{\mu\nu}{}^a = \partial_\mu X_\nu{}^a - \partial_\nu X_\mu{}^a + \omega_\mu{}^a{}_b\, X_\nu{}^b - \omega_\nu{}^a{}_b\, X_\mu{}^b\ , \qquad (3.136)$$

as you can verify at home, transforms as a proper tensor.

An immediate application of this formalism is to the expressions for the torsion and curvature, the two tensors which characterize any given connection. The torsion, with two antisymmetric lower indices, can be thought of as a vector-valued two-form $T_{\mu\nu}{}^a$.
The curvature, which is always antisymmetric in its last two indices, is a (1, 1)-tensor-valued two-form, $R^a{}_{b\mu\nu}$. Using our freedom to suppress indices on differential forms, we can write the defining relations for these two tensors as

$$T^a = de^a + \omega^a{}_b \wedge e^b \qquad (3.137)$$

and

$$R^a{}_b = d\omega^a{}_b + \omega^a{}_c \wedge \omega^c{}_b\ . \qquad (3.138)$$

These are known as the Maurer-Cartan structure equations. They are equivalent to the usual definitions; let's go through the exercise of showing this for the torsion, and you can check the curvature for yourself. We have

$$T_{\mu\nu}{}^\lambda = e^\lambda{}_a\, T_{\mu\nu}{}^a$$
$$= e^\lambda{}_a\left(\partial_\mu e^a{}_\nu - \partial_\nu e^a{}_\mu + \omega_\mu{}^a{}_b\, e^b{}_\nu - \omega_\nu{}^a{}_b\, e^b{}_\mu\right)$$
$$= \Gamma^\lambda_{\mu\nu} - \Gamma^\lambda_{\nu\mu}\ , \qquad (3.139)$$

which is just the original definition we gave. Here we have used (3.131), the expression for the $\Gamma^\lambda_{\mu\nu}$'s in terms of the vielbeins and spin connection. We can also express identities obeyed by these tensors as

$$dT^a + \omega^a{}_b \wedge T^b = R^a{}_b \wedge e^b \qquad (3.140)$$

and

$$dR^a{}_b + \omega^a{}_c \wedge R^c{}_b - R^a{}_c \wedge \omega^c{}_b = 0\ . \qquad (3.141)$$

The first of these is the generalization of $R^\rho{}_{[\sigma\mu\nu]} = 0$, while the second is the Bianchi identity $\nabla_{[\lambda|}R^\rho{}_{\sigma|\mu\nu]} = 0$. (Sometimes both equations are called Bianchi identities.)

The form of these expressions leads to an almost irresistible temptation to define a "covariant-exterior derivative", which acts on a tensor-valued form by taking the ordinary exterior derivative and then adding appropriate terms with the spin connection, one for each Latin index. Although we won't do that here, it is okay to give in to this temptation, and in fact the right hand side of (3.137) and the left hand sides of (3.140) and (3.141) can be thought of as just such covariant-exterior derivatives. But be careful, since (3.138) cannot; you can't take any sort of covariant derivative of the spin connection, since it's not a tensor.

So far our equations have been true for general connections; let's see what we get for the Christoffel connection. The torsion-free requirement is just that (3.137) vanish; this does not lead immediately to any simple statement about the coefficients of the spin connection. Metric compatibility is expressed as the vanishing of the covariant derivative of the metric: $\nabla g = 0$. We can see what this leads to when we express the metric in the orthonormal basis, where its components are simply $\eta_{ab}$:

$$\nabla_\mu\eta_{ab} = \partial_\mu\eta_{ab} - \omega_\mu{}^c{}_a\,\eta_{cb} - \omega_\mu{}^c{}_b\,\eta_{ac}$$
$$= -\omega_{\mu ab} - \omega_{\mu ba}\ . \qquad (3.142)$$

Then setting this equal to zero implies

$$\omega_{\mu ab} = -\omega_{\mu ba}\ . \qquad (3.143)$$

Thus, metric compatibility is equivalent to the antisymmetry of the spin connection in its Latin indices. (As before, such a statement is only sensible if both indices are either upstairs or downstairs.) These two conditions together allow us to express the spin connection in terms of the vielbeins. There is an explicit formula which expresses this solution, but in practice it is easier to simply solve the torsion-free condition

$$de^a = -\omega^a{}_b \wedge e^b\ , \qquad (3.144)$$

using the antisymmetry of the spin connection, to find the individual components.
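To see how solving (3.144) works in practice, take the two-sphere again, with orthonormal one-forms $e^{\hat 1} = a\, d\theta$ and $e^{\hat 2} = a\sin\theta\, d\phi$. The single independent spin-connection component is then $\omega^{\hat 1}{}_{\hat 2} = -\cos\theta\, d\phi$ (a standard result, quoted rather than derived here). The sketch below (mine, not from the notes) verifies the torsion-free condition by representing one-forms as coefficient pairs in the basis $(d\theta, d\phi)$ and wedging by hand.

```python
import sympy as sp

a_, th, ph = sp.symbols('a theta phi', positive=True)

def d(one_form):
    """Exterior derivative of a one-form f dtheta + g dphi,
    returned as its single dtheta^dphi coefficient."""
    f, g = one_form
    return sp.simplify(sp.diff(g, th) - sp.diff(f, ph))

def wedge(al, be):
    """dtheta^dphi coefficient of the wedge product of two one-forms."""
    return sp.simplify(al[0]*be[1] - al[1]*be[0])

e1 = (a_, 0)                  # e^1 = a dtheta
e2 = (0, a_ * sp.sin(th))     # e^2 = a sin(theta) dphi
w12 = (0, -sp.cos(th))        # omega^1_2 = -cos(theta) dphi
w21 = (0, sp.cos(th))         # omega^2_1 = +cos(theta) dphi, by antisymmetry (3.143)

# torsion-free condition (3.144): de^a + omega^a_b ^ e^b = 0
print(d(e1) + wedge(w12, e2))   # 0
print(d(e2) + wedge(w21, e1))   # 0
```

In two dimensions the antisymmetry (3.143) leaves a single unknown function, so the two equations above fix the spin connection completely; in higher dimensions the same procedure works component by component.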
We now have the means to compare the formalism of connections and curvature in Riemannian geometry to that of gauge theories in particle physics. (This is an aside, which is hopefully comprehensible to everybody, but not an essential ingredient of the course.) In both situations, the fields of interest live in vector spaces which are assigned to each point in spacetime. In Riemannian geometry the vector spaces include the tangent space, the cotangent space, and the higher tensor spaces constructed from these. In gauge theories, on the other hand, we are concerned with "internal" vector spaces. The distinction is that the tangent space and its relatives are intimately associated with the manifold itself, and were naturally defined once the manifold was set up; an internal vector space can be of any dimension we like, and has to be defined as an independent addition to the manifold. In math lingo, the union of the base manifold with the internal vector spaces (defined at each point) is a fiber bundle, and each copy of the vector space is called the "fiber" (in perfect accord with our definition of the tangent bundle). Besides the base manifold (for us, spacetime) and the fibers, the other important ingredient in the definition of a fiber bundle is the "structure group," a Lie group which acts on the fibers to describe how they are sewn together on overlapping coordinate patches. Without going into details, the structure group for the tangent bundle in a four-dimensional spacetime is generally GL(4, $\mathbf{R}$), the group of real invertible 4 × 4 matrices; if we have a Lorentzian metric, this may be reduced to the Lorentz group SO(3, 1). Now imagine that we introduce an internal three-dimensional vector space, and sew the fibers together with ordinary rotations; the structure group of this new bundle is then SO(3). A field that lives in this bundle might be denoted $\phi^A(x^\mu)$, where A runs from one to three; it is a three-vector (an internal one, unrelated to spacetime) for each point on the manifold. We have freedom to choose the basis in the fibers in any way we wish; this means that "physical quantities" should be left invariant under local SO(3) transformations such as

$$\phi^A \rightarrow \phi^{A'} = O^{A'}{}_A(x^\mu)\,\phi^A\ , \qquad (3.145)$$

where $O^{A'}{}_A(x^\mu)$ is a matrix in SO(3) which depends on spacetime. Such transformations are known as gauge transformations, and theories invariant under them are called "gauge theories."

For the most part it is not hard to arrange things such that physical quantities are invariant under gauge transformations. The one difficulty arises when we consider partial derivatives, $\partial_\mu\phi^A$. Because the matrix $O^{A'}{}_A(x^\mu)$ depends on spacetime, it will contribute an unwanted term to the transformation of the partial derivative. By now you should be able to guess the solution: introduce a connection to correct for the inhomogeneous term in the transformation law. We therefore define a connection on the fiber bundle to be an object $A_\mu{}^A{}_B$, with two "group indices" and one spacetime index. Under GCT's it transforms as a one-form, while under gauge transformations it transforms as

$$A_\mu{}^{A'}{}_{B'} = O^{A'}{}_A\, O^B{}_{B'}\, A_\mu{}^A{}_B - O^C{}_{B'}\,\partial_\mu O^{A'}{}_C\ . \qquad (3.146)$$

(Beware: our conventions are so drastically different from those in the particle physics literature that I won't even try to get them straight.) With this transformation law, the "gauge covariant derivative"

$$D_\mu\phi^A = \partial_\mu\phi^A + A_\mu{}^A{}_B\,\phi^B \qquad (3.147)$$

transforms "tensorially" under gauge transformations, as you are welcome to check. (In ordinary electromagnetism the connection is just the conventional vector potential. No indices are necessary, because the structure group U(1) is one-dimensional.)
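The electromagnetic case is simple enough to check in a few lines. In the conventions of (3.146)-(3.147) specialized to the one-dimensional group U(1) -- a hedged illustration of my own, with $O = e^{i\alpha(x)}$, so that the connection shifts by $-i\,\partial_\mu\alpha$ -- the gauge covariant derivative of a field picks up only the overall phase:

```python
import sympy as sp

x = sp.symbols('x')
alpha = sp.Function('alpha')(x)   # local gauge parameter
phi = sp.Function('phi')(x)       # charged field
A = sp.Function('A')(x)           # U(1) connection (one spacetime dimension shown)

D = lambda A_, phi_: sp.diff(phi_, x) + A_ * phi_   # eq. (3.147) for U(1)

O = sp.exp(sp.I * alpha)
phi_p = O * phi                          # gauge-transformed field, eq. (3.145)
A_p = A - sp.I * sp.diff(alpha, x)       # eq. (3.146) specialized to U(1)

# covariance: D'phi' should equal e^{i alpha} D phi
print(sp.simplify(D(A_p, phi_p) - O * D(A, phi)))   # 0
```

The inhomogeneous shift of the connection cancels the unwanted derivative of the gauge parameter exactly, which is the whole point of introducing the connection.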
It is clear that this notion of a connection on an internal fiber bundle is very closely related to the connection on the tangent bundle, especially in the orthonormal-frame picture we have been discussing. The transformation law (3.146), for example, is exactly the same as the transformation law (3.134) for the spin connection. We can also define a curvature or "field strength" tensor which is a two-form,

$$F^A{}_B = dA^A{}_B + A^A{}_C \wedge A^C{}_B\ , \qquad (3.148)$$

in exact correspondence with (3.138). We can parallel transport things along paths, and there is a construction analogous to the parallel propagator; the trace of the matrix obtained by parallel transporting a vector around a closed curve is called a "Wilson loop."

We could go on in the development of the relationship between the tangent bundle and internal vector bundles, but time is short and we have other fish to fry. Let us instead finish by emphasizing the important difference between the two constructions. The difference stems from the fact that the tangent bundle is closely related to the base manifold, while other fiber bundles are tacked on after the fact. It makes sense to say that a vector in the tangent space at p "points along a path" through p; but this makes no sense for an internal vector bundle. There is therefore no analogue of the coordinate basis for an internal space -- partial derivatives along curves have nothing to do with internal vectors. It follows in turn that there is nothing like the vielbeins, which relate orthonormal bases to coordinate bases. The torsion tensor, in particular, is only defined for a connection on the tangent bundle, not for any gauge theory connections; it can be thought of as the covariant exterior derivative of the vielbein, and no such construction is available on an internal bundle. You should appreciate the relationship between the different uses of the notion of a connection, without getting carried away.
Until recently, I thought that graph theory was a topic well-suited for math olympiads, but a very small field of current mathematical research without many connections to "deeper" areas of mathematics. But then I stumbled over Bela Bollobas' "Modern Graph Theory", in which he states:

"The time has now come when graph theory should be part of the education of every serious student of mathematics and computer sciences, both for its own sake and to enhance the appreciation of mathematics as a whole."

Thus, I'm wondering whether I should deepen my knowledge of graph theory. I find topics like spectral and random graph theory very interesting, but I don't think that I am ever going to do research on purely graph-theoretic questions. On the contrary, I'm mainly interested in areas like algebraic topology, algebraic number theory and differential topology, and I'm wondering if it's useful to have some knowledge of graph theory when engaging with these topics.

So my question is: why should students like me, who aspire to a research career in mathematical areas not directly related to graph theory, study graphs?

Comments:

• I have a really basic grounding in graph theory from one brief topic in a course I took as an undergraduate, and I would also like to hear what people have to say about this. -- Marra, Apr 26 '13
• Topology, algebraic geometry, and number theory come together when one studies dessins d'enfants. See en.wikipedia.org/wiki/Dessin_d%27enfant -- Baby Dragon, Apr 26 '13
• Also, graphs are easy enough to define and think about, but complicated enough that almost anything can be phrased in graph theory. -- Baby Dragon, Apr 26 '13
• Are you surprised to see a graph theory book selling graph theory as an essential tool? (Regardless of the truthfulness of this proposition...) -- Asaf Karagila, Apr 26 '13

3 Answers

Accepted answer:

If you're more interested in algebraic topology, I suggest not spending much time studying the combinatorial aspects of graph theory. It is true that graphs in this guise do appear in such areas; for instance, one uses Dynkin diagrams (which are graphs) to classify algebraic groups and also Lie groups. It's really very elegant and useful for work in algebraic groups, but you need very little graph theory for this. Graphs are often used where there is some combinatorial structure, but again I doubt (though perhaps I am wrong) that knowing lots of graph theory (as one would find in a typical book like Bondy's) would help too much.

"Graph theory" covers much more than just this, however. For instance, an esperantist family (a generalisation of an expander family) of graphs arises naturally as a certain family of Cayley graphs associated to finite groups that are quotients of fundamental groups (as Riemann surfaces) of algebraic curves, which come from any family of etale covers. This can be used to prove interesting results about families of various arithmetic objects and how they behave generically. An excellent starting point for these topics is the paper by Ellenberg, Hall, and Kowalski, "Expander graphs, gonality, and variation of Galois representations". This source hopefully should spark your imagination about such topics and encourage you to read up on them.
The kind of graph theory covered in a typical undergraduate course isn't, I think, so prevalent in everyday algebraic topology and related fields, since "typical graph theory" studies properties that aren't invariant under homotopy, and homotopy invariance is the stuff that algebraic topology is built upon. There is, however, a kind of "graph theory" that is extremely useful in topology and number theory: the theory of simplicial sets (and simplicial objects in any category)! This doesn't just look at graphs, though, but at objects built from higher simplices too.

The basic theory of simplicial objects in algebraic topology covers homotopy-type stuff. Simplicial objects, for instance simplicial sets, are completely combinatorially defined. For instance, "nice" simplicial sets, called fibrant ones, have a notion of fundamental group, and there is a functor from simplicial sets to spaces called "geometric realization" that sends a simplicial set to a space (which for a graph would be the obvious topological space), and the notion of fundamental group agrees with the combinatorially defined one. Simplicial sets are fundamental to many areas of algebra, such as: $K$-theory (they are typically used to define the higher $K$-groups), higher category theory (which is a generalisation of category theory and also has applications to $K$-theory), homological algebra (an essential tool; the category of nonnegative chain complexes of $R$-modules is equivalent to the category of simplicial objects in the category of $R$-modules), algebraic topology itself of course, algebraic geometry (for things like $\mathbb{A}^1$ homotopy theory), and tons more stuff that I don't know about, I'm sure.

Good sources for simplicial objects are:

• May, "Simplicial Objects in Algebraic Topology"
• Ch. 8 of Weibel's "An Introduction to Homological Algebra" (you probably should start here!)
• Goerss's book "Simplicial Homotopy Theory"
• Moerdijk and Toen's "Simplicial Methods for Operads and Algebraic Geometry" (Part 2 is about algebraic geometry)
• Ferrario and Piccinini's "Simplicial Structures in Topology" (more topology)

Mathematics is not so neatly divided into different subjects as it might seem right now. It is some kind of vast mountain, and most of it is obscured by clouds and very hard to see. It is valuable to try to look at the mountain from many different perspectives; in doing so you might see some part of the mountain you couldn't see otherwise, and that helps you better understand the mountain as a whole (which is valuable even if you currently think you are only interested in one small part of the mountain). Graph theory is one of those perspectives. More specifically, here are some interesting connections I've learned about between graph theory and other fields of mathematics over the years.

• Graphs can be used to analyze decompositions of tensor products of representations in representation theory. See, for example, this blog post. This is related to a beautiful picture called the McKay correspondence; see, for example, this blog post. (There are some more sophisticated aspects of the McKay correspondence involving algebraic geometry that I don't touch on in that post, though.)

• Graphs can be used as a toy model for Riemannian manifolds. For example, like a Riemannian manifold, they have a Laplacian. This lets you write down various analogues of differential equations on a graph, such as the heat equation and the wave equation.
In this blog post I describe the Schrödinger equation on a finite graph as a toy model of quantum mechanics (a minimal numerical sketch of this setup appears at the end of this thread).

• Graphs can also be used as a toy model for algebraic curves. For example, like an algebraic curve, they have a notion of divisor and divisor class group. See, for example, this paper.

• Graphs can also be used as a toy model for number fields. For example, like (the ring of integers of) a number field, they have a notion of prime and zeta function, and there is even an analogue of the Riemann hypothesis in this setting. See, for example, this book.

But there is something to be said for learning about graphs for their own sake.

The only reason is that it is an active field nowadays and you should have as many possibilities in front of you when choosing your path. If you end up working in contact geometry, graph theory probably won't help you a whole lot.

See the abstract to mathunion.org/ICM/ICM1986.1/Main/icm1986.1.0531.0539.ocr.pdf – Baby Dragon Apr 26 '13 at 21:52
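As promised in the bullet above, here is a minimal numerical sketch of the graph Laplacian and of the Schrödinger equation on a finite graph. This is an illustration added for concreteness, not code from the post being referenced: the 4-cycle graph, the initial state, and the units ($\hbar = 1$) are all arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of a 4-cycle (vertices 0-1-2-3-0); any small graph works.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A           # combinatorial Laplacian L = D - A

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                            # state localized on vertex 0

t = 1.0
psi_t = expm(-1j * L * t) @ psi0         # evolve under i dpsi/dt = L psi
print(np.abs(psi_t)**2)                  # vertex probabilities
```

Because $L$ is real and symmetric, $e^{-iLt}$ is unitary, so the vertex probabilities always sum to 1; this is the graph-theoretic shadow of unitary time evolution.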
I'm reading the Wikipedia page for the Dirac equation:

$J = -\frac{i\hbar}{2m}(\phi^*\nabla\phi - \phi\nabla\phi^*)$

with the conservation of probability current and density following from the Schrödinger equation:

$\nabla\cdot J + \frac{\partial\rho}{\partial t} = 0.$

The fact that the density is positive definite and convected according to this continuity equation implies that we may integrate the density over a certain domain and set the total to 1, and this condition will be maintained by the conservation law. A proper relativistic theory with a probability density current must also share this feature. Now, if we wish to maintain the notion of a convected density, then we must generalize the Schrödinger expression of the density and current so that the space and time derivatives again enter symmetrically in relation to the scalar wave function. We are allowed to keep the Schrödinger expression for the current, but must replace the probability density by the symmetrically formed expression

$\rho = \frac{i\hbar}{2m}(\psi^*\partial_t\psi - \psi\partial_t\psi^*).$

which now becomes the 4th component of a space-time vector, and the entire 4-current density has the relativistically covariant expression

$J^\mu = \frac{i\hbar}{2m}(\psi^*\partial^\mu\psi - \psi\partial^\mu\psi^*)$

The continuity equation is as before. Everything is compatible with relativity now, but we see immediately that the expression for the density is no longer positive definite - the initial values of both ψ and ∂tψ may be freely chosen, and the density may thus become negative, something that is impossible for a legitimate probability density. Thus we cannot get a simple generalization of the Schrödinger equation under the naive assumption that the wave function is a relativistic scalar, and the equation it satisfies, second order in time.

I am not sure how one gets the new $\rho$ and $J^\mu$. How does one derive these two? And can anyone show me why the expression for the density is not positive definite?

any comment...? – Paul Reubens Oct 7 '12 at 6:41

please see below, hope that helps – user11547 Oct 7 '12 at 17:17

1 Answer

This particular writing of the problem in the article I have always thought was sloppy as well. The most confusing part of the discussion is the statement "The continuity equation is as before". At first one writes the continuity equation as:

$$\nabla \cdot J + \dfrac{\partial\rho}{\partial t} = 0$$

Although the del operator can be defined in any number of dimensions, it is frequently reserved for three dimensions, and so the construction of the sentence does not provide a clear interpretation. If you look up conserved current you find the 4-vector version of the continuity equation:

$$\partial_\mu j^\mu = 0$$

What is important about the derivation in the Wikipedia article is the conversion of the density containing no time derivatives into one built from time derivatives, or rather:

$$\rho = \phi^*\phi$$

$$\rho = \dfrac{i\hbar}{2m}(\psi^*\partial_t\psi - \psi\partial_t\psi^*)$$

The intent is clear: they want to make the time component have the same form as the space components. The equation for the current is now:

$$J^\mu = \dfrac{i\hbar}{2m}(\psi^*\partial^\mu\psi - \psi\partial^\mu\psi^*)$$

which now contains the time component. So the continuity equation that should be used is:

$$\partial_\mu J^\mu = 0$$

where the capitalization of $J$ appears to be an arbitrary choice in the derivation.
One can verify that this is the intent by referring to the article on probability current. From the above I can see that the sudden insertion of the statement that one can arbitrarily pick $$\psi$$ and $$\dfrac{\partial \psi}{\partial t}$$ isn't well explained. This part of the article was a source of confusion for me as well, until one realizes that the author was trying to get to a discussion of the Klein-Gordon equation.

A quick search of the web for "probability current and Klein-Gordon equation" finds good links, including a good one from the physics department at UC Davis. If you follow the discussion in the paper you can see it confirms that the argument is really trying to get to a discussion of the Klein-Gordon equation and make the connection to probability density.

Now, if one does another quick search for "negative solutions to the Klein-Gordon equation" one can find a nice paper from the physics department of Ohio University. There we get some good discussion around equation 3.13 in the paper, which reiterates that, when we redefined the density, we introduced some additional freedom. So the equation:

$$\rho = \dfrac{i\hbar}{2mc^2}(\psi^*\partial_t\psi - \psi\partial_t\psi^*)$$

(where in the original, c was set to 1) really is at the root of the problem (confirming the intent in the original article). However, it probably still doesn't satisfy the question, "can anyone show me why the expression for density is not positive definite?", but if one goes on a little shopping spree you can find the book Quantum Field Theory Demystified by David McMahon (and there are some free downloads out there, but I won't link to them out of respect for the author), and if you go to p. 116 you will find the discussion:

Remembering the free particle solution

$$\varphi(\vec{x},t) = e^{-ip\cdot x} = e^{-i(Et- px)}$$

the time derivatives are

$$\dfrac{\partial\varphi}{\partial t} = -iEe^{-i(Et- px)}$$

$$\dfrac{\partial\varphi^*}{\partial t} = iEe^{i(Et- px)}$$

We have

$$\varphi^*\dfrac{\partial\varphi}{\partial t} = e^{i(Et- px)}[-iEe^{-i(Et- px)}] = -iE$$

$$\varphi\dfrac{\partial\varphi^*}{\partial t} = e^{-i(Et- px)}[iEe^{i(Et- px)}] = iE$$

So the probability density is

$$\rho = i(\varphi^*\dfrac{\partial\varphi}{\partial t} - \varphi\dfrac{\partial\varphi^*}{\partial t}) = i(-iE-iE) = 2E$$

Looks good so far, except for those pesky negative energy solutions. Remember that

$$E = \pm\sqrt{p^2+m^2}$$

In the case of the negative energy solution

$$\rho = 2E =-2\sqrt{p^2+m^2}<0$$

which is a negative probability density, something which simply does not make sense.

Hopefully that helps. The notion of a negative probability does not make sense because probability is defined on the interval [0,1]; by definition, negative probabilities have no meaning. This point is sometimes lost on people when they try to make sense of things, but logically any discussion of negative probabilities is nonsense. This is why QFT ended up reinterpreting the Klein-Gordon equation and repurposing it as an equation that governs creation and annihilation operators.
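As a quick symbolic check of the excerpt above, here is a small sketch (an added illustration, using the same conventions as the McMahon excerpt: $\hbar = 1$ and the $1/2m$ prefactor dropped). The point is only that the density comes out proportional to $E$, and hence negative on the negative-energy branch.

```python
import sympy as sp

t, x, p, E = sp.symbols('t x p E', real=True)    # E may be negative

phi = sp.exp(-sp.I*(E*t - p*x))                  # free-particle plane wave
rho = sp.I*(sp.conjugate(phi)*sp.diff(phi, t)
            - phi*sp.diff(sp.conjugate(phi), t))
print(sp.simplify(rho))                          # prints 2*E: negative when E < 0
```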
I have a question regarding the derivation of the time-independent Schrödinger equation. If we have a time-independent Hamiltonian, we can solve the Schrödinger equation by the method of separation of variables: we write our general solution as $\psi(r,t) = \psi(r)f(t)$ and we get two equations, one for $f(t)$ and one for $\psi(r)$:

$$ f(t)=e^{-\frac{i}{\hbar}Ht} $$

and

$$ H\psi(r) = E\psi(r) $$

Books in general refer to the second equation as the TISE, and it is seen as an eigenvalue problem for the Hamiltonian, in order to find the stationary states. Now, what I don't understand is why we see that equation as an eigenvalue problem for the Hamiltonian, since we have a wave function $\psi(x)$ which is supposed to be an eigenstate of $H$. If $\psi(x)$ is an eigenstate of $H$, it would mean that $H$ is diagonal in the coordinate basis, but I know that's not true, since $H$ contains the term $\frac{P^2}{2m}$, which is not diagonal in the coordinate basis. Where am I wrong? Thank you very much.

$H$ isn't diagonal in the coordinate basis, but in the $\psi(r)$ eigenbasis you're computing... – Christoph Jan 21 '14 at 10:27

OK, thank you; it deals with $\psi(r)$ and not the coordinates, actually. But why are we sure that $\psi(r)$ is an eigenfunction of $H$? When I separate the variables I just say I have a function $f(t)$ and a function $\psi(r)$; it could be any function, to be determined. – Danny Jan 21 '14 at 10:48

1 Answer

Start with the time-dependent Schrödinger equation

$$ \hat H \Psi(r, t) = i \hbar \partial_t \Psi (r, t). $$

Using the ansatz $\Psi(r,t) = \psi(r) f(t)$ yields

$$ f(t) \hat H \psi(r) = i \hbar \psi(r) \partial_t f(t)$$

and, via the standard separation-of-variables trick,

$$ i\hbar\frac{\dot f(t)}{f(t)} = \text{const} = \frac{\hat H\psi(r)}{\psi(r)}; $$

the two sides are equal, but the LHS depends only on $t$ while the RHS depends only on $r$, so they each have to be constant. Let us call this constant $E$. Then, for the time-dependent part, we get

$$ \dot f(t) = -i \frac{E}{\hbar} f(t) $$

which is manifestly solved by

$$ f(t) = \exp\left\{-i \frac{E}{\hbar} t\right\}. $$

For the spatial part, we find the time-independent Schrödinger equation

$$ \hat H \psi(r) = E \psi(r), $$

which, as you observe, can be viewed as an eigenvalue equation for $\hat H$. This motivates the choice of name for $E$: physically, the eigenvalue of the Hamiltonian is the energy.
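To see concretely how both of the asker's observations can be true at once, here is a minimal numerical sketch (an added illustration with $\hbar = m = 1$; the harmonic potential and grid parameters are arbitrary choices). The discretized Hamiltonian is visibly not diagonal in the coordinate (grid) basis, because of the off-diagonal kinetic entries, yet diagonalizing it yields the stationary states and energies of the TISE.

```python
import numpy as np

# Discretize H = -(1/2) d^2/dx^2 + V(x) on a grid (hbar = m = 1).
N, L = 500, 10.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

V = 0.5 * x**2                    # harmonic-oscillator potential as a test case
kinetic = (2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / (2*dx**2)
H = kinetic + np.diag(V)          # not diagonal in the coordinate basis

E, psi = np.linalg.eigh(H)        # but diagonal in its own eigenbasis
print(E[:4])                      # approximately 0.5, 1.5, 2.5, 3.5
```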
For an observable $A$ and a Hamiltonian $H$, Wikipedia gives the time evolution equation for $A(t) = e^{iHt/\hbar} A e^{-iHt/\hbar}$ in the Heisenberg picture as $$\frac{d}{dt} A(t) = \frac{i}{\hbar} [H, A] + \frac{\partial A}{\partial t}.$$ From their derivation it sure looks like $\frac{\partial A}{\partial t}$ is supposed to be the derivative of the original operator $A$ with respect to $t$ and $\frac{d}{dt} A(t)$ is the derivative of the transformed operator. However, the Wikipedia derivation then goes on to say that $\frac{\partial A}{\partial t}$ is the derivative with respect to time of the transformed operator. But if that's true, then what does $\frac{d}{dt} A(t)$ mean? Or is that just a mistake? (I need to know which term to get rid of if $A$ is time-independent in the Schrödinger picture. I think it's $\frac{\partial A}{\partial t}$, but you can never be too sure of these things.)

@Qiaochu Yuan: Please link to the Wikipedia page, so we can improve the Wikipedia page if needed. – Qmechanic Jun 19 '11 at 13:55

@Qmechanic: thanks! It looks like the general issue is that we want to differentiate functions of time and other variables which are themselves functions of time. But here $A$, as an operator, is only (possibly) a function of time, yes? – Qiaochu Yuan Jun 19 '11 at 13:58

@Qiaochu Yuan: Often, $\hat{A}(t)= \sum_r\sum_{i's} f_{i_1,\ldots i_r}(t) ~\hat{z}^{i_1}(t)\ldots \hat{z}^{i_r}(t)$, where $\hat{z}^{i}(t)$ are the fundamental phase space operators (positions and momenta). Explicit differentiation means differentiation of the $f_i$ coefficient functions. – Qmechanic Jun 19 '11 at 14:21

@Qiaochu Yuan: The relation $\hat{A}(t)=e^{\frac{i}{\hbar}{\hat{H}}t}\hat{A}(0)e^{-\frac{i}{\hbar}\hat{H}t}$ is only true if $\hat{H}$ and $\hat{A}(t)$ do not depend explicitly on time $t$. (Here and above I have only addressed the Heisenberg picture.) – Qmechanic Jun 19 '11 at 16:11

There is no mistake on the Wikipedia page, and all the equations and statements are consistent with each other. In

$$A_{\rm Heis.}(t) = e^{iHt/\hbar} A e^{-iHt/\hbar}$$

the letter $A$ in the middle of the product represents the Schrödinger picture operator $A = A_{\rm Schr.}$ that is not evolving with time, because in the Schrödinger picture the dynamical evolution is guaranteed by the evolution of the state vector $|\psi\rangle$. However, this doesn't mean that the time derivative $dA_{\rm Schr.}/dt=0$. Instead, we have

$$ \frac{dA_{\rm Schr.}}{dt} = \frac{\partial A_{\rm Schr.}}{\partial t} $$

Here, $A_{\rm Schr.}$ is meant to be a function of $x_i, p_j$, and $t$. In most cases there is no dependence of the Schrödinger picture operators on $t$ (which we call an "explicit dependence"), but it is possible to consider a more general case in which this explicit dependence does exist (some terms in the energy, e.g. the electrostatic energy in an external field, may be naturally time-dependent). In Schrödinger's picture, $dx_{i,\rm Schr.}/dt=0$ and $dp_{j,\rm Schr.}/dt=0$, which is why the total derivative of $A_{\rm Schr.}$ with respect to time is given just by the partial derivative with respect to time. Imagine, for example,

$$ A_{\rm Schr.}(t) = c_1 x^2 + c_2 p^2 + c_3 (t) (xp+px) $$

We would have

$$ \frac{dA_{\rm Schr.}(t)}{dt} = \frac{\partial c_3(t)}{\partial t} (xp+px).$$

These Schrödinger-picture operators are called "untransformed" on that Wikipedia page.
The transformed ones are the Heisenberg picture operators given by

$$A_{\rm Heis.}(t) = e^{iHt/\hbar} A_{\rm Schr.}(t) e^{-iHt/\hbar}$$

Their time derivative, $dA_{\rm Heis.}(t)/dt$, is more complicated. An easy differentiation gives exactly the formula involving $[H,A_{\rm Heis.}]$ that you quoted as well.

$$\frac{d}{dt} A_{\rm Heis.}(t) = \frac{i}{\hbar} [H, A_{\rm Heis.}(t)] + \frac{\partial A_{\rm Heis.}(t)}{\partial t}.$$

The two terms in the commutator arise from the $t$-derivatives of the two exponentials in the formula for the Heisenberg $A_{\rm Heis.}(t)$, while the partial derivative arises from the $dA_{\rm Schr.}/dt$ we have always had. (These simple equations remain this simple even for a time-dependent $A_{\rm Schr.}$; however, we have to assume that the total $H$ is time-independent, otherwise all the equations would get more complicated.) The two exponentials on both sides never disappear by any kind of derivative, so obviously all the appearances of $A$ in the differential equation above are $A_{\rm Heis.}$. The displayed equation above is the (only) dynamical equation for the Heisenberg picture, so it is self-contained and doesn't include any objects from other pictures. In the Heisenberg picture, it is no longer the case that $dx_{\rm Heis.}(t)/dt=0$ (not!), and the similar identity fails for $p_{\rm Heis.}(t)$ as well. $A_{\rm Heis.}(t)$ is a general function of all the basic operators $x_{i,\rm Heis.}(t)$ and $p_{j,\rm Heis.}(t)$, as well as time $t$.

This is terrible notation (on the part of the math/physics community as a whole, not this answer in particular). So if I'm reading this answer correctly, $\frac{\partial A}{\partial t}$ really means $\frac{\partial A_{\text{Schr}}(x, p, t)}{\partial t}$ (keeping $x$ and $p$ constant) whereas $\frac{d}{dt} A(t)$ means $\frac{d}{dt} \left( e^{iHt/\hbar} A_{\text{Schr}}(x(t), p(t), t) e^{-iHt/\hbar} \right)$? – Qiaochu Yuan Jun 19 '11 at 15:21

You make it sound contrived, but there is nothing contrived about it. If $A_{Heis}(t) = e.A_{Schr}.e$, then obviously $d A_{Heis}(t)/dt = d(e.A_{Schr}.e)/dt$. This is called substitution, and I am sure that most mathematicians would agree that this is true. Obviously, when we use symbols, we must know what they mean. The Wikipedia page assumes that the reader knows which $A(t)$ is the Schr. picture and which one is the Heis. picture. I just made this distinction more explicit. But when people know what the symbols mean, the notation of derivatives is unambiguous and standard. – Luboš Motl Jun 19 '11 at 15:52

@Qiaochu Yuan: The point is that implicit and explicit dependences change their meaning when going back and forth between Schroedinger and Heisenberg picture because the basic operators get changed as well. – Qmechanic Jun 19 '11 at 15:52

Just to be sure, in the equation $dA/dt = i/\hbar [H,A] +\partial A/\partial t$, all the symbols $A$ are obviously still $A_{Heis}$, including the one in the last term. This whole equation is in the Heisenberg picture - it is the dynamical equation determining the time evolution of all of physics in the Heisenberg picture, so of course it cannot rely on other pictures. – Luboš Motl Jun 19 '11 at 15:53

Dear Qiaochu, even if $A_{Heis}$ meant nothing else, we still wrote its "definition" in terms of $A_{Schr}$, by the conjugation, so it is definitely not an undefined object. It is totally well-defined - if you decide that the conjugation is its definition - and all questions about it may be answered by pure thought.
Yes, it is true that $\partial x_{Heis} / \partial t = 0$ and similarly for $p_{Heis}$. However, $dx_{Heis}/dt$ is not zero, and neither is $dp_{Heis}/dt$. – Luboš Motl Jun 19 '11 at 16:12

It's easiest to derive this from the Schrödinger picture: Let $B(t)$ be a time-dependent operator in the Schrödinger picture. The corresponding operator in the Heisenberg picture is $A(t) = e^{iHt/\hbar} B(t) e^{-iHt/\hbar}$. Differentiation with respect to $t$ gives

$$ \frac{d}{dt} A(t) = e^{iHt/\hbar} \left(\frac{i}{\hbar} H B(t) + \frac{\partial}{\partial t}B(t) - \frac{i}{\hbar} B(t) H \right) e^{-iHt/\hbar} $$

$$ = e^{iHt/\hbar} \left(\frac{i}{\hbar} [H,B(t)] + \frac{\partial}{\partial t}B(t)\right) e^{-iHt/\hbar} = \frac{i}{\hbar} [H,A(t)] + \frac{\partial A}{\partial t} $$

In other words, the last partial derivative is to be understood in the sense that you take the operator $\frac{\partial B}{\partial t}$ and "evolve it in time" via the Schrödinger equation.

Useful non-example: the velocity operator $\vec v$. The velocity operator is the derivative of the position operator, but it's the total derivative as the system evolves. Hence,

$$ \vec v = \frac{i}{\hbar} [H,\vec r] .$$

In the Schrödinger picture, the position operator is, of course, time independent. Since $H$ is time independent as well, this is also the right velocity operator in the Schrödinger picture.

The Heisenberg picture is defined as

$$A_{\mathrm{H}}(t) = e^{iHt/\hbar} A_{\mathrm{S}}(t) e^{-iHt/\hbar}$$

differentiating both sides we obtain

$$i\hbar \frac{\mathrm{d}}{\mathrm{d} t} A_{\mathrm{H}}(t) = [ A_{\mathrm{H}}(t), H] + i\hbar \left( \frac{\mathrm{d}}{\mathrm{d} t} A_{\mathrm{S}}(t) \right)_{\mathrm{H}} \>\>\>\>\>\>\>\>\>\>\>\>\>\> (1)$$

Some textbooks rewrite the last term using the notation [*]

$$\frac{\partial}{\partial t} A_{\mathrm{H}}(t) \equiv \left( \frac{\mathrm{d}}{\mathrm{d} t} A_{\mathrm{S}}(t) \right)_{\mathrm{H}}$$

[*] I agree that this notation is awkward for mathematicians (it is not a true partial derivative), and the more rigorous physics textbooks use (1) with the total time derivative.

As always in the Hamiltonian formulation of mechanics, whether classical or quantum, $$\partial A\over\partial t$$ means the way $A$ varies explicitly in time, simply from the occurrence of $t$ explicitly in its formula. But some of the other parts of the formula of $A$ might change with time also, thus contributing something to the total change in $A$ as time goes by, notated $$dA\over dt.$$ This is the same as the notation in the chain rule in several variables, where $df=\frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial t}dt$. The differential on the Left Hand Side is the « total differential » $df$, but it is the sum of two terms, only one of which is the explicit dependence of $f$ on $t$.
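For readers who like to see the operator identity in action, here is a tiny numerical sketch (an added illustration; the two-level Hamiltonian and observable are arbitrary choices, and $\hbar = 1$). It compares a finite-difference derivative of $A_{\rm Heis}(t)$ with $\frac{i}{\hbar}[H, A_{\rm Heis}(t)]$ for a Schrödinger-picture operator with no explicit time dependence, so the partial-derivative term drops out.

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.0], [0.0, -1.0]])   # H = sigma_z
A = np.array([[0.0, 1.0], [1.0, 0.0]])    # A_Schr = sigma_x, time-independent

def A_heis(t):
    U = expm(1j * H * t)                   # e^{iHt}
    return U @ A @ U.conj().T              # e^{iHt} A e^{-iHt}

t, eps = 0.3, 1e-6
numeric = (A_heis(t + eps) - A_heis(t - eps)) / (2 * eps)   # dA_H/dt
exact = 1j * (H @ A_heis(t) - A_heis(t) @ H)                # (i/hbar)[H, A_H]
print(np.allclose(numeric, exact, atol=1e-6))               # True
```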
I would like to know what quantization is. I mean, I would like some elementary examples, some soft nontechnical definition, some explanation of what mathematicians quantize. Can we quantize a function? A set? A theorem? A definition? A theory?

Ugh, can someone rewrite this question? – Scott Morrison Nov 20 '09 at 2:42

I fear that the OP might be misinterpreting the meaning of the word "theory" in QFT. – José Figueroa-O'Farrill Nov 20 '09 at 17:46

I rewrote the question. – Kristal Cantwell May 26 '13 at 16:49

11 Answers

As I'm sure you'll see from the many answers you'll get, there are lots of notions of "quantization". Here's another perspective. Recall the primary motivation of, say, algebraic geometry: a geometric space is determined by its algebra of functions. Well, actually, this isn't quite true --- a complex manifold, for example, tends to have very few entire functions (any bounded entire function on C is constant, and so there are no nonconstant entire functions on a torus, say), so in algebraic geometry, they use "sheaves", which are a way of talking about local functions. In real geometry, though (e.g. topology, or differential geometry), there are partitions of unity, and it is more-or-less true that a space is determined by its algebra of total functions. Some examples: two smooth manifolds are diffeomorphic if and only if the algebras of smooth real-valued functions on them are isomorphic. Two locally compact Hausdorff spaces are homeomorphic if and only if their algebras of continuous real-valued functions that vanish at infinity (i.e. for any epsilon there is a compact set so that the function is less than epsilon outside the compact set) are isomorphic. (From a physics point of view, it should be taken as a definition of "space" that it depends only on its algebra of functions. Said functions are the possible "observables" or "measurements" --- if you can't measure the difference between two systems, you have no right to treat them as different.)

So anyway, it can be useful to recast geometric ideas into algebraic language. Algebra is somehow more "finite" or "computable" than geometry. But not every algebra arises as the algebra of functions on a geometric space. In particular, by definition the multiplication in the algebra is "pointwise multiplication", which is necessarily commutative (the functions are valued in R or C, usually). So from this point of view, "quantum mathematics" is when you try to take geometric facts, written algebraically, and interpret them in a noncommutative algebra. For example, a space is locally compact Hausdorff iff its algebra of continuous functions is a commutative c-star algebra, and any commutative c-star algebra is the algebra of continuous functions on some space (in fact, on its spectrum). So a "quantum locally compact Hausdorff space" is a non-commutative c-star algebra. Similarly, a "quantum algebraic space" is a non-commutative polynomial algebra.

Anyway, I've explained "quantum", but not "quantization". That's because so far there's just geometry ("kinetics"), and no physics ("dynamics"). Well, a noncommutative algebra has, along with addition and multiplication, an important operation called the "commutator", defined by $[a,b]=ab-ba$. Noncommutativity says precisely that this operation is nontrivial.
Let's pick a distinguished function H, and consider the operation $[H,-]$. This is necessarily a differential operator on the algebra, in the sense that it is linear and satisfies the Leibniz product rule. If the algebra were commutative, then differential operators would be the same as vector fields on the corresponding geometric space, and thus are the same as differential equations on the space. In fact, that's still true for noncommutative algebras: we define the "time evolution" by saying that for any function (=algebra element) f, it changes in time with differential [H,f]. (Using this rule on coordinate functions defines the geometric differential equation; in noncommutative land, there does not exist a complete set of coordinate functions, as any set of coordinate functions would define a commutative algebra.)

Ok, so it might happen that for the functions you care about, $[a,b]$ is very small. To make this mathematically precise, let's say that (for the subalgebra of functions that do not have very large values) there is some central algebra element $\hbar$, such that $[a,b]$ is always divisible by $\hbar$. Let $A$ be the algebra, and consider the quotient $A/\hbar A$. If $\hbar$ is supposed to be a "very small number", then taking this quotient should only throw away fine-grained information, but some sort of "classical" geometry should still survive (notice that since $[a,b]$ is divisible by $\hbar$, it goes to $0$ in the quotient, so the quotient is commutative and corresponds to a classical geometric space). We can make this precise by demanding that there is a vector-space lift $(A/\hbar A) \to A$, and that $A$ is generated by the image of this lift along with the element $\hbar$.

Anyway, so with this whole set up, the quotient $A/\hbar A$ actually has a little more structure than just being a commutative algebra. In particular, since $[a,b]$ is divisible by $\hbar$, let's consider the element $\{a,b\} = \hbar^{-1} [a,b]$. (Let's suppose that $\hbar$ is not a zero-divisor, so that this element is well-defined.) Probably, $\{a,b\}$ is not small, because we have divided a small thing by a small thing, so that it does have a nonzero image in the quotient. This defines on the quotient the structure of a Poisson algebra. In particular, you can check that $\{H,-\}$ is a differential operator for any (distinguished) element $H$, and so still defines a "mechanics", now on a classical space.

Then quantization is the process of reversing the above quotient. In particular, lots of spaces that we care about come with canonical Poisson structures. For example, for any manifold, the algebra of functions on its cotangent bundle has a Poisson bracket. "Quantizing a manifold" normally means finding a noncommutative algebra so that some quotient (like the one above) gives the original algebra of functions on the cotangent bundle. The standard way to do this is to use Hilbert spaces and bounded operators, as I think another answerer described.

Concerning "...a space is locally compact Hausdorff iff its algebra of continuous functions is a commutative c-star algebra": Unless I misunderstand the statement, it is not true: For any topological space $X$, the bounded functions $X\to \mathbb C$ form a commutative $C^*$-algebra. – Rasmus Bentmann Nov 11 '11 at 21:52

@Rasmus: hrm, it's now been a while since c-star-algebra class, and it's not my area. But my understanding is the following. First, when I say "algebra of functions", I never mean the algebra of bounded functions.
In the real world, I usually want "all" functions, but when I am working c-star-algebraically, I mean "function that's less than $\epsilon$ outside a compact". Given $X$, the algebra of bounded functions is the algebra of functions on the Stone-Cech completion $\beta X$ of $X$, and it's not surprising for $\beta X$ to have better properties than $X$. – Theo Johnson-Freyd Nov 12 '11 at 6:41

But of course you're right, there's something wrong with the statement, because any indiscrete space has only the constant functions, which clearly form a commutative c-star algebra. Probably I should have added the word "Hausdorff" somewhere --- there's no chance of recovering non-Hausdorff structure from continuous $\mathbb C$-valued functions. – Theo Johnson-Freyd Nov 12 '11 at 6:43

I don't know what it means for a mathematician to quantize something, but I can give you a rough description, and a few specific examples, from a physicist's point of view.

Motivational fluff

When quantum mechanics was first discovered, people tended to think of it as a modified version of classical mechanics [1]. In those days, very few quantum systems were known, so people would create quantum systems by "quantizing" classical ones. To quantize a classical system is to come up with a quantum system that "behaves similarly" in some sense. For example, you generally want there to be an intuitive correspondence between the observables of a classical system and the observables of its quantization, and you generally want the expectation values of the quantized observables to obey the same equations of motion as their classical counterparts.

Because the goal of quantization is to find a quantum system that's "analogous" in some way to a given classical system, it's not a mathematically well-defined procedure, and there's no unique way of doing it. How you attempt to quantize a system, and how you decide whether or not you've succeeded, depends entirely on your motivation and goals.

The harder stuff

I've been using the phrase "quantum system" a lot---what do I really mean? In my opinion, one of the best ways to find out is to read Section 16.5 of Probability via Expectation, by Peter Whittle. Roughly speaking, a quantum system has two basic parts:

• A complex inner product space $H$, called the state space [2]. Each ray of $H$ represents a possible "pure state" of the system. A pure state is somewhat analogous to a probability distribution, in that it tells you how to assign expectation values to "observables"; in particular, it tells you how to assign probabilities to propositions.

• A collection of self-adjoint linear maps from $H$ to itself, called observables. An observable is somewhat analogous to a random variable; it represents a property of the system that can be measured and found to have a certain value. The values that an observable can take are given by its eigenvalues (or, in the infinite-dimensional case, its spectrum). Say $A$ is an observable, $a$ is an eigenvalue of $A$, and $v_1, \ldots, v_n \in H$ form an orthonormal basis for the eigenspace of $a$. If the state of the system is the ray generated by the unit vector $\psi \in H$, the probability that the observable $A$ will be found to have the value $a$ is $|\langle v_1, \psi \rangle|^2 + \ldots + |\langle v_n, \psi \rangle|^2$, where $\langle \cdot, \cdot \rangle$ is the inner product. You can then easily show that the expectation value of the observable $A$ is $\langle \psi, A \psi \rangle$.
Observables whose only eigenvalues are $1$ and $0$—that is, projection operators on $H$—play a special role, because they correspond to logical propositions about the system. The expectation value of a projection operator is just the probability of the proposition.

Most interesting quantum systems have another part, which is often very important:

• A set of unitary maps from $H$ to itself, which might be called transformations. These represent "automorphisms" of the system. In physics, many quantum systems have a one-parameter group of transformations, often denoted $U(t)$, that represents time evolution; the idea is that if the state of the system is currently (the ray generated by) $\psi$, the state will be $U(t)\psi$ after $t$ units of time have passed. Physical systems often have other transformation groups as well; for example, a quantum system that's supposed to have a "spatial orientation" will generally have a group of transformations that form a representation of $SO(3)$.

A few examples

• Quantum random walks are, as the name suggests, quantized random walks. More generally, you can quantize the idea of a Markov chain. For a great introduction, see the paper "Quantum walks and their algorithmic applications", by Andris Ambainis.

• In Sections 2 and 3 of the notes "A Short Introduction to Noncommutative Geometry", Peter Bongaarts describes quantized versions of compact topological spaces and classical mechanical systems.

• In Section 4 of the book Noncommutative Geometry (caution---big PDF), Alain Connes introduces a quantized version of calculus. Here, the observables representing complex variables are non-self-adjoint because complex variables can take on complex values. An observable representing a complex variable must therefore be allowed to have complex eigenvalues.

I hope this helps!

[1] Today, in contrast, most physicists think of classical mechanics as an approximation to quantum mechanics.

[2] If $H$ is infinite-dimensional, it's typically a separable Hilbert space. You may even need $H$ to be something fancier, like a rigged Hilbert space.

Just to restate some facts already stated in other answers, quantization can mean a few different things. In deformation quantization, we start with a classical theory given by a Poisson manifold. Then, (by definition) the algebra of functions forms a Poisson algebra. A quantization of this algebra is a noncommutative algebra with operators $X_f$ for $f$ a function. There is also a formal parameter $\hbar$. This algebra satisfies

$$ X_f\ X_g = X_{fg} + \mathcal{O}(\hbar)\ . $$

The idea of quantization is that the Poisson bracket becomes a commutator, or

$$ [X_f,X_g] = \hbar X_{\lbrace f,g \rbrace} + \mathcal{O}(\hbar^2)\ . $$

Thus, we have a noncommutative version of classical mechanics. (A small symbolic illustration of this appears at the end of this thread.) The existence of such an algebra is a theorem of Kontsevich (the case of a symplectic manifold was solved much earlier, but I forget by whom). In mathematics, there are plenty of interesting analogous situations where you have a noncommutative thingie which is, in some sense, a formal deformation of a commutative thingie. You can see the other direction of the above as an example of the following general fact: given a filtered algebra whose associated graded is commutative, there is a natural Poisson structure on the associated graded.

In physics, however, it's not enough to just deform the algebra of functions; we have to now represent things on a Hilbert space. This introduces a whole host of other problems.
In geometric quantization, this is split into two steps. Let's say we have a symplectic manifold whose symplectic form is integral. Then we can construct a line bundle with connection whose curvature is that symplectic form. The Hilbert space is the space of $L^2$ sections of this bundle. This is much too large, however, so you have to cut it down (which is step 2). In various cases, well-defined procedures exist, but I don't believe this is well-understood in general. For example, I'm not sure it's possible to represent every function as an operator.

It's probably worth pointing out that, from the point of view of physics, quantization is backwards. It is the quantum theory that is fundamental, and the classical theory should arise as some limit of the quantum theory. There's some interesting mathematics there, and also a whole lot of philosophy too.

I believe that the symplectic case was solved independently by De Wilde-Lecomte, Omori-Maeda-Yoshioka and Fedosov. – José Figueroa-O'Farrill Nov 20 '09 at 17:43

The word has many meanings in mathematics, most of them quite vague. One general way of describing what quantization is for a mathematician is the following: you have your favorite object $X$, and you find that there is a family of other objects $X_q$ parametrized by a parameter $q$ which varies in some set (or is only a 'formal parameter' in the way that the variable in a polynomial ring is 'formally' an element in an over-ring of the coefficient ring) such that for a special value $q_0$ of the parameter $q$, or, in the 'formal' case, when the parameter degenerates in some specific way, you have that $X_{q_0}$ is your original favorite $X$, and if the objects $X_q$ are in some sense (more) non-commutative than $X$, one says that the family $X_q$ is a quantization of $X$.

Very vague, I know. And this is only interesting if your $X$ is interesting, if the $X_q$ themselves are interesting, and if there is some connection between the two. For example, integer numbers are undeniably interesting objects, and they have a 'quantization', given by (one of the couple of versions of) the usual quantum integers, where this is very visible. The thing is, usually, starting from some interesting $X$, there are really not very many ways in which you can do this. For example, if you start with an enveloping algebra of a simple Lie algebra over $\mathbb C$, then there is just one way to do this (up to the appropriate way of ignoring that there are really many ways to do this).

I think you mean quantization is some kind of deformation theory. – Allen Sep 28 '12 at 11:23

There are some good long answers already, so I'm going to try to give as short an answer as possible. A quantization of $X$ is some $X_\hbar$ depending on a parameter $\hbar$ (occasionally $q=e^\hbar$ instead) such that $X=X_0$ and $X_\hbar$ is generically "less commutative" than $X$. This is by analogy with quantum physics, where $X_0$ is classical physics and $\hbar$ measures the failure of position and momentum to commute.

In mathematics, quantization often refers to some kind of deformation of a classical object. The Heisenberg Uncertainty Principle says that the position and momentum operators do not commute. In fact, $[X,P]=i\hbar$. In the limit as $\hbar\to 0$, these operators commute once again. Technically speaking, this is nonsense, as $\hbar$ is a universal constant, but in mathematics we are free to play with parameters.
A couple of examples include:

• the noncommutative torus, the universal $C^\ast$-algebra generated by two unitaries satisfying $uv=e^{i\theta} vu$. As $\theta\to 0$, we get $C(\mathbb{T}^2)$, the continuous functions on the $2$-torus. We usually think of the deformed algebra as a quantization of the commutative one.

• some quantum groups are deformations of universal enveloping algebras, i.e., we get the universal enveloping algebra as $q\to 1$.

As a physicist who has taken a bunch of Quantum Mechanics and Solid State physics, when we say "quantize your system" it means: You set up your classical Lagrangian $L$ (in terms of kinetic $K$ and potential $U$ energy), given generalized coordinates $q_i,p_i$ (usually position and momentum, but these could also be angles and angular momenta). You then take the Hamiltonian $H$ of that system, which in most cases becomes $H=K+U$. This is all in terms of your generalized coordinates. Once that is done, "quantizing" the system (or your variables) means to simply set $[q_i,p_j]=i\hbar \delta_{ij}$. The quantum mechanics is now in effect. This is known as $\textit{canonical quantization}$. Quantum Field Theory extends Quantum Mechanics: there you perform a second quantization. For instance, in using electrodynamics in quantum mechanics you simply quantize the atomic motion (which interacts with the $\textbf{E}$-field); this is the "semiclassical approach". Second quantization further quantizes this electromagnetic field, so that now the light and the atom both have discrete structures.

Suppose we are given a theory described by an action $S(\phi)$ with field $\phi \in \mathcal{P}$, where $\mathcal{P}$ is usually the set of sections of a bundle over some manifold $M$. The action admits $\mathcal{G}$, a set of gauge symmetries, $\phi \rightarrow \phi'$ such that $S(\phi) = S(\phi')$. One has quantized this theory when one has calculated, or has an algorithm that can calculate, $\int_{\mathcal{P} / \mathcal{G}} \mathcal{O}(\phi) e^{iS(\phi)/\hbar} \mathcal{D}\phi$ for any function $\mathcal{O}(\phi)$ on $\mathcal{P} / \mathcal{G}$. In the case of quantum field theory $\mathcal{D}\phi$ is usually ill-defined and the integral usually diverges. However, for a certain class of theories, so-called renormalizable theories, one can, more or less, make sense of this integral. An excellent treatment of perturbative renormalization, from a mathematical point of view, is found in Kevin Costello's soon-to-be-published book, Renormalization and Effective Field Theory.

A very basic answer: think about the classical Hamiltonian,

$$ a(x,\xi)=\vert \xi\vert^2-\frac{\kappa}{\vert x\vert},\quad \text{$\kappa>0$ parameter}. $$

The classical motion is described by the integral curves of the Hamiltonian vector field of $a$,

$$ \dot x=\frac{\partial a}{\partial\xi},\quad \dot \xi=-\frac{\partial a}{\partial x}. $$

The attempt to describe the motion of an electron around a proton by classical mechanics leads to the study of the previous integral curves, and is extremely unstable since the function $a$ is unbounded from below. If classical mechanics were governing atomic motion, matter would not exist, or would be so unstable that it could not sustain its observed structure for a long time, with electrons collapsing onto the nucleus. Now, you change the perspective and you decide, quite arbitrarily, that atomic motion will be governed by the spectral theory of the quantization of $a$, i.e.
by the self-adjoint operator

$$ -\Delta-\frac{\kappa}{\vert x\vert}=A. $$

It turns out that the spectrum of that operator is bounded from below by some fixed negative constant, and this is a way to explain the stability of matter. Moreover, the eigenvalues of $A$ describe with astonishing accuracy the energy levels of an electron around a proton (the hydrogen atom). My point is that, although quantization has many various mathematical interpretations, its success is linked to a striking physical phenomenon: matter exists with some stability, and no explanation of that fact has a classical-mechanics interpretation. Atomic mechanics had to be revisited, and quantization, quite surprisingly, provides a rather satisfactory answer. For physicists, it remains a shock that such refined mathematical objects (unbounded operators acting on a, necessarily, infinite-dimensional space) have so many things to say about nature. It's not only Einstein's "God does not play dice", but also Feynman's "Nobody understands Quantum Mechanics" and Wigner's "Unreasonable effectiveness of Mathematics."

I vote for "Nobody understands Quantum Mechanics". You are not Joking, Mr. Feynman :-) – Patrick I-Z Nov 20 '13 at 1:10

I'm gonna be a bit more down to earth and cover the basics of Weyl quantization (in units where $\hbar = 1$)... The Hamiltonian is typically introduced first: starting from the de Broglie relation $p = k$ and the Einstein-Planck relation $E = \omega$, we can regard the (Weyl) correspondence principle heuristically as arising by viewing Fourier analysis through the lens of spectral theory for self-adjoint operators: i.e., we have $p \rightarrow -i\partial_x, \quad H \rightarrow i\partial_t$, which leads immediately to the Schrödinger equation, in which the energy levels are associated with eigenvalues of the Hamiltonian. The Euclidean version is obtained by a Wick rotation: $t = -i\tau \Rightarrow \partial_t = \partial_{-i\tau} = i\partial_\tau \Rightarrow H \rightarrow -\partial_\tau.$ The time evolution operator encoding the dynamics is just $U(t) = e^{-iHt}$. The rest is details or field theory.

Here is a link to an article on quantization in physics: The article contains links to other articles on quantization, including canonical quantization, geometric quantization, and Weyl quantization. Quantization involves converting classical fields to operators acting on quantum states of the field theory.
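Several answers above describe deformation quantization, in which the commutator reproduces $\hbar$ times the Poisson bracket to leading order. Here is a small symbolic sketch of the standard example (an added illustration using the Moyal star product on $\mathbb{R}^2$ with coordinates $x, p$; the factors of $i$ and $2$ follow the usual physics normalization, and other conventions exist).

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar')

def d(f, nx, npp):
    # apply (d/dx)^nx and then (d/dp)^npp
    for _ in range(nx):
        f = sp.diff(f, x)
    for _ in range(npp):
        f = sp.diff(f, p)
    return f

def star(f, g, order=2):
    # Moyal star product f * g, truncated at hbar**order
    total = sp.Integer(0)
    for n in range(order + 1):
        term = sum(sp.binomial(n, k) * (-1)**k * d(f, n - k, k) * d(g, k, n - k)
                   for k in range(n + 1))
        total += (sp.I * hbar / 2)**n / sp.factorial(n) * term
    return sp.expand(total)

# The star commutator of x and p reproduces the canonical relation:
print(sp.simplify(star(x, p) - star(p, x)))    # I*hbar
```

For $f = x$ and $g = p$ all terms beyond first order vanish, so the truncation is exact and the star commutator is exactly $i\hbar$.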
I'm reading the Wikipedia page for the Dirac equation:

The Dirac equation is superficially similar to the Schrödinger equation for a free massive particle:

A) $-\frac{\hbar^2}{2m}\nabla^2\phi = i\hbar\frac{\partial}{\partial t}\phi.$

The left side represents the square of the momentum operator divided by twice the mass, which is the non-relativistic kinetic energy. Because relativity treats space and time as a whole, a relativistic generalization of this equation requires that space and time derivatives must enter symmetrically, as they do in the Maxwell equations that govern the behavior of light — the equations must be differentially of the same order in space and time. In relativity, the momentum and the energy are the space and time parts of a space-time vector, the 4-momentum, and they are related by the relativistically invariant relation

B) $\frac{E^2}{c^2} - p^2 = m^2c^2$

which says that the length of this vector is proportional to the rest mass m. Substituting the operator equivalents of the energy and momentum from the Schrödinger theory, we get an equation describing the propagation of waves, constructed from relativistically invariant objects,

C) $\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\phi = \frac{m^2c^2}{\hbar^2}\phi$

I am not sure how equations A and B lead to equation C. It seems to be related to substituting the special-relativity relation into the quantum-mechanical operators, but I just keep failing to get the result...

Dirac himself talks about how he derived most of his ideas in this great lecture video he did in 1973. A little shaky, but very informative on the background of his ideas. youtube.com/… – AnimatedPhysics Aug 31 '13 at 14:40

2 Answers

First, C) isn't the Dirac equation; it's the Klein-Gordon equation.

Now, to your main question. A) comes from the classical equation for a free massive particle:

$\dfrac{p^2}{2m} = E$

by making the operator (operating on $\phi$) substitutions:

$p^2 \rightarrow - \hbar^2 \nabla^2$

$E \rightarrow i \hbar \dfrac{\partial}{\partial t}$

C) comes from B) by further recognizing that:

$E^2 \rightarrow -\hbar^2 \dfrac{\partial^2}{\partial t^2}$

$$E^2 = p^2c_0^2 + m^2c_0^4$$

$$E^2\Psi=-\hbar^2\frac{\partial^2\Psi}{\partial t^2}$$

$$\left(p^2c_0^2+m^2c_0^4\right)\Psi=-\hbar^2\frac{\partial^2\Psi}{\partial t^2}$$

$$-c_0^2\hbar^2\nabla^2\Psi+m^2c_0^4\Psi=-\hbar^2\frac{\partial^2\Psi}{\partial t^2}$$

$$\frac{m^2c_0^2}{\hbar^2}\Psi=\left(\nabla^2+\frac{\partial^2}{\partial \left(ic_0t\right)^2}\right)\Psi$$

$$\left(\frac{mc_0}{\hbar}\right)^2\Psi=\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}\Psi$$

where the signature of the metric tensor is taken to be $\left(-c_0^2,1,1,1\right)$. And there you have it: the fatally flawed Klein-Gordon equation, which can neither accommodate potentials nor force the norm-squared of the wavefunction to be non-negative; the equation that Schrödinger discarded, Dirac repaired, and the Higgs field was satisfied with!
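As a quick check that the substitutions really do produce equation C, here is a small symbolic sketch (an added illustration): a plane wave obeying the dispersion relation B satisfies C identically.

```python
import sympy as sp

t, x, p, m, c, hbar = sp.symbols('t x p m c hbar', positive=True)

E = sp.sqrt(p**2*c**2 + m**2*c**4)                   # relation B (positive root)
phi = sp.exp(sp.I*(p*x - E*t)/hbar)                  # plane wave

lhs = sp.diff(phi, x, 2) - sp.diff(phi, t, 2)/c**2   # left side of equation C
rhs = (m**2*c**2/hbar**2)*phi                        # right side of equation C
print(sp.simplify(lhs - rhs))                        # 0
```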
Computational chemistry

Computational chemistry is the branch of theoretical chemistry whose major goals are to create efficient computer programs that calculate the properties of molecules (such as total energy, dipole moment, vibrational frequencies) and to apply these programs to concrete chemical objects. It is also sometimes used to cover the areas of overlap between computer science and chemistry.

In theoretical chemistry, chemists and physicists together develop algorithms and computer programs to allow precise predictions of atomic and molecular properties and/or reaction paths for chemical reactions. Computational chemists, in contrast, mostly "simply" use existing computer programs and methodologies and apply these to specific chemical questions. There are two different approaches in this:

1. computational studies can be carried out in order to find a starting point for a laboratory synthesis;
2. computational studies can be used to explore the reaction mechanisms and explain observations on laboratory reactions.

The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation. The methods that do not include empirical or semi-empirical parameters in their equations are called ab initio methods. The most popular classes of ab initio methods are: Hartree-Fock, Møller-Plesset perturbation theory, configuration interaction, coupled cluster, reduced density matrices and density functional theory. Each class contains several methods that use different variants of the corresponding class, typically geared either to calculating a specific molecular property or to application to a special set of molecules. The abundance of these approaches shows that there is no single method suitable for all purposes.

It is, in principle, possible to use one exact method (for example, full configuration interaction) and apply it to all molecules, but, although such methods are well-known and available in many programs, the computational cost of their use grows factorially (even faster than exponentially) in the number of electrons that the molecule has. Therefore a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost. Presently computational chemistry can routinely and very accurately calculate the properties of molecules that contain no more than, say, 10 electrons. The treatment of molecules that contain a few dozen electrons is practically feasible only by more approximate methods, such as DFT. There is some dispute within the field on whether the latter methods are sufficient to accurately describe complex chemical reactions, such as those in biochemistry.

A number of software packages that are self-sufficient and include many quantum-chemical methods are available. Among the most widely used are GAUSSIAN, GAMESS, Q-Chem, ACES, MOLPRO, DALTON, Spartan and PSI.
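To see why the cost of exact methods such as full configuration interaction explodes, here is a back-of-the-envelope sketch (an added illustration; the half-filled systems are an arbitrary choice). The number of Slater determinants in a full CI expansion grows combinatorially with the number of electrons and orbitals.

```python
from math import comb

def fci_determinants(n_orbitals, n_alpha, n_beta):
    # Choose which spatial orbitals the alpha and beta electrons
    # occupy, independently of each other.
    return comb(n_orbitals, n_alpha) * comb(n_orbitals, n_beta)

# Half-filled systems of increasing size: 2k electrons in 2k orbitals.
for k in (5, 10, 15, 20):
    print(2*k, fci_determinants(2*k, k, k))
# 10 electrons -> 63,504 determinants; 20 -> roughly 3.4e10; and so on.
```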
Scientific Explanation

First published Fri May 9, 2003; substantive revision Wed Sep 24, 2014

Issues concerning scientific explanation have been a focus of philosophical attention from Pre-Socratic times through the modern period. However, recent discussion really begins with the development of the Deductive-Nomological (DN) model. This model has had many advocates (including Popper 1935, 1959; Braithwaite 1953; Gardiner 1959; Nagel 1961) but unquestionably the most detailed and influential statement is due to Carl Hempel (Hempel 1942, 1965, and Hempel & Oppenheim 1948). These papers and the reaction to them have structured subsequent discussion concerning scientific explanation to an extraordinary degree. After some general remarks by way of background and orientation (Section 1), this entry describes the DN model and its extensions, and then turns to some well-known objections (Section 2). It next describes a variety of subsequent attempts to develop alternative models of explanation, including Wesley Salmon's Statistical Relevance (Section 3) and Causal Mechanical (Section 4) models and the Unificationist models due to Michael Friedman and Philip Kitcher (Section 5). Section 6 provides a summary and discusses directions for future work.

1. Background and Introduction

As will become apparent, "scientific explanation" is a topic that raises a number of interrelated issues. Some background orientation will be useful before turning to the details of competing models.

A presupposition of most recent discussion has been that science sometimes provides explanations (rather than something that falls short of explanation—e.g., "mere description") and that the task of a "theory" or "model" of scientific explanation is to characterize the structure of such explanations. It is thus assumed that there is (at some suitably abstract and general level of description) a single kind or form of explanation that is "scientific". In fact, the notion of "scientific explanation" suggests at least two contrasts—first, a contrast between those "explanations" that are characteristic of "science" and those explanations that are not, and, second, a contrast between "explanation" and something else. However, with respect to the first contrast, the tendency in much of the recent philosophical literature has been to assume that there is a substantial continuity between the sorts of explanations found in science and at least some forms of explanation found in more ordinary non-scientific contexts, with the latter embodying in a more or less inchoate way features that are present in a more detailed, precise, rigorous, etc. form in the former. It is further assumed that it is the task of a theory of explanation to capture what is common to both scientific and at least some more ordinary forms of explanation. These assumptions help to explain (what may otherwise strike the reader as curious) why, as this entry will illustrate, discussions of scientific explanation so often move back and forth between examples drawn from bona fide science (e.g., explanations of the trajectories of the planets that appeal to Newtonian mechanics) and more homey examples involving the tipping over of inkwells.

With respect to the second contrast, most models of explanation assume that it is possible for a set of claims to be true, accurate, supported by evidence, and so on and yet unexplanatory (at least of anything that the typical explanation-seeker is likely to want explained).
For example, all of the accounts of scientific explanation described below would agree that an account of the appearance of a particular species of bird of the sort found in a bird guidebook is, however accurate, not an explanation of anything of interest to biologists (e.g., the development, characteristic features, or behavior of that species). Instead, such an account is “merely descriptive”. However, different models of explanation provide different accounts of what the contrast between the explanatory and merely descriptive consists in. A related point is that while most theorists of scientific explanation have proposed models that are intended to cover at least some cases of explanation that we would not think of as part of science, they have nonetheless assumed some implicit restriction on the kinds of explanation they have sought to reconstruct. It has often been noted that the word “explanation” is used in a wide variety of ways in ordinary English—we speak of explaining the meaning of a word, explaining the background to philosophical theories of explanation, explaining how to bake a pie, explaining why one made a certain decision (where this is to offer a justification) and so on. Although the various models discussed below have sometimes been criticized for their failure to capture all of these forms of “explanation” (see, e.g., Scriven, 1959), it is clear that they were never intended to do this. Instead, their intended explicandum is, very roughly, explanations of why things happen, where the “things” in question can be either particular events or something more general—e.g., regularities or repeatable patterns in nature. Paradigms of this sort of explanation include the explanation for the advance in the perihelion of Mercury provided by General Relativity, the explanation of the extinction of the dinosaurs in terms of the impact of a large asteroid at the end of the Cretaceous period, the explanation provided by the police for why a traffic accident occurred (the driver was drinking and there was ice on the road), and the standard explanation provided in economics textbooks for why monopolies will, in comparison with firms in perfectly competitive markets, raise prices and reduce output. Finally, a few words about the broader epistemological/methodological background to the models described below. Many philosophers think of concepts like “explanation”, “law”, “cause”, and “support for counterfactuals” as part of an interrelated family or circle of concepts that are “modal” in character. For familiar “empiricist” reasons, Hempel and many other early defenders of the DN model regarded these concepts as not well understood, at least prior to analysis. It was assumed that it would be “circular” to explain one concept from this family in terms of others from the same family and that they must instead be explicated in terms of other concepts from outside the modal family—concepts that more obviously satisfied (what were taken to be) empiricist standards of intelligibility and testability. For example, in Hempel's version of the DN model, the notion of a “law” plays a key role in explicating the concept of “explanation”, and his assumption is that laws are just regularities that meet certain further conditions that are also acceptable to empiricists. As we shall see, these empiricist standards (and an accompanying unwillingness to employ modal concepts as primitives) have continued to play a central role in the models of explanation developed subsequent to the DN model.
There are many interesting historical questions about the DN model that remain largely unexplored. Why did “scientific explanation” emerge when it did as a major topic for philosophical discussion? Why were the “logical empiricist” philosophers of science who defended the DN model so willing to accept the idea that science provides “explanations”, given the tendency of many earlier writers in the positivist tradition to think of “explanation” as a rather subjective or “metaphysical” matter and to contrast it unfavorably with “description”, which they regarded as a more legitimate goal for empirical science? And why was discussion, at least initially, organized around “explanation” rather than “causation”, since (as we shall observe) it is often the latter notion that seems to be of central interest in subsequent debates and since the former notion seems (to many contemporary sensibilities) somewhat vague and ill-defined? At least part of the answer to this last question seems to be that (again as explained in more detail below) Hempel and other defenders of the DN model inherited standard empiricist or Humean scruples about the notion of causation. They assumed that causal notions are only (scientifically or metaphysically) acceptable to the extent that it is possible to paraphrase or re-describe them in ways that satisfied empiricist criteria for meaningfulness and legitimacy. One obvious way of doing this was to take causal claims to be tantamount to claims about the obtaining of “regularities” (that is, patterns of uniform association in nature). It is just this idea that is captured by the DN model (see below). Part of the initial appeal of the topic of “scientific explanation” was thus that it functioned as a more respectable surrogate for (or entry point into) the problematic topic of causation[1]. Another motivation was the interest of Hempel and other early defenders of the DN model in forms of explanation such as “functional explanation” (thought to be employed in such special sciences as biology and anthropology) that were not obviously causal. This also made it natural to frame discussion around a broad category of explanation rather than narrower notions of “causation” (cf. Hempel, 1965b). Suggested Readings: Salmon (1989) is a superb critical survey of all the models of scientific explanation discussed in this entry. Pitt (1988) and Ruben (1993) are anthologies that contain a number of influential articles.

2. The DN Model

2.1 The Basic Idea

According to the Deductive-Nomological Model, a scientific explanation consists of two major “constituents”: an explanandum, a sentence “describing the phenomenon to be explained” and an explanans, “the class of those sentences which are adduced to account for the phenomenon” (Hempel and Oppenheim, 1948, reprinted in Hempel, 1965, p. 247). For the explanans to successfully explain the explanandum several conditions must be met. First, “the explanandum must be a logical consequence of the explanans” and “the sentences constituting the explanans must be true” (Hempel, 1965, p. 248). That is, the explanation should take the form of a sound deductive argument in which the explanandum follows as a conclusion from the premises in the explanans. This is the “deductive” component of the model. Second, the explanans must contain at least one “law of nature” and this must be an essential premise in the derivation in the sense that the derivation of the explanandum would not be valid if this premise were removed.
This is the “nomological” component of the model—“nomological” being a philosophical term of art which, suppressing some niceties, means (roughly) “lawful”. In its most general formulation, the DN model is meant to apply both to the explanation of “general regularities” or “laws” such as (to use Hempel and Oppenheim's examples) why light conforms to the law of refraction and also to the explanation of particular events, conceived as occurring at a particular time and place, such as the bent appearance of the partially submerged oars of a rowboat on a particular occasion of viewing. As an additional illustration of a DN explanation of a particular event, consider a derivation of the position of Mars at some future time from Newton's laws of motion, the Newtonian inverse square law governing gravity, and information about the mass of the sun, the mass of Mars and the present position and velocity of each. In this derivation the various Newtonian laws figure as essential premises and they are used, in conjunction with appropriate information about initial conditions (the masses of Mars and the sun and so on), to derive the explanandum (the future position of Mars) via a deductively valid argument. The DN criteria are thus satisfied. (A schematic numerical version of this derivation is sketched below.)

2.2 The Role of Laws in the DN Model

The notion of a sound deductive argument is (arguably) relatively clear (or at least something that can be regarded as antecedently understood from the point of view of characterizing scientific explanation). But what about the other major component of the DN model—that of a law of nature? The basic intuition that guides the DN model goes something like this: Within the class of true generalizations, we may distinguish between those that are only “accidentally true” and those that are “laws”. To use Hempel's examples, the generalization

• (2.2.1) All members of the Greensbury School Board for 1964 are bald

is, if true, only accidentally so. In contrast,

• (2.2.2) All gases expand when heated under constant pressure

is a law. Thus, according to the DN model, the latter generalization can be used, in conjunction with information that some particular sample of gas has been heated under constant pressure, to explain why it has expanded. By contrast, the former generalization (2.2.1) in conjunction with the information that a particular person $n$ is a member of the 1964 Greensbury school board, cannot be used to explain why $n$ is bald. While this example may seem clear enough, what exactly is it that distinguishes true accidental generalizations from laws? This has been the subject of a great deal of philosophical discussion, most of which is beyond the scope of this entry.[2] For reasons explained in Section 1, Hempel assumes that an adequate account must explain the notion of law in terms of notions that lie outside the modal family.[3] In his (1965) he considers a number of familiar proposals having this character[4] and finds them all wanting, remarking that the problem of characterizing the notion of law has proved “highly recalcitrant” (1965, p. 338). It seems fair to say, however, that his underlying assumption is that, at bottom, laws are just exceptionless generalizations describing regularities that meet certain additional distinguishing conditions that he is not at present able to formulate. In subsequent decades, there have been a number of other proposed criteria for lawhood. Although each proposal has its adherents, none has won general acceptance.[5] What implications does this have for the DN model?
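Returning to the Mars illustration above: as a rough numerical rendering of what such a derivation involves (this sketch is not part of the entry; the constants and initial conditions are approximate, the sun is held fixed, and other planets are ignored), one can integrate Newton's laws forward from the initial conditions:

```python
import math

# Newtonian "covering laws" (second law of motion plus the inverse square
# law of gravitation), integrated numerically with velocity Verlet.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg (approximate)
R0 = 2.279e11        # Mars-sun distance, m (circular-orbit approximation)

x, y = R0, 0.0
vx, vy = 0.0, math.sqrt(G * M_SUN / R0)   # circular-orbit speed

def accel(x, y):
    # Acceleration from the inverse square law, sun fixed at the origin.
    r3 = math.hypot(x, y) ** 3
    return -G * M_SUN * x / r3, -G * M_SUN * y / r3

dt = 3600.0                 # one-hour time steps
ax, ay = accel(x, y)
for _ in range(687 * 24):   # roughly one Martian year
    x += vx * dt + 0.5 * ax * dt * dt
    y += vy * dt + 0.5 * ay * dt * dt
    ax2, ay2 = accel(x, y)
    vx += 0.5 * (ax + ax2) * dt
    vy += 0.5 * (ay + ay2) * dt
    ax, ay = ax2, ay2

print(f"predicted position after ~1 Martian year: ({x:.3e}, {y:.3e}) m")
```

The laws here function as the essential premises, the constants and starting state as the initial conditions, and the printed position as the (derived) explanandum.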
One possible assessment is that all the DN model really requires is that there be agreement in a substantial range of particular cases about which generalizations are laws. If such agreement exists, it matters little for the DN model if we are unable to formulate completely general criteria that distinguish between laws and accidentally true generalizations in all possible cases. For example, even without an adequate account of lawhood, we can surely agree that (2.2.2) is a law and (2.2.1) is not and this is all we need to conclude that (2.2.2) can figure in DN explanations while (2.2.1) cannot. Unfortunately, however, matters are not always so straightforward. One important issue raised by the DN model concerns the explanatory status of the so-called special sciences—biology, psychology, economics and so on. These sciences are full of generalizations that appear to play an explanatory role and yet fail to satisfy many of the standard criteria for lawfulness. For example, although Mendel's law of segregation (M) (which states that in sexually reproducing organisms each of the two alternative forms (alleles) of a gene specifying a trait at a locus in a given organism has a 0.5 probability of ending up in a gamete) is widely used in models in evolutionary biology, it has a number of exceptions, such as meiotic drive. A similar point holds for the principles of rational choice theory (such as the generalization that preferences are transitive) which figure centrally in economics. Other widely used generalizations in the special sciences have very narrow scope in comparison with paradigmatic laws, hold only over restricted spatio-temporal regions, and lack explicit theoretical integration. There is considerable disagreement over whether such generalizations are laws. Some philosophers (e.g., Woodward, 2000) suggest that such generalizations satisfy too few of the standard criteria to count as laws but can nevertheless figure in explanations; if so, it apparently follows that we must abandon the DN requirement that all explanations must appeal to laws. Others (e.g., Mitchell, 1997), emphasizing different criteria for lawfulness, conclude instead that generalizations like (M) are laws and hence no threat to the requirement that explanations must invoke laws. In the absence of a more principled account of laws, it is hard to evaluate these competing claims and hence hard to assess the implications of the DN model for the special sciences. More generally, in the absence of a generally accepted account of lawhood, the rationale for the fundamental contrast between laws and non-laws which is at the heart of what the DN model requires is unclear: it is hard to assess the claim that all explanations must cite laws without a clear account of what a law is and what it contributes to successful explanation. At the very least, providing such an account is an important item of unfinished business for advocates of the DN model.

2.3 Inductive Statistical Explanation

The DN model is meant to capture explanation via deduction from deterministic laws and this raises the obvious question of the explanatory status of statistical laws. Do such laws explain at all and if so, what do they explain, and under what conditions? In his (1965) Hempel distinguishes two varieties of statistical explanation.
The first of these, deductive-statistical (DS) explanation, involves the deduction of “a narrower statistical uniformity” from a more general set of premises, at least one of which involves a more general statistical law. Since DS explanation involves deduction of the explanandum from a law, it conforms to the same general pattern as the DN explanation of regularities. However, in addition to DS explanation, Hempel also recognizes a distinctive sort of statistical explanation, which he calls inductive-statistical or IS explanation, involving the subsumption of individual events (like the recovery of a particular person from streptococcus infection) under (what he regards as) statistical laws (such as a law specifying the probability of recovery, given that penicillin has been taken). While the explanandum of a DN or DS explanation can be deduced from the explanans, one cannot deduce that some particular individual, John Jones, has recovered from the above statistical law and the information that he has taken penicillin. At most what can be deduced from this information is that recovery is more or less probable. In IS explanation, the relation between explanans and explanandum is, in Hempel's words, “inductive,” rather than deductive—hence the name inductive-statistical explanation. The details of Hempel's account are complex, but the underlying idea is roughly this: an IS explanation will be good or successful to the extent that its explanans confers high probability on its explanandum outcome. Thus if it is a statistical law that the probability of recovery from streptococcus, given that one has taken penicillin, is high, and Jones has taken penicillin and recovered, this information can be used to provide an IS explanation of Jones' recovery. However if the probability of recovery is low (e.g. less than 0.5), given that Jones has taken penicillin, then, even if Jones recovers, we cannot use this information to provide an IS explanation of his recovery.

2.4 Motivation for the DN Model: Nomic Expectability and a Regularity Account of Causation

Why suppose that all (or even some) explanations have a DN or IS structure? There are two ideas which play a central motivating role in Hempel's (1965) discussion. The first connects the information provided by a DN argument with a certain conception of what it is to achieve understanding of why something happens—it appeals to an idea about the object or point of giving an explanation. Hempel writes:

… a DN explanation answers the question “Why did the explanandum-phenomenon occur?” by showing that the phenomenon resulted from certain particular circumstances, specified in $C_1, C_2, \ldots, C_k$, in accordance with the laws $L_1, L_2, \ldots, L_r$. By pointing this out, the argument shows that, given the particular circumstances and the laws in question, the occurrence of the phenomenon was to be expected; and it is in this sense that the explanation enables us to understand why the phenomenon occurred. (1965, p. 337, italics in original)

One can think of IS explanation as involving a natural generalization of this idea. While an IS explanation does not show that the explanandum-phenomenon was to be expected with certainty, it does the next best thing: it shows that the explanandum-phenomenon is at least to be expected with high probability and in this way provides understanding.
Stated more generally, both the DN and IS models share the common idea that, as Salmon (1989) puts it, “the essence of scientific explanation can be described as nomic expectability—that is expectability on the basis of lawful connections” (1989, p. 57). The second main motivation for the DN/IS model has to do with the role of causal claims in scientific explanation. There is considerable disagreement among philosophers about whether all explanations in science and in ordinary life are causal and also disagreement about what the distinction (if any) between causal and non-causal explanations consists in.[6] Nonetheless, virtually everyone, including Hempel, agrees that many scientific explanations cite information about causes. However, Hempel, along with most other early advocates of the DN model, was unwilling to take the notion of causation as primitive in the theory of explanation—that is, he was unwilling to simply say that $X$ figures in an explanation of $Y$ if and only if $X$ causes $Y$. Instead, adherents of the DN model have generally looked for an account of causation that satisfies the empiricist requirements described in Section 1. In particular, advocates of the DN model have generally accepted a broadly Humean or regularity theory of causation, according to which (very roughly) all causal claims imply the existence of some corresponding regularity (a “law”) linking cause to effect. This is then taken to show that all causal explanations “imply,” perhaps only “implicitly,” that such a law/regularity exists and hence that laws are “involved” in all such explanations, just as the DN model claims. To illustrate this line of argument, consider

• (2.4.1) The impact of my knee on the desk caused the tipping over of the inkwell.

(2.4.1) is a so-called singular causal explanation, advanced by Michael Scriven (1962) as a counterexample to the claim that the DN model describes necessary conditions for successful explanation. According to Scriven, (2.4.1) explains the tipping over of the inkwell even though no law or generalization figures explicitly in (2.4.1) and (2.4.1) appears to consist of a single sentence, rather than a deductive argument. Hempel's response (1965, 360ff) is that the occurrence of “caused” in (2.4.1) should not be left unanalyzed or taken as explanatory just as it stands. Instead (2.4.1) should be understood as “implicitly” or “tacitly” claiming there is a “law” or regularity linking knee impacts to tipping over of inkwells. According to Hempel, it is the implicit claim that some such law holds that “distinguishes” (2.4.1) from “a mere sequential narrative” in which the spilling is said to follow the impact but without any claim of causal connection—a narrative that (Hempel thinks) would clearly not be explanatory. This linking law is the nomological premise in the DN argument that, according to Hempel, is “implicitly” asserted by (2.4.1). There are two related but distinct ways of understanding this argument, both of which are suggested by portions of Hempel's discussion. According to the first, Hempel's claim is that the real underlying structure of (2.4.1) is something like:

• (2.4.2) $(L)$ Whenever knees impact tables on which an inkwell sits and further conditions $K$ are met (where $K$ specifies that the impact is sufficiently forceful, etc.), the inkwell will tip over. (Reference to $K$ is necessary since the impact of knees on tables with inkwells does not always result in tipping.)
• $(I)$ My knee impacted a table on which an inkwell sits and further conditions $K$ are met.

• $(E)$ The inkwell tips over.

Hence, to the extent that it is explanatory, (2.4.1) “implicitly” satisfies the DN/IS requirements after all—it is a DN/IS argument (namely 2.4.2) in disguise. There is a second interpretation of Hempel's argument that, unlike the first interpretation, does not require that we think of the full content of (2.4.2) as somehow already implicit in (2.4.1). Instead, (2.4.2) plays the role of an ideal against which (2.4.1) should be measured. (2.4.2) spells out what information a complete, fully adequate explanation for $E$ would need to contain—information that is present in (2.4.1) only in a partial or incomplete way. On this view of the matter, we think of (2.4.1) as an explanation-sketch (cf. Hempel, 1965b, 423ff) which conveys some of the information conveyed by (2.4.2) or points in the direction of the more complete explanation (2.4.2). Ideally, singular causal explanations like (2.4.1) should be replaced by explicit DN explanations like (2.4.2). On either interpretation, however, the basic idea is that a proper explication of the role of causal claims in explanation leads, via a Humean or regularity theory of causation, to the conclusion that, at least ideally, explanations should satisfy the DN/IS model. Let us call this line of argument the “hidden structure” argument in recognition of the role it assigns to a hidden (or at least non-explicit) DN structure that is claimed to be associated with (2.4.1). This strategy will be examined in section 2.6, but let me first comment on a feature of the discussion so far that may seem puzzling. The boundaries of the category “scientific explanation” are far from clear, but while (2.4.1) is arguably an explanation, it is not what one usually thinks of as “science”—instead it is a claim from “ordinary life” or “common sense”. This raises the question of why adherents of the DN/IS model don't simply respond to the alleged counterexample (2.4.1) by denying that it is an instance of the category “scientific explanation”—that is, by claiming that the DN/IS model is not an attempt to reconstruct the structure of explanations like (2.4.1) but is rather only meant to apply to explanations that are properly regarded as “scientific”. The fact that this response is not often adopted by advocates of the DN model is an indication of the extent to which, as noted in section 1, it is implicitly assumed in most discussions of scientific explanation that there are important similarities or continuities in structure between explanations like (2.4.1) and explanations that are more obviously scientific and that these similarities should be captured by some common account that applies to both. Indeed, it is a striking feature not just of Hempel (1965) but of many other treatments of scientific explanation that much of the discussion in fact focuses on “ordinary life” singular causal explanations similar to (2.4.1), the tacit assumption being that conclusions about the structure of such explanations have fairly direct implications for understanding explanation in science.

2.5 Explanatory Understanding and Nomic Expectability: Counterexamples to Sufficiency

As explained above, examples like (2.4.1) are potential counterexamples to the claim that the DN model provides necessary conditions for explanation.
There are also a number of well-known counterexamples to the claim that the DN model provides sufficient conditions for successful scientific explanation. Here are two illustrations. Explanatory Asymmetries. There are many cases in which a derivation of an explanandum $E$ from a law $L$ and initial conditions $I$ seems explanatory but a “backward” derivation of $I$ from $E$ and the same law $L$ does not seem explanatory, even though the latter, like the former, appears to meet the criteria for successful DN explanation. For example, one can derive the length $s$ of the shadow cast by a flagpole from the height $h$ of the pole and the angle $\theta$ of the sun above the horizon and laws about the rectilinear propagation of light. This derivation meets the DN criteria and seems explanatory. On the other hand, a derivation (2.5.1) of $h$ from $s$ and $\theta$ and the same laws also meets the DN criteria but does not seem explanatory. Examples like this suggest that at least some explanations possess directional or asymmetric features to which the DN model is insensitive. Explanatory Irrelevancies. A derivation can satisfy the DN criteria and yet be a defective explanation because it contains irrelevancies besides those associated with the directional features of explanation. Consider an example due to Wesley Salmon (Salmon, 1971, p. 34):

• (2.5.2) $(L)$ All males who take birth control pills regularly fail to get pregnant

• $(K)$ John Jones is a male who has been taking birth control pills regularly

• $(E)$ John Jones fails to get pregnant

It is arguable that $(L)$ meets the criteria for lawfulness imposed by Hempel and many other writers. (If one wants to deny that $L$ is a law one needs some principled, generally accepted basis for this judgment and, as explained above, it is unclear what this basis is.) Moreover, (2.5.2) is certainly a sound deductive argument in which $L$ occurs as an essential premise. Nonetheless, most people judge that $(L)$ and $(K)$ are no explanation of $E$. There are many other similar illustrations. For example (Kyburg 1965), it is presumably a law (or at least an exceptionless, counterfactual-supporting generalization) that all samples of table salt that have been hexed by being touched with the wand of a witch dissolve when placed in water. One may use this generalization as a premise in a DN derivation which has as its conclusion that some particular hexed sample of salt has dissolved in water. But again the hexing is irrelevant to the dissolving and such a derivation is no explanation. One obvious diagnosis of the difficulties posed by examples like (2.5.1) and (2.5.2) focuses on the role of causation in explanation. According to this analysis, to explain an outcome we must cite its causes and (2.5.1) and (2.5.2) fail to do this. As Salmon (1989, p. 47) puts it, “a flagpole of a certain height causes a shadow of a given length and thereby explains the length of the shadow”. By contrast, “the shadow does not cause the flagpole and consequently cannot explain its height”. Similarly, taking birth control pills does not cause Jones' failure to get pregnant and this is why (2.5.2) fails to be an acceptable explanation. On this analysis, what (2.5.1) and (2.5.2) show is that a derivation can satisfy the DN criteria and yet fail to identify the causes of an explanandum—when this happens the derivation will fail to be explanatory.
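The formal symmetry behind the flagpole example is easy to exhibit. In the sketch below (the numbers are invented purely for illustration), the same trigonometric relation $s = h/\tan\theta$ licenses a deduction in either direction, even though only the pole-to-shadow direction strikes us as explanatory:

```python
import math

theta = math.radians(40.0)  # sun's elevation above the horizon (made up)
h = 12.0                    # flagpole height in metres (made up)

# "Forward" derivation: shadow length from pole height plus the law.
s = h / math.tan(theta)

# "Backward" derivation: pole height from shadow length, same law.
h_back = s * math.tan(theta)

print(f"shadow s = {s:.2f} m; height recovered from shadow = {h_back:.2f} m")
```

Both computations are deductively impeccable and invoke the same law, which is exactly why the asymmetry must come from somewhere other than the DN criteria.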
As explained above, advocates of the DN model would not regard this diagnosis as very illuminating, unless accompanied by some account of causation that does not simply take this notion as primitive. (Salmon in fact provides such an account, which we will consider in Section 4.) We should note, however, that an apparent lesson of (2.5.1) and (2.5.2) is that the regularity account of causation favored by DN theorists is at best incomplete: the occurrence of $c$, $e$ and the existence of some regularity or law linking them (or $x$'s having property $P$ and $x$'s having property $Q$ and some law linking these) is not a sufficient condition for the truth of the claim that $c$ caused $e$ or $x$'s having $P$ is causally or explanatorily relevant to $x$'s having $Q$. More generally, if the counterexamples (2.5.1) and (2.5.2) are accepted, it follows that the DN model fails to state sufficient conditions for explanation. Explaining an outcome isn't just a matter of showing that it is nomically expectable. There are two possible reactions one might have to this observation. One is that the idea that explanation is a matter of nomic expectability is correct as far as it goes, but that something more is required as well. According to this assessment, the DN/IS model does state a necessary condition for successful explanation and, moreover, a condition that is a non-redundant part of a set of conditions that are jointly sufficient for explanation. However, some other, independent feature, $X$ (which will account for the directional features of explanation and ensure the kind of explanatory relevance that is apparently missing in the birth control example) must be added to the DN model to achieve a successful account of explanation. The idea is thus that Nomic Expectability $+\ X = $ Explanation. Something like this idea is endorsed by the unificationist models of explanation developed by Friedman (1974) and Kitcher (1989), which are discussed in Section 5 below. A second, more radical possible conclusion is that the DN account of the goal or rationale of explanation is mistaken in some much more fundamental way and that the DN model does not even state necessary conditions for successful explanation. As noted above, unless the hidden structure argument is accepted, this conclusion is strongly suggested by examples like (2.4.1) (“The impact of my knee caused the tipping over of the inkwell”) which appear to involve explanation without the explicit citing of a law or a deductive structure. To assess whether the DN/IS model provides necessary conditions for explanation, we thus must consider the hidden structure strategy in more detail.

2.6 The Hidden Structure Strategy

It might seem that the contention of the hidden structure strategy that singular causal explanations like (2.4.1) are implicit DN/IS explanations or sketches of such explanations is at best relevant to the question of whether the DN/IS model provides an adequate reconstruction of this particular sort of explanation. In fact, however, Hempel's strategy of treating explanations as devices for conveying information, but in a “partial” or “incomplete” way, about underlying “ideal” explanations of a prima-facie quite different form that are at least partly epistemically hidden from those who use the original, non-ideal explanation has continued to be very popular in recent theorizing about scientific explanation.
This strategy forms the basis, for example, for Peter Railton's (1978, 1981) contrast between an “ideal explanatory text” which contains all of the causal and nomological information relevant to some outcome of interest and the “non-ideal” explanations like (2.4.1) that we actually give. According to Railton, the latter provide “explanatory information” in virtue of conveying information about some limited portion or aspect of the ideal text and are explanatory in virtue of doing so. The hidden structure strategy also plays an important role in the unificationist account of explanation developed by Philip Kitcher (1989) who likewise insists we must “distinguish between what is said on an occasion in which explanatory information is given and the ideal underlying explanation” (Kitcher, 1989, p. 414). Indeed, any account of explanation that, like Kitcher's unificationist model, insists that laws (or generalizations of considerable generality) and deductive structure are necessary conditions for successful explanation will need to appeal to something like the hidden structure strategy since it is generally accepted that there are many apparent explanations that do not conform to such conditions in their overt structure. Although the hidden structure strategy deserves more attention than it can receive here, several points seem clear. First, the notion of one explanation “conveying information about” another “underlying” explanation requires considerable spelling out. Depending on what “underlying” is understood to mean, it is arguable that there are many explanations underlying (2.4.1)—(i) the explanation (2.4.2), assuming that condition $K$ can be specified in a non-trivial way, (ii) an explanation at the level of classical physics that makes reference to laws governing inelastic collisions, the behavior of liquids when not confined to containers, and so on, and (iii) an explanation in which the behavior of the whole system is characterized in terms of some more fundamental physical theory (quantum mechanics, superstring theory etc.). Are all of these explanations implicit in (2.4.1) or does (2.4.1) convey partial information about all of them? In what sense of “implicit” or “conveys information about” could this possibly be true? Railton (1981) suggests that an explanatory claim provides information about an underlying ideal text if the former reduces uncertainty about some of the properties of the text, in the sense of ruling in or out various possibilities concerning its structure. As Railton recognizes, this proposal has many counterintuitive consequences. To use Railton's own example, “the relevant ideal text contains more than $10^2$ words in English”, if true, counts as an explanation for an episode of radioactive decay (1981, p. 246). Similarly, the claim that $X$ and $Y$ are correlated will count as a partial explanation of $X$ and $Y$ on the plausible assumption that this claim conveys the information that one of three possibilities is likely to be true—either $X$ causes $Y$ or $Y$ causes $X$ or they have a common cause—and thus reduces uncertainty about the contents of the ideal underlying text. This contrasts with the widespread judgment that correlations in themselves are not explanatory. Indeed, on a view like Railton's, even the claim that some outcome has no causes or is governed by no laws counts as an “explanation” of that outcome, supposing that claim is true.
In fact, such a claim is apparently maximally explanatory, since it conveys everything that there is to be said about the ideal explanatory text associated with that event. Examples like these suggest that not every claim that reduces uncertainty about the contents of an ideal explanatory text should be regarded as itself explanatory—such a view allows too much to count as an explanation. Is it plausible to regard the text that contains all of the full details of causal and nomological information relevant to some outcome as at least an “ideal” against which various candidate explanations of that outcome are to be judged? Suppose we are presented with an explanation from economics or psychology that does not appeal to any generalization that we are prepared to count as a law but that underlying this “non-ideal” explanation is some incredibly complex set of facts described in terms of classical mechanics and electromagnetism, along with the relevant laws of these theories. If, as almost certainly will be the case, this underlying “explanation” is computationally intractable and full of irrelevant detail (see section 4 below for more on what this might mean), one might wonder in what sense it is an ideal against which the original explanation should be measured. Will the economics explanation really be better to the extent that it conveys as much information as possible about these underlying details? Finally, consider the connection between explanation and understanding. One ordinarily thinks of an explanation as something that provides understanding. Relatedly, part of the task of a theory of explanation is to identify those structural features of explanations (or the information they convey) in virtue of which they provide understanding. For example, as noted above, the DN model connects understanding with the provision of information about nomic expectability—the idea is that understanding why an outcome occurs is a matter of seeing that it was to be expected on the basis of a law. The problem this raises for the hidden structure strategy is that the information associated with the hidden structure alleged to underlie “non-ideal” explanations like (2.4.1) is typically unknown or epistemically inaccessible to those who use the explanation. It is hard to see how this structure or information can contribute to understanding if it is epistemically hidden in this way. For example, it seems plausible that many (if not almost all) users of (2.4.1) (both those who might offer it as an explanation and those recipients who take it to provide understanding) are unaware of the DN structure that underlies it—indeed it is plausible that many users lack the notion of a law of nature and of a deductively valid argument and hence any notion that there is any (unknown) DN argument underlying (2.4.1). If this is the case, how can the mere obtaining of this DN structure, independently of anyone's awareness of its existence, function so as to provide understanding when (2.4.1) is used? Instead, it seems that the features of (2.4.1) that endow it with explanatory import—that make it an explanation—must be features that can be known or grasped or recognized by those who use the explanation. A similar point will hold for many other candidate explanations that fail to conform to the DN requirements, such as explanations from sciences like economics and psychology that seem to lack laws. What can we conclude from this discussion of the hidden structure strategy?
If the strategy fails, there will be a large number of apparent explanations that fail to satisfy the necessary conditions for explanation imposed by the DN/IS model. On the other hand, it is possible that there are ways of developing the hidden structure strategy that respond adequately to the difficulties described above. If so, the idea that the DN/IS requirements are at least necessary conditions for ideal explanation may be defensible after all, although the counterexamples to the sufficiency of the model noted in Section 2.5 will remain. Suggested Readings. The most authoritative and comprehensive statement of the DN and IS models is probably Hempel 1965b. This is reprinted in Hempel, 1965a, along with a number of other papers that touch on various aspects of the problem of scientific explanation. In addition to the references cited in this section, Salmon, 1989, pp. 46ff describes a number of well-known counterexamples to the DN/IS models and discusses their significance.

3. The SR Model

3.1 The Basic Idea

Much of the subsequent literature on explanation has been motivated by attempts to capture the features of causal or explanatory relevance that appear to be left out of examples like (2.5.1) and (2.5.2), typically within the empiricist constraints described above. Wesley Salmon's statistical relevance (or SR) model (Salmon, 1971) is a very influential attempt to capture these features in terms of the notion of statistical relevance or conditional dependence relationships. Given some class or population $A$, an attribute $C$ will be statistically relevant to another attribute $B$ if and only if $P(B \mid A.C) \ne P(B \mid A)$—that is, if and only if the probability of $B$ conditional on $A$ and $C$ is different from the probability of $B$ conditional on $A$ alone. The intuition underlying the SR model is that statistically relevant properties (or information about statistically relevant relationships) are explanatory and statistically irrelevant properties are not. In other words, the notion of a property making a difference for an explanandum is unpacked in terms of statistical relevance relationships. To illustrate this idea, suppose that in the birth control pills example (2.5.2) the original population $T$ includes both genders. Then

\begin{align} P(\text{Pregnancy} &\mid T.\text{Male.Takes birth control pills}) \\ &= P(\text{Pregnancy} \mid T.\text{Male}) \\ &= 0 \end{align}

\begin{align} P(\text{Pregnancy} &\mid T.\text{Female}.\text{Takes birth control pills}) \\ &\ne P(\text{Pregnancy} \mid T.\text{Female}) \end{align}

assuming that not all women in the population take birth control pills. In other words, if you are a male in this population, taking birth control pills is statistically irrelevant to whether you become pregnant, while if you are a female it is relevant. In this way we can capture the idea that taking birth control pills is explanatorily irrelevant to pregnancy among males but not among females (a toy computation below makes this concrete). To characterize the SR model more precisely we need the notion of a homogeneous partition. A homogeneous partition of $A$ is a set of subclasses or cells $C_i$ of $A$ that are mutually exclusive and exhaustive, where $P(B \mid A.C_i) \ne P(B \mid A.C_j)$ for all $C_i \ne C_j$ and where no further statistically relevant partition of any of the cells $A.C_i$ can be made with respect to $B$—that is, there are no additional attributes $D_k$ in $A$ such that $P(B \mid A.C_i) \ne P(B \mid A.C_i.D_k)$.
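Since the relevance test is just an inequality between conditional frequencies, it can be checked mechanically. The following toy computation (population counts invented purely for illustration) reproduces the verdicts just described for the birth control example:

```python
from collections import Counter

# A made-up population for the birth control pills example; the counts
# are invented solely to illustrate the statistical relevance test.
counts = Counter({
    # (sex, takes_pills, pregnant): number of individuals
    ("male",   True,  False): 100,
    ("male",   False, False): 400,
    ("female", True,  False): 200,
    ("female", False, True):   60,
    ("female", False, False): 240,
})

def prob(pred, given=lambda k: True):
    """Conditional relative frequency P(pred | given) in the population."""
    total = sum(n for k, n in counts.items() if given(k))
    hits = sum(n for k, n in counts.items() if given(k) and pred(k))
    return hits / total

pregnant = lambda k: k[2]
for sex in ("male", "female"):
    base = prob(pregnant, lambda k, s=sex: k[0] == s)
    pills = prob(pregnant, lambda k, s=sex: k[0] == s and k[1])
    print(f"{sex}: P(preg | pills) = {pills:.2f}, P(preg) = {base:.2f}, "
          f"relevant: {pills != base}")
```

For males the two conditional frequencies coincide (pill-taking is irrelevant); for females they differ, exactly as the text claims.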
On the SR model, an explanation of why some member $x$ of the class characterized by attribute $A$ has attribute $B$ consists of the following information:

(i) the prior probability of $B$ within $A$: $P(B \mid A) = p$;

(ii) a homogeneous partition of $A$ with respect to $B$, $(A.C_1, \ldots, A.C_n)$, together with the probability of $B$ within each cell of the partition: $P(B \mid A.C_i) = p_i$; and

(iii) the cell of the partition to which $x$ belongs.

To employ one of Salmon's examples, suppose we want to construct an SR explanation of why $x$, who has a strep infection ($S$), recovers quickly ($Q$). Let $T$ $(-T)$ = according to whether $x$ is (is not) treated with penicillin, and $R$ $(-R)$ = according to whether the subject has (does not have) a penicillin-resistant strain. Assume for the sake of argument that no other factors are relevant to quick recovery. There are four possible combinations of these properties: $T.R$, $-T.R$, $T.-R$, $-T.-R$, but let us assume that

\begin{align} P&(Q \mid S.T.R) \\ &= P(Q \mid S.-T.R) \\ &= P(Q \mid S.-T.-R) \\ &\ne P(Q \mid S.T.-R). \end{align}

That is, the probability of quick recovery, given that one has strep, is the same for those who have the resistant strain regardless of whether or not they are treated and also the same for those who have not been treated. By contrast, the probability of recovery is different (presumably greater) among those with strep who have been treated and do not have the resistant strain. In this case $[S.(T.R \vee -T.R \vee -T.-R)]$, $[S.T.-R]$ is a homogeneous partition of $S$ with respect to $Q$. The SR explanation of $x$'s recovery will consist of a statement of the probability of quick recovery among all those with strep (this is (i) above), a statement of the probability of recovery in each of the two cells of the above partition ((ii) above), and the cell to which $x$ belongs, which is $S.T.-R$ ((iii) above). Intuitively, the idea is that this information tells us about the relevance of each of the possible combinations of the properties $T$ and $R$ to quick recovery among those with strep and is explanatory for just this reason.

3.2 The SR Model and Low Probability Events

The SR model has a number of distinctive features that have generated substantial discussion. First, note that according to the SR model, and in contrast to the DN/IS model, an explanation is not an argument—either in the sense of a deductively valid argument in which the explanandum follows as a conclusion from the explanans or in the sense of an inductive argument in which the explanandum follows with high probability from the explanans, as in the case of IS explanation. Instead, an explanation is an assembly of information that is statistically relevant to an explanandum. Salmon argues (and takes the birth control example (2.5.2) to illustrate) that the criteria that a good argument must satisfy (e.g., criteria that ensure deductive soundness or some inductive analogue) are simply different from those a good explanation must satisfy. Among other things, as Salmon puts it, “irrelevancies [are] harmless in arguments but fatal in explanations” (1989, p. 102). As explained above, in associating successful explanation with the provision of information about statistical relevance relationships, the SR model attempts to accommodate this observation.
A second, closely related point is that the SR model departs from the IS model in abandoning the idea that a statistical explanation of an outcome must provide information from which it follows that the outcome occurred with high probability. As the reader may check, the statement of the SR model above imposes no such high probability requirement; instead, even very unlikely outcomes will be explained as long as the criteria for SR explanation are met. Suppose that, in the above example, the probability of quick recovery from strep, given treatment and the presence of a non-resistant strain, is rather low (e.g., 0.2). Nonetheless, if the criteria (i)–(iii) above—a homogeneous partition with correct probability values for each cell in the partition—are satisfied, we may use this information to explain why $x$, who had a non-resistant strain of strep and was treated, recovered quickly. Indeed, according to the SR model, we may explain why some $x$ which is $A$ is $B$, even if the conditional probability of $B$ given $A$ and the cell $C_i$ to which $x$ belongs $(p_i = P(B \mid A.C_i))$ is less than the prior probability $(p = P(B \mid A))$ of $B$ in $A$. For example, if the prior probability of quick recovery among all those with any form of strep is 0.5 and the probability of quick recovery of those with a resistant strain who are untreated is 0.1, we may nonetheless explain why $y$, who meets these last conditions $(-T.R)$, recovered quickly (assuming he did) by citing the cell to which he belongs (the fact that he had the resistant strain and was untreated), the probability of recovery given that he falls in this cell, and the other sort of information described above. More generally, what matters on the SR model is not whether the value of the probability of the explanandum-outcome is high or low (or even high or low in comparison with its prior probability) but rather whether the putative explanans cites all and only statistically relevant factors and whether the probabilities it invokes are correct. One consequence of this, which Salmon endorses while acknowledging that many will regard it as unintuitive, is that on the SR model, the same explanans $E$ may explain both an explanandum $M$ and explananda that are inconsistent with $M$, such as $-M$. For example, the same explanans will explain both why a subject with strep and certain other properties (e.g., $T$ and $-R$) recovers quickly, if he does, and also why he does not recover if he does not. By contrast, on the DN or IS models, if $E$ explains $M$, $E$ cannot also explain $-M$. The intuition that, contrary to the IS model, the probability that a candidate explanans assigns to an explanandum-outcome should not matter for the goodness of the explanation it provides can be motivated in the following way. Consider a genuinely indeterministic coin which is biased strongly $(p = 0.9)$ toward heads when tossed. Suppose that if it is not tossed the coin has a probability of 0.5 of being in either the heads or tails position and that whether or not the coin is tossed is the only factor that is statistically relevant to whether it is heads or tails. According to the IS model, if the coin is tossed and comes up heads, we can explain this outcome by appealing to the fact that the coin was tossed (since under this condition the probability of heads is high) but if the coin is tossed and comes up tails we cannot explain this outcome, since its probability is low. The contrary intuition underlying the SR model is that we understand both outcomes equally well.
The bias of the coin and the fact that the coin has been tossed are the only factors relevant to either outcome and those factors are common to both outcomes—once we have cited the toss (and specified the probability values for heads and tails on tossing), we have left nothing out that influences either outcome. Similarly, Salmon argues, if it is really true that the partition in the example involving quick recovery from strep is objectively homogeneous—if there are no other factors that are statistically relevant to quick recovery besides whether the subject has been treated and has a resistant strain—then once we have specified the probability of quick recovery under all combinations of these factors, and the combination of factors possessed by the subject whose recovery (or not, as the case may be) we want to explain, we have specified all information relevant to recovery and in this sense fully explained the outcome for the subject.[7]

3.3 What Do Statistical Theories Explain?

In assessing these claims, it will be useful to take a step back and ask just what it is that these competing models of statistical explanation (Hempel's IS model and Salmon's SR model) are intended to be reconstructions of. In the literature on this topic two classes of examples or applications figure prominently. First, there are examples drawn from quantum mechanics (QM). Suppose, for example, a particle has a probability $p$ that is strictly between 0 and 1 of penetrating a potential barrier. Models of statistical explanation assume that if the particle does penetrate the barrier, QM explains this outcome—the IS and SR models are intended to capture the structure of such explanations. Second, there are examples drawn from biomedical (or epidemiological) and social scientific applications—recovery from strep or, to cite one of Salmon's extended illustrations (Salmon, 1971), the factors relevant to juvenile delinquency in teen-age boys. This is, to say the least, a heterogeneous class of examples. In the case of QM, the usual understanding is that the various no-hidden-variable results establish that any empirically adequate theory of quantum mechanical phenomena must be irreducibly indeterministic. It is thus plausible that when we use the Schrödinger equation to derive the probability that a particle with a certain kinetic energy will tunnel through a potential barrier of a certain shape, this representation satisfies the SR model's “objective homogeneity” condition—there are no additional omitted variables that would affect the probability of barrier penetration. By contrast, it seems quite unlikely that this homogeneity condition will be satisfied in most (indeed, in any) of the biomedical and sociological illustrations that have figured in the literature on statistical explanation. In the case of recovery from strep, for example, it is very plausible that there are many other factors besides the two mentioned above that affect the probability of recovery—these additional factors will include the state of the subject's immune system, various features of the subject's general level of health, the precise character of the strain of disease to which the subject is exposed (resistant versus non-resistant is almost certainly too coarse-grained a dichotomy) and so on. Similarly for episodes of juvenile delinquency.
In these cases, in contrast to the cases from quantum mechanics, we lack a theory or body of results that delimits the factors that are potentially relevant to the probability of the outcome that interests us. Thus, in realistic examples of assemblages of statistically relevant factors from biomedicine and social science, the objective homogeneity condition is unlikely to be satisfied, or, in any practical sense, satisfiable. A related difference concerns the way in which statistical evidence figures in these two sorts of applications. Some quantum mechanical phenomena such as radioactive decay are irreducibly indeterministic. By contrast, in the biomedical and social scientific applications, while the relevant evidence is “statistical”, there is typically no corresponding assumption that the phenomena of interest are irreducibly indeterministic. This is particularly clear in connection with the social scientific examples (such as risk factors for juvenile delinquency) that Salmon discusses. Here the relevant methodology involves so-called causal modeling or structural equation techniques. At least on the most straightforward way of applying such procedures, the equations that govern whether a particular individual becomes a juvenile delinquent are (if interpreted literally) deterministic. According to such approaches, the phenomena being modeled look as though they are indeterministic because some of the variables which are relevant to their behavior, the influence of which is summarized by a so-called error term, are unknown or unmeasured. Statistical information about the incidence of juvenile delinquency among individuals in various conditions plays the role of evidence that is used to estimate parameters (the coefficients) in the deterministic equations that are taken to describe the processes governing the onset of delinquency. A similar point holds for at least many biomedical examples.[8] Several preliminary conclusions are suggested by these observations. First, it is far from obvious that we should try to construct a single, unified model of statistical explanation that applies to both quantum mechanics and macroscopic phenomena like delinquency or recovery from infection. Second, and relatedly, while explanation in QM arguably satisfies the objective homogeneity condition, it is dubious that the sorts of “statistical explanations” found in the social and biomedical sciences do so. In other words, if an objective homogeneity condition is imposed on statistical explanation, it is not clear that there will be any examples of successful statistical explanation outside of quantum mechanics. With these observations in mind, let us revisit the question of what is explained by statistical theories, whether quantum mechanical or macroscopic. As we have seen, both Hempel and Salmon, as well as most subsequent contributors to the literature on statistical explanation, have tended to assume that statistical theories that assign a probability to some outcome strictly between 0 and 1 should nonetheless be interpreted as explaining that outcome. Given this common starting point, Salmon is quite persuasive in arguing that it is arbitrary to hold, as Hempel does, that only individual outcomes with high probability can be explained. But why should we accept the starting point? Why not take Salmon's argument instead to be a reason for rejecting the idea that statistical theories explain individual outcomes, whether of high or low probability?
If we take this view, we need not conclude that a theory like QM is unexplanatory. Instead, we may take the explananda of QM to be facts about the probabilities or expectation values of outcomes rather than individual outcomes themselves. On this view, the explananda that are explained by QM are a (proper) subset of those that can be derived from it—at least in this respect, the explanations provided by QM are like DS explanations in structure. Woodward (1989) argues that this construal allows us to say all that we might legitimately wish to say about the explanatory virtues of QM. If this is correct, there is no obvious need for a separate theory of statistical explanation of individual outcomes of the sort that Hempel and Salmon sought to devise (but see footnote 7). In the case of juvenile delinquency and causal modeling techniques it is, if anything, even more intuitive that what is being explained is not, e.g., why some particular boy, Albert, became a juvenile delinquent, but rather something more general—e.g., why the expected incidence of delinquency is higher among certain subgroups than others. Again such explananda are deducible from the system of equations used to model juvenile delinquency. Taking this view of what is explained by statistical theories allows us to avoid various unintuitive consequences of Hempel's model (e.g., that high probability but not low probability outcomes are explained) and of Salmon's model (e.g., the same explanans $E$ explains both $M$ and $-M$). At the very least, those who have sought to construct models of statistical explanation of individual outcomes need to provide a more detailed elucidation of why such models are needed and of the features of scientific theorizing they are designed to capture.[9]

3.4 Causation and Statistical Relevance Relationships

As we have just seen, the SR model raises a number of interesting questions about the statistical explanation of individual outcomes—questions that are important independently of the details of the SR model itself. This section will abstract away from such questions and focus instead on the root motivation for the SR model. We may take this to consist of two ideas: (i) explanations must cite causal relationships and (ii) causal relationships are captured by statistical relevance relationships. Even if (i) is accepted, a fundamental problem with the SR model is that (ii) is false—as a substantial body of work[10] has made clear, causal relationships are greatly underdetermined by statistical relevance relationships. Consider another example from Salmon (1971): a system in which atmospheric pressure $A$ is a common cause of the occurrence of a storm $S$ and the reading of a barometer $B$ with no causal relationship between $B$ and $S$. Salmon claims that in such a system $B$ and $S$ will be correlated but that $B$ is statistically irrelevant to $S$ given $A$—i.e. $P(S \mid A.B) = P(S \mid A)$. By contrast (Salmon claims), $A$ remains relevant to $S$ given $B$—i.e., $P(S \mid A.B) \ne P(S \mid B)$. Similarly, $S$ is irrelevant to $B$ given $A$ but $A$ remains relevant to $B$ given $S$. In this way, Salmon's SR model attempts to capture the idea that $A$ is explanatorily (and causally) relevant to $S$ while $B$ is not and that $A$ is explanatorily and causally relevant to $B$ while $S$ is not.
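These screening-off claims are easy to verify numerically. Here is a small simulation (a sketch with invented conditional probabilities, not Salmon's own numbers) of the common-cause structure just described:

```python
import random

random.seed(1)

# Common-cause structure: atmospheric pressure A causes both the
# barometer reading B and the storm S; there is no direct B -> S link.
def sample():
    a = random.random() < 0.3                    # low pressure?
    b = random.random() < (0.9 if a else 0.1)    # barometer falls?
    s = random.random() < (0.8 if a else 0.05)   # storm occurs?
    return a, b, s

data = [sample() for _ in range(200_000)]

def p(event, cond=lambda t: True):
    """Relative frequency of `event` among samples satisfying `cond`."""
    sub = [t for t in data if cond(t)]
    return sum(1 for t in sub if event(t)) / len(sub)

storm = lambda t: t[2]
print("P(S | B)   =", round(p(storm, lambda t: t[1]), 3))           # raised
print("P(S)       =", round(p(storm), 3))                           # B relevant
print("P(S | A.B) =", round(p(storm, lambda t: t[0] and t[1]), 3))  # ~ P(S | A)
print("P(S | A)   =", round(p(storm, lambda t: t[0]), 3))           # screened off
```

Conditioning on the barometer alone raises the probability of a storm, but once the common cause $A$ is conditioned on, $B$ makes no further difference, exactly as Salmon's analysis requires.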
These contentions about the connection between causal claims and statistical relevance relations are consequences of a more general principle called the Causal Markov condition, which has been extensively discussed in the recent literature on causation.[11] A set of variables standing in a causal relationship and an associated probability distribution over those variables satisfy the Causal Markov condition if and only if, conditional on its direct causes, every variable is independent of every other variable except possibly for its effects. Two relevant points have emerged from discussion of this condition. The first, which was in effect noted by Salmon himself in work subsequent to his (1971), is that there are circumstances in which the Causal Markov condition fails and hence in which causal claims do not imply the screening off relationships described above. This can happen, for example, if the variables to which the condition is applied are characterized in an insufficiently fine-grained way.[12] The second and more fundamental observation is that, depending on the details of the case, many different sets of causal relationships may be compatible with the same statistical relevance relationships, even assuming that the Causal Markov condition is satisfied. For example, a structure in which $B$ causes $A$ which in turn causes $S$ will, if we assume the Causal Markov condition (that is, make assumptions like Salmon's connecting causation and statistical relevance relationships), lead to exactly the same statistical relevance relationships as in the example in which $A$ is a common cause of $B$ and $S$. Similarly if $S$ causes $A$ which in turn causes $B$. In structures with more variables, this underdetermination of causal relationships by statistical relevance relationships may be far more extreme. Thus a list of statistical relevance relationships, which is what the SR model provides, need not tell us which causal relationships are operative. To the extent that explanation has to do with the identification of the causal relationships on which an explanandum-outcome depends, the SR model fails to fully capture these.

Selected Readings: Salmon, 1971a provides a detailed statement and defense of the SR model. This essay, as well as papers by Jeffrey (1969) and Greeno (1970) which defend views broadly similar to the SR model, are collected in Salmon, 1971b. Additional discussion of the model as well as a more recent characterization of “objective homogeneity” can be found in Salmon, 1984. Cartwright, 1979 contains some influential criticisms of the SR model. Theorems specifying the precise extent of the underdetermination of causal claims by evidence about statistical relevance relationships can be found in Spirtes, Glymour and Scheines, 1993, 2000, chapter 4.

4. The Causal Mechanical Model

4.1 The Basic Idea

In more recent work (especially Salmon, 1984) Salmon abandoned the attempt to characterize explanation or causal relationships in purely statistical terms. Instead, he developed a new account which he called the Causal Mechanical (CM) model of explanation—an account which is similar in both content and spirit to so-called process theories of causation of the sort defended by philosophers like Philip Dowe (2000). We may think of the CM model as an attempt to capture the “something more” involved in causal and explanatory relationships over and above facts about statistical relevance, again while attempting to remain within a broadly Humean framework.
The CM model employs several central ideas. A causal process is a physical process, like the movement of a baseball through space, that is characterized by the ability to transmit a mark in a continuous way. (“Continuous” generally, although perhaps not always, means “spatio-temporally continuous”.) Intuitively, a mark is some local modification to the structure of a process—for example, a scuff on the surface of a baseball or a dent in an automobile fender. A process is capable of transmitting a mark if, once the mark is introduced at one spatio-temporal location, it will persist to other spatio-temporal locations even in the absence of any further interaction. In this sense the baseball will transmit the scuff mark from one location to another. Similarly, a moving automobile is a causal process because a mark in the form of a dent in a fender will be transmitted by this process from one spatio-temporal location to another. Causal processes contrast with pseudo-processes, which lack the ability to transmit marks. An example is the shadow of a moving physical object. The intuitive idea is that, if we try to mark the shadow by modifying its shape at one point (for example, by altering a light source or introducing a second occluding object), this modification will not persist unless we continually intervene to maintain it as the shadow occupies successive spatio-temporal positions. In other words, the modification will not be transmitted by the structure of the shadow itself, as it would in the case of a genuine causal process.

We should note for future reference that, as characterized by Salmon, the ability to transmit a mark is clearly a counterfactual notion, in several senses. To begin with, a process may be a causal process even if it does not in fact transmit any mark, as long as it is true that if it were appropriately marked, it would transmit the mark. Moreover, the notion of marking itself involves a counterfactual contrast—a contrast between how a process behaves when marked and how it would behave if left unmarked. Although Salmon, like Hempel, has always been suspicious of counterfactuals, his view at the time that he first introduced the CM model was that the counterfactuals involved in the characterization of mark transmission were relatively unproblematic, in part because they seemed experimentally testable in a fairly direct way. Nonetheless, the reliance of the CM model, as originally formulated, on counterfactuals shows that it does not completely satisfy the Humean strictures described above. In subsequent work, described in Section 4.4 below, Salmon attempted to construct a version of the CM model that completely avoids reliance on counterfactuals.

The other major element in Salmon's model is the notion of a causal interaction. A causal interaction involves a spatio-temporal intersection between two causal processes which modifies the structure of both—each process comes to have features it would not have had in the absence of the interaction. A collision between two cars that dents both is a paradigmatic causal interaction. According to the CM model, an explanation of some event $E$ will trace the causal processes and interactions leading up to $E$ (Salmon calls this the etiological aspect of the explanation), or at least some portion of these, as well as describing the processes and interactions that make up the event itself (the constitutive aspect of explanation). In this way, the explanation shows how $E$ “fit[s] into a causal nexus” (1984, p.9).
The suggestion that explanation involves “fitting” an explanandum into a causal nexus does not give us any very precise characterization of what the relationship between $E$ and other causal processes and interactions must be if information about the latter is to explain $E$. Nonetheless, it seems clear enough how the intuitive idea is meant to apply to specific examples. Suppose that a cue ball, set in motion by the impact of a cue stick, strikes a stationary eight ball with the result that the eight ball is put in motion and the cue ball changes direction. The impact of the stick also transmits some blue chalk to the cue ball which is then transferred to the eight ball on impact. The cue stick, the cue ball, and the eight ball are causal processes, as is shown by the transmission of the chalk mark, and the collision of the cue stick with the cue ball and the collision of the cue and eight balls are causal interactions. Salmon's idea is that citing such facts about processes and interactions explains the motion of the balls after the collision; by contrast, if one of these balls casts a shadow that moves across the other, this will be causally and explanatorily irrelevant to its subsequent motion since the shadow is a pseudo-process.

4.2 The CM Model and Explanatory Relevance

As the cue ball example illustrates, the CM model takes as its paradigms of causal interaction examples such as collisions in which there is “action by contact” and no spatio-temporal gaps in the transmission of causal influence. There is little doubt that explanations in which there are no such gaps (no “action at a distance”) often strike us as particularly satisfying.[13] However, as Christopher Hitchcock shows in an illuminating paper (Hitchcock, 1995), even here the CM model leaves out something important. Consider the usual elementary textbook “scientific explanation” of the motion of the balls in the above example following their collision. This explanation proceeds by deriving that motion from information about their masses and velocities before the collision, the assumption that the collision is perfectly elastic, and the law of the conservation of linear momentum. We usually think of the information conveyed by this derivation as showing that it is the mass and velocity of the balls, rather than, say, their color or the presence of the blue chalk mark, that is explanatorily relevant to their subsequent motion. However, it is hard to see what in the CM model allows us to pick out the linear momentum of the balls, as opposed to these other features, as explanatorily relevant. Part of the difficulty is that to express such relatively fine-grained judgments of explanatory relevance (that it is linear momentum rather than chalk marks that matters) we need to talk about relationships between properties or magnitudes and it is not clear how to express such judgments in terms of facts about causal processes and interactions. Both the linear momentum and the chalk mark communicated to the cue ball by the cue stick are marks transmitted by the spatio-temporally continuous causal process consisting of the motion of the cue ball. Both marks are then transmitted via an interaction to the eight ball. There appears to be nothing in Salmon's notion of mark transmission or the notion of a causal process that allows one to distinguish between the explanatorily relevant momentum and the explanatorily irrelevant blue chalk mark.
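For concreteness, here is a minimal sketch of the textbook derivation just mentioned, for the head-on, equal-mass case; the numerical values are invented for illustration:

```python
# 1-D perfectly elastic collision: cue ball (moving) strikes eight ball (at rest).
# Inputs: masses and pre-collision velocities, plus conservation of linear
# momentum and of kinetic energy (elasticity). Nothing else enters.
m1 = m2 = 0.17            # ball masses in kg (illustrative value)
u1, u2 = 2.0, 0.0         # pre-collision velocities in m/s

# Standard solution of the two conservation equations:
#   m1*u1 + m2*u2 = m1*v1 + m2*v2                (momentum)
#   m1*u1**2 + m2*u2**2 = m1*v1**2 + m2*v2**2    (kinetic energy)
v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
print(v1, v2)             # 0.0 2.0 -- equal masses simply exchange velocities

# Note what is absent: the balls' color and the blue chalk mark never appear
# in the derivation. That is the relevance judgment the CM model, as it
# stands, has no resources to express.
```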
Ironically, as Hitchcock goes on to note, a similar observation may be made about the birth control pills example (2.5.2) originally devised by Salmon to illustrate the failure of the DN model to capture the notion of explanatory relevance. Spatio-temporally continuous causal processes that transmit marks, as well as causal interactions, are at work when male Mr. Jones ingests birth control pills—the pills dissolve, components enter his bloodstream, are metabolized or processed in some way, and so on. Similarly, spatio-temporally continuous causal processes (albeit different processes) are at work when female Ms. Jones takes birth control pills. However, the pills are irrelevant to Mr. Jones' non-pregnancy, and relevant to Ms. Jones' non-pregnancy. Again, it looks as though the relevance or irrelevance of the birth control pills to Mr. or Ms. Jones' failure to become pregnant cannot be captured just by asking whether the processes leading up to these outcomes are causal processes in Salmon's sense. A similar point holds for the hexed salt example (2.6.3)—there are spatio-temporally continuous causal processes running from the witch's wand that touches the salt sample to the individual Na and Cl ions formed when the salt dissolves, but this is not sufficient for the hexing to be causally (or explanatorily) relevant to the dissolving.

A more general way of putting the problem revealed by these examples is that those features of a process $P$ in virtue of which it qualifies as a causal process (ability to transmit mark $M$) may not be the features of $P$ that are causally or explanatorily relevant to the outcome $E$ that we want to explain ($M$ may be irrelevant to $E$, with some other property $R$ of $P$ being the property which is causally relevant to $E$). So while mark transmission may well be a criterion that correctly distinguishes between causal processes and pseudo-processes, it does not, as it stands, provide the resources for distinguishing those features or properties of a causal process that are causally or explanatorily relevant to an outcome from those features that are irrelevant.

4.3 The CM Model and Complex Systems

A second set of worries has to do with the application of the CM model to systems which depart in various respects from simple physical paradigms such as the collision described above. There are a number of examples of such systems. First, there are theories like Newtonian gravitational theory which involve “action at a distance” in a physically interesting sense. Second, there are a number of examples from the literature on causation that do not involve physically interesting forms of action at a distance but which arguably involve causal interactions without intervening spatio-temporally continuous processes or transfer of energy and momentum from cause to effect. These include cases of causation by omission and causation by “double prevention” or “disconnection.”[14] In all these cases, a literal application of the CM model seems to yield the judgment that no explanation has been provided—that Newtonian gravitational theory is unexplanatory and so on. Many philosophers have been reluctant to accept this assessment.
Yet another class of examples that raise problems for the CM model involves putative explanations of the behavior of complex or “higher level” systems—explanations that do not explicitly cite spatio-temporally continuous causal processes involving transfer of energy and momentum, even though we may think that such processes are at work at a more “underlying” level. Most explanations in disciplines like biology, psychology and economics fall under this description, as do a number of straightforwardly physical explanations. As an illustration, suppose that a mole of gas is confined to a container of volume $V_1$, at pressure $P_1$, and temperature $T_1$. The gas is then allowed to expand isothermally into a larger container of volume $V_2$. One standard way of explaining the behavior of the gas—its rate of diffusion and its subsequent equilibrium pressure $P_2$—appeals to the generalizations of phenomenological thermodynamics—e.g., the ideal gas law, Graham's law of diffusion, and so on. Salmon appears to regard putative explanations based on at least the first of these generalizations as not explanatory because they do not trace continuous causal processes—he thinks of the individual molecules as causal processes but not the gas as a whole.[15] However, it is plainly impossible to trace the causal processes and interactions represented by each of the $6 \times 10^{23}$ molecules making up the gas and the successive interactions (collisions) it undergoes with every other molecule. The usual statistical mechanical treatment, which Salmon presumably would regard as explanatory, does not attempt to do this. Instead, it makes certain general assumptions about the distribution of molecular velocities and the forces involved in molecular collisions and then uses these, in conjunction with the laws of mechanics, to derive and solve a differential equation (the Boltzmann transport equation) describing the overall behavior of the gas. This treatment abstracts radically from the details of the causal processes involving particular individual molecules and instead focuses on identifying higher level variables that aggregate over many individual causal processes and that figure in general patterns that govern the behavior of the gas.

This example raises a number of questions. Just what does the CM model require in the case of complex systems in which we cannot trace individual causal processes, at least at a fine-grained level? How exactly does the causal mechanical model avoid the (disastrous) conclusion that any successful explanation of the behavior of the gas must trace the trajectories of individual molecules? Does the statistical mechanical explanation described above successfully trace causal processes and interactions or specify a causal mechanism in the sense demanded by the CM model, and if so, what exactly does tracing causal processes and interactions involve or amount to in connection with such a system? As matters now stand, both the CM model and the process theories of causation that are its more recent descendants are incomplete.
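To see how little machinery the phenomenological route requires, here is the entire derivation of the equilibrium pressure for the isothermal expansion described above, as a sketch with invented numerical values:

```python
# Phenomenological derivation of the gas's equilibrium pressure. All
# numerical values are invented for illustration.
n_moles = 1.0          # amount of gas (mol)
R = 8.314              # gas constant, J / (mol K)
T1 = 300.0             # temperature (K); unchanged in an isothermal expansion
V1, V2 = 0.01, 0.04    # initial and final volumes (m^3)

P1 = n_moles * R * T1 / V1   # ideal gas law: PV = nRT
P2 = n_moles * R * T1 / V2   # same law at the larger volume, same temperature
print(P1, P2)                # P2 = P1 * V1 / V2; no molecular trajectory is traced
```

The derivation invokes only the macroscopic variables and the ideal gas law; nothing about the trajectory of any individual molecule appears in it.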
There is another aspect of this example that is worthy of comment. Even if, per impossibile, an account that traced individual molecular trajectories were to be produced, there are important respects in which it would not provide the sort of explanation of the macroscopic behavior of the gas that we are likely to be looking for—and not just because such an account would be far too complex to be followed by a human mind. There are a very large number of different possible trajectories of the individual molecules, in addition to the trajectories actually taken, that would produce the macroscopic outcome—the final pressure $P_2$—that we want to explain. This information is certainly explanatorily relevant to the macroscopic behavior of the gas and we would like our account of explanation to accommodate this fact. Very roughly, given the laws governing molecular collisions, one can show that almost all (i.e., all except a set of measure zero) of the possible initial positions and momenta consistent with the initial macroscopic state of the gas, as characterized by $P_1$, $T_1$, and $V_1$, will lead to molecular trajectories such that the gas will evolve to the macroscopic outcome in which the gas diffuses to an equilibrium state of uniform density through the chamber at pressure $P_2$. Similarly, there is a large range of different microstates of the gas compatible with each of the various other possible values for the temperature of the gas, and each of these states will lead to a different final pressure $P_{2^*}$. If we just trace the causal processes (in the form of actual molecular trajectories) that lead to $P_2$, as the CM model requires, we will fail to represent or capture this information about the full range of conditions under which $P_2$ and alternatives to it will occur.

A similar point holds for explanations of the behavior of other sorts of complex systems, such as those studied in biology and economics. Consider the standard explanation, in terms of an upward shift of the supply curve, with an unchanged demand curve, for the increase in the price of oranges following a freeze. Underlying the behavior of this market are individual spatio-temporally continuous causal processes and interactions in Salmon's sense—there are a myriad of individual transactions in which money in some form is exchanged for physical goods, all of which involve transfers of matter or energy; there is exchange of information about intentions or commitments to buy or sell at various prices, all of which must take place in some physical medium and involve transfers of energy; and so on. However, it also seems plain that producing a full description of these processes (supposing for the sake of argument that it was possible to do this) will produce little or no insight into why these systems behave as they do. Again, this is not just because any such “explanation” will overwhelm our information processing abilities. It is also the case that a great deal of the information contained in such a description will be irrelevant to the behavior we are trying to explain, for the same reason that a detailed description of the individual molecular trajectories will contain information that is irrelevant to the behavior of the gas.
For example, while the detailed description of the individual causal processes involved in the operation of the market for oranges presumably will describe whether individual consumers purchase oranges by cash, check, or credit card, whether information about the freeze is communicated by telephone or email, and so on, all of this is to a first approximation irrelevant to the equilibrium price—given the supply and demand curves, the equilibrium price will be the same as long as there is a market in which consumers are able to purchase oranges by some means, information about the freeze and about prices is available to buyers and sellers in some form, and so on.[16] Moreover, those factors that are explanatorily relevant to the equilibrium price, such as the shape of the demand and supply curves, are not in any obvious sense themselves connected by spatio-temporally continuous processes to the price (it is unclear what this claim even means), although, as emphasized above, the unknown processes underlying the attainment of equilibrium are presumably spatio-temporally continuous. Again, the issue is how an account like Salmon's can capture this feature of successful explanation of the behavior of complex systems—how the account guides us to find the “right” level of description of the phenomena we are trying to explain. In fact, as the above examples illustrate, the requirements that Salmon imposes on causal processes—and in particular the requirement of spatio-temporal continuity—often seem to lead us away from the right level of description. The level at which the spatio-temporal continuity constraint is most obviously respected (the level at which, e.g., we describe a particular consumer as exchanging cash for oranges or a grower as making an agreement via telephone with a retailer to sell at a certain price) seems to be the wrong level for achieving understanding.

4.4 More Recent Developments

In more recent work (e.g., Salmon, 1994), prompted in part by a desire to avoid certain counterexamples advanced by Philip Kitcher (Kitcher, 1989) to his characterization of mark transmission, Salmon attempted to fashion a theory of causal explanation that completely avoids any appeal to counterfactuals. In this new theory, which is influenced by the conserved quantity theory of causation of Dowe (Dowe, 2000), Salmon defined a causal process as a process that transmits a non-zero amount of a conserved quantity at each moment in its history. Conserved quantities are quantities so characterized in physics—linear momentum, angular momentum, charge, and so on. A causal interaction is an intersection of world lines associated with causal processes involving exchange of a conserved quantity. Finally, a process transmits a conserved quantity from $A$ to $B$ if it possesses that quantity at every stage without any interactions that involve an exchange of that quantity in the half-open interval $(A, B]$.

One may doubt that this new theory really avoids reliance on counterfactuals, but an even more fundamental difficulty is that it still does not adequately deal with the problem of causal or explanatory relevance described above. That is, we still face the problem that the feature that makes a process causal (transmission of some conserved quantity or other) may tell us little about which features of the process are causally or explanatorily relevant to the outcome we want to explain. For example, a moving billiard ball will transmit many conserved quantities (linear momentum, angular momentum, charge etc.)
and many of these may be exchanged during a collision with another ball. What is it that entitles us to single out the linear momentum of the balls, rather than these other conserved quantities, as the property that is causally relevant to their subsequent motion? In cases in which there appear to be no conservation laws governing the explanatorily relevant property (i.e., cases in which the explanatorily relevant variables are not conserved quantities) this difficulty seems even more acute. Properties like “having ingested birth control pills,” “being pregnant”, or “being a sample of hexed salt” do not themselves figure in conservation laws. While one may say that both birth control pills and hexed salt are causal processes because both consist, at some underlying level, of processes that unambiguously involve the transmission of conserved quantities like mass and charge, this observation does not by itself tell us what, if anything, about these underlying processes is relevant to pregnancy or dissolution in water.

In a still more recent paper (Salmon, 1997), Salmon conceded this point. He agreed that the notion of a causal process cannot by itself capture the notion of causal and explanatory relevance. He suggested, however, that this notion can be adequately captured by appealing to the notion of a causal process and information about statistical relevance relationships (that is, information about conditional and unconditional (in)dependence relationships), with the latter capturing the element of causal or explanatory dependence that was missing from his previous account:

I would now say that (1) statistical relevance relations, in the absence of information about connecting causal processes, lack explanatory import and that (2) connecting causal processes, in the absence of statistical relevance relations, also lack explanatory import. (1997, p.476)

This suggestion is not developed in any detail in Salmon's paper, and it is not easy to see how it can be made to work. We noted above that statistical relevance relationships often greatly underdetermine the causal relationships among a set of variables. What reason is there to suppose that appealing to the notion of a causal process, in Salmon's sense, will always or even usually remove this indeterminacy? We also noted that the notion of a causal process cannot capture fine-grained notions of relevance between properties, that there can be causal relevance between properties instances of which (at least at the level of description at which they are characterized) are not linked by spatio-temporally continuous processes or by transference of conserved quantities, and that properties can be so linked without being causally relevant (recall the chalk mark that is transmitted from one billiard ball to another). As long as it is possible (and why should it not be?) for different causal claims to imply the same facts about statistical relevance relationships and for these claims to differ in ways that cannot be fully cashed out in terms of Salmon's notions of causal processes and interactions, this new proposal will fail as well.

Selected Readings: Salmon, 1984 provides a detailed statement of the Causal Mechanical model, as originally formulated. Salmon, 1994 and 1997 provide a restatement of the model and respond to criticisms. For discussion and criticism of the CM model, see Kitcher, 1989, especially pp. 461ff, Woodward, 1989 and Hitchcock, 1995.

5. A Unificationist Account of Explanation
5.1 The Basic Idea

The basic idea of the unificationist account is that scientific explanation is a matter of providing a unified account of a range of different phenomena. This idea is unquestionably intuitively appealing. Successful unification may exhibit connections or relationships between phenomena previously thought to be unrelated, and this seems to be something that we expect good explanations to do. Moreover, theory unification has clearly played an important role in science. Paradigmatic examples include Newton's unification of terrestrial and celestial theories of motion and Maxwell's unification of electricity and magnetism. The key question, however, is whether our intuitive notion (or notions) of unification can be made more precise in a way that allows us to recover the features that we think that good explanations should possess. Michael Friedman (1974) is an important early attempt to do this. Friedman's formulation of the unificationist idea was subsequently shown to suffer from various technical problems (Kitcher, 1976), and subsequent development of the unificationist treatment of explanation has been most closely associated with Philip Kitcher (especially Kitcher, 1989).

Let us begin by introducing some of Kitcher's technical vocabulary. A schematic sentence is a sentence in which some of the nonlogical vocabulary has been replaced by dummy letters. To use Kitcher's examples, the sentence “Organisms homozygous for the sickling allele develop sickle cell anemia” is associated with a number of schematic sentences including “Organisms homozygous for $A$ develop $P$” and “For all $X$ if $X$ is $O$ and $A$ then $X$ is $P$”. Filling instructions are directions that specify how to fill in the dummy letters in schematic sentences. For example, filling instructions might tell us to replace $A$ with the name of an allele and $P$ with the name of a phenotypic trait in the first of the above schematic sentences. Schematic arguments are sequences of schematic sentences. Classifications describe which sentences in schematic arguments are premises and conclusions and what rules of inference are used. An argument pattern is an ordered triple consisting of a schematic argument, a set of sets of filling instructions, one for each term of the schematic argument, and a classification of the schematic argument. The more restrictions an argument pattern imposes on the arguments that instantiate it, the more stringent it is said to be.

Roughly speaking, Kitcher's guiding idea is that explanation is a matter of deriving descriptions of many different phenomena by using as few and as stringent argument patterns as possible over and over again—the fewer the patterns used, the more stringent they are, and the greater the range of different conclusions derived, the more unified our explanations. Kitcher summarizes this view as follows:

Science advances our understanding of nature by showing us how to derive descriptions of many phenomena, using the same pattern of derivation again and again, and in demonstrating this, it teaches us how to reduce the number of facts we have to accept as ultimate. (p.423)

Kitcher does not propose a completely general theory of how the various considerations he describes—number of conclusions, number of patterns and stringency of patterns—are to be traded off against one another, but does suggest that it often will be clear enough what these considerations imply about the evaluation of particular candidate explanations.
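For orientation, the machinery Kitcher introduces can be summarized in a small data-structure sketch. This is an illustration only, not Kitcher's own formalism, and the field names are invented:

```python
from dataclasses import dataclass

@dataclass
class ArgumentPattern:
    """Kitcher's ordered triple: a schematic argument, filling instructions,
    and a classification of premises, conclusions, and inference rules."""
    schematic_argument: list[str]          # schematic sentences, in order
    filling_instructions: dict[str, str]   # dummy letter -> how to fill it in
    classification: dict[str, str]         # which sentences play which role

sickle_cell = ArgumentPattern(
    schematic_argument=[
        "Organisms homozygous for A develop P",   # schematic generalization
        "Organism x is homozygous for A",
        "Organism x develops P",
    ],
    filling_instructions={
        "A": "replace with the name of an allele",
        "P": "replace with the name of a phenotypic trait",
    },
    classification={
        "sentences 1-2": "premises",
        "sentence 3": "conclusion, by instantiation and modus ponens",
    },
)
# Stringency: the more the filling instructions and classification constrain
# admissible instantiations, the more stringent the pattern. Unification is
# then a matter of deriving many conclusions from few, stringent patterns.
```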
His basic strategy is to attempt to show that the derivations we regard as good or acceptable explanations are instances of patterns that, taken together, score better according to the criteria just described than the patterns instantiated by the derivations we regard as defective explanations. Following Kitcher, let us define the explanatory store $E(K)$ as the set of argument patterns that maximally unifies $K$, the set of beliefs accepted at a particular time in science. Showing that a particular derivation is a good or acceptable explanation is then a matter of showing that it belongs to the explanatory store.

5.2 Illustrations of the Unificationist Model

As an illustration, consider Kitcher's treatment of the problem of explanatory asymmetries (recall Section 2.5). Our present explanatory practices—call these $P$—are committed to the idea that derivations of a flagpole's height from the length of its shadow are not explanatory. Kitcher compares $P$ with an alternative systemization in which such derivations are regarded as explanatory. According to Kitcher, $P$ includes the use of a single “origin and development” (OD) pattern of explanation, according to which the dimensions of objects—artifacts, mountains, stars, organisms, etc.—are traced to “the conditions under which the object originated and the modifications it has subsequently undergone” (1989, p. 485). Now consider the consequences of adding to $P$ an additional pattern $S$ (the shadow pattern) which permits the derivation of the dimensions of objects from facts about their shadows. Since the OD pattern already permits the derivation of all facts about the dimensions of objects, the addition of the shadow pattern $S$ to $P$ will increase the number of argument patterns in $P$ and will not allow us to derive any new conclusions. On the other hand, if we were to drop OD from $P$ and replace it with the shadow pattern, we would have no net change in the number of patterns in $P$, but would be able to derive far fewer conclusions than we would with OD, since many objects do not have shadows (or enough shadows) from which to derive all of their dimensions. Thus OD belongs to the explanatory store, and the shadow pattern does not.

Kitcher's treatment of other familiar problem cases is similar. For example, he notes that we believe that an explanation of why some sample of salt dissolves in water that appeals to the fact that the salt is hexed and the generalization $(H)$ that all hexed salt dissolves in water is defective, at least in comparison with the standard explanation that appeals just to the generalization that $(D)$ all salt dissolves in water. He suggests that the “basis for this belief” is that the derivation that appeals to $(H)$ instantiates an argument pattern that belongs to a totality of patterns that is less unifying than the totality containing the derivation that appeals to $(D)$. In particular, an explanatory store containing $(H)$ but not $(D)$ will have a more restricted consequence set than a store containing $(D)$ but not $(H)$, since the latter but not the former allows for the derivation of facts about the dissolving of unhexed salt in water. And the addition of $(H)$ to an explanatory store containing $(D)$ will increase the number of patterns without any compensating gain in what can be derived.
Kitcher acknowledges that there is nothing in the unificationist account per se that requires that all explanation be deductive: “there is no bar in principle to the use of non-deductive arguments in the systemization of our beliefs”. Nonetheless, “the task of comparing the unifying power of different systemizations looks even more formidable if nondeductive arguments are considered”, and in part for this reason Kitcher endorses the view that “in a certain sense, all explanation is deductive” (p.448).

What is the role of causation on this account? Kitcher claims that “the ‘because’ of causation is always derivative from the ‘because’ of explanation” (1989, p.477). That is, our causal judgments simply reflect the explanatory relationships that fall out of our (or our intellectual ancestors') attempts to construct unified theories of nature. There is no independent causal order over and above this which our explanations must capture. Like many other philosophers, Kitcher takes very seriously, even if in the end he perhaps does not fully endorse, standard empiricist or Humean worries about the epistemic accessibility and intelligibility of causal claims; from this point of view, taking causal, counterfactual, or other notions belonging to the same family as primitive in the theory of explanation is problematic. Kitcher believes that it is a virtue of his theory that it does not do this. Instead, Kitcher proposes to begin with the notion of explanatory unification, characterized in terms of constraints on deductive systemizations, where these constraints can be specified in a quite general way that is independent of causal or counterfactual notions, and then show how the causal claims we accept derive from our efforts at unification.

5.3 The Illustrations Criticized

As remarked at the beginning of this section, the idea that explanation is connected in some way to unification is intuitively appealing. Nonetheless Kitcher's particular way of cashing out this connection seems problematic. Consider Kitcher's treatment of the flagpole example. This depends heavily on the contingent truth that some objects do not cast enough shadows to recover all of their dimensions. But it seems to be part not just of common sense, but of currently accepted physical theory, that it would be inappropriate to appeal to facts about the shadows cast by objects to explain their dimensions even in a world in which all objects cast enough shadows that all their dimensions could be recovered. It is unclear how Kitcher's account can recover this judgment.

The matter becomes clearer if we turn our attention to a variant example in which, unlike the shadow example, there are clearly just as many backwards derivations from effects to causes as there are derivations from causes to effects. Consider, following Barnes (1992), a time-symmetric theory like Newtonian mechanics, applied to a closed system like the solar system. Call derivations of the state of motion of planets at some future time $t$ from information about their present positions (at time $t_0$), masses, and velocities, the forces incident on them at $t_0$, and the laws of mechanics predictive. Now contrast such derivations with retrodictive derivations in which the present motions of the planets are derived from information about their future velocities and positions at $t$, the forces operative at $t$, and so on.
It looks as though there will be just as many retrodictive derivations as predictive derivations, and each will require premises of exactly the same general sort—information about positions, velocities, masses, etc., and the same laws. Thus the pattern or patterns instantiated by the retrodictive derivations look(s) exactly as unified as the pattern or patterns associated with the predictive derivations. However, we ordinarily think of the predictive derivations and not the retrodictive derivations as explanatory, and the present state of the planets as the cause of their future state and not vice-versa. It is again far from obvious how considerations having to do with unification could generate such an explanatory asymmetry.

One possible response to this second example is to bite the bullet and to argue that from the point of view of fundamental physics, there really is no difference in the explanatory import of the retrodictive and predictive derivations, and that it is a virtue, not a defect, of the unificationist approach that it reproduces this judgment. Whatever might be said in favor of this response, it is not Kitcher's. His claim is that our ordinary judgments about causal asymmetries can be derived from the unificationist account. The example just described casts doubt on this claim. More generally, it casts doubt on Kitcher's contention that one can begin with the notion of explanatory unification, understood in a way that does not presuppose causal notions, and use it to derive the content of causal judgments.

5.4 The Heterogeneity of Unification

This conclusion is reinforced by a more general consideration: unification, as it figures in science, is a quite heterogeneous notion, covering many different sorts of achievements.[17] Some kinds of unification consist in the creation of a common classificatory scheme or descriptive vocabulary where no satisfactory scheme previously existed, as when early investigators like Linnaeus constructed comprehensive and principled systems of biological classification. Another kind of unification involves the creation of a common mathematical framework or formalism which can be applied to many different sorts of phenomena, as when the systems of equations devised by Lagrange and Hamilton were first developed in connection with mechanics and then applied to domains like electromagnetism and thermodynamics. Still other cases involve what might be described as genuine physical unification, where phenomena previously regarded as having quite different causes or explanations are shown to be the result of a common set of mechanisms or causal relationships. Newton's demonstration that the orbits of the planets and the behavior of terrestrial objects falling freely near the surface of the earth are due to the same force of gravity and conform to the same laws of motion was a physical unification in this sense. Of these three kinds of activities only the third—physical unification—seems to have much intuitively to do with explanation, at least if we think of explanation as involving the citing of causal relationships. In particular, depending on the details of the case, the kind of unification associated with adoption of a classificatory scheme may tell us little about causal relationships.
Moreover, as historical studies have made clear, a similar point holds for formal or mathematical unification: the fact that we can construct a common mathematical framework for dealing with a range of different phenomena does not by any means automatically ensure that we have identified some set of common causal factors responsible for those phenomena—i.e., that we have produced a unified physical explanation of them. For example, the mere fact that we can describe both the behavior of a system of gravitating masses and the operation of an electric circuit by means of Lagrange's equations does not mean that we have achieved a common explanation of the behavior of both or that we have “unified” gravitation and electricity in any physically interesting sense.

These considerations raise the following question: Is Kitcher's account of unification sufficiently discriminating or nuanced to distinguish those unifications having to do with explanation from other sorts of unification? The worry is that it is not. The conception of unification underlying Kitcher's account seems to be at bottom one of descriptive economy or information compression—deriving as much from as few patterns of inference as possible. Many cases of classificatory and purely formal unification involving a common mathematical framework seem to fit this characterization. Consider schemes for biological classification and schemes for the classification of geological and astronomical objects like rocks and stars. If I know that individuals belong to a certain classificatory category (e.g., $X$s are mammals or polar bears), I can use this information to derive a great many of their other properties ($X$s have backbones, hearts, their young are born alive, etc.) and this is a pattern of inference that can be used repeatedly for many different sorts of $X$s. But despite the willingness of some philosophers to regard such derivations as explanatory, it is common scientific practice to regard such schemes as “merely descriptive” and as telling us little or nothing about the causes or mechanisms that explain why $X$s have backbones or hearts.[18]

Another illustration of the same general point is provided by the numerous statistical procedures (factor analysis, cluster analysis, multidimensional scaling techniques) that allow one to summarize or represent large bodies of statistical information in an economical, unified way and to derive more specific statistical facts from a much smaller set of assumptions by repeated use of the same pattern of argument. For example, knowing the “loading” of each of $n$ intelligence tests on a single common factor $g$, one can derive a much larger number $(n(n-1)/2)$ of conclusions about pairwise correlations among these tests. (In a single-common-factor model, the predicted correlation between tests $i$ and $j$ is simply the product of their standardized loadings, $r_{ij} = l_i l_j$, so $n$ loadings fix all $n(n-1)/2$ pairwise correlations.) Again, however, it is doubtful that by itself this “unification” tells us anything about the causes of performance on these tests.

5.5 The Winner-Take-All Conception of Explanatory Unification

Another fundamental difficulty with the unificationist account derives from its reliance on what might be called a “winner take all” conception of unification. On the one hand, it seems that any plausible version of that account must yield the conclusion that generalizations and theories can sometimes be explanatory with respect to some set of phenomena even though more unifying explanations of those phenomena are known.[19]
For example, Galileo's law can be used to explain facts about the behavior of falling bodies even though it furnishes a less unifying explanation than the laws of Newtonian mechanics and gravitational theory; the latter are in turn explanatory even though the explanations they provide are less unified than those provided by General Relativity; the theories of Coulomb and Ampere are explanatory even though the explanations they provide are less unified than the explanations provided by Maxwell's theory; and so on. If we reject this idea, we must adopt the conclusion that in any domain only the most unified theory that is known is explanatory at all; everything else is non-explanatory. Call this the winner-take-all conception of explanatory unification. The winner-take-all conception gives up on the apparently very natural idea, which one would think that the unificationist would wish to endorse, that an explanation can provide less unification than some alternative, and hence be less deep or less good, but still qualify as somewhat explanatory.

However, Kitcher's treatment of the problems of explanatory irrelevance and explanatory asymmetry seems to require just this conception. Why is it that we cannot appeal to the fact that this particular sample of salt has been hexed to explain why it dissolves? According to Kitcher, any explanatory store containing a generalization about the dissolving of hexed salt will be “less unified” than a competing explanatory store according to which the dissolving of the salt is explained by appeal to the generalization that all salt dissolves in water. Similarly, the reason why we cannot explain the height of a flagpole in terms of the length of its shadow is that explanations of lengths of objects in terms of facts about shadows do not belong to the “set of explanations” which “collectively provides the best systemization of our beliefs” (1989, p. 430). This analysis clearly requires the winner-take-all idea that an explanation $T_1$ that is less satisfactory from the point of view of unification than some competing alternative $T_2$ is unexplanatory, rather than merely less explanatory than $T_2$. If Kitcher were to reject the winner-take-all idea and hold instead that even if $T_2$ is more unified than $T_1$, it does not automatically follow that $T_1$ is unexplanatory, then his solution to the problems of explanatory irrelevance and asymmetry would no longer be available: his conclusion should be that an “explanation” of Mr. Jones' failure to get pregnant in terms of his ingestion of birth control pills is genuinely explanatory, although less so than the alternative explanation that invokes his gender, and similarly for a derivation of the height of a flagpole from the length of its shadow.

Intuitively, the problem is that we need a theory of explanation that captures several different possibilities. On the one hand, there are generalizations and associated putative explanations (like the generalization relating barometric pressure to the occurrence of storms and the generalization relating the hexing of salt to its dissolution in water) that are not explanatory at all; they fall below the threshold of explanatoriness. On the other hand, above this threshold there is something more like a continuum: a generalization can be explanatory but provide less deep or good explanations than some alternative. What we have just seen is that the unificationist account has difficulty simultaneously capturing both of these possibilities.
Either there is no threshold (every derivation is explanatory to some extent and it is just that some derivations belong to systemizations that are less unifying and hence less explanatory than others) or else there is no continuum (only the most unifying systemizations are explanatory).

5.6 The Epistemology of Unification

Recall that, according to Kitcher, causal knowledge derives from our efforts at unification. However, as Kitcher also recognizes, it is highly implausible that most individuals deliberately and self-consciously go through the process of comparing competing deductive systemizations with respect to number and stringency of patterns and number of conclusions in order to determine which is most unifying. His response to this observation is to hold that most people acquire causal knowledge by absorbing the “lore” of their communities, where this lore does reflect previous systematic efforts at unification. He writes that “our everyday causal knowledge is based on our early absorption of the theoretical picture of the world bequeathed to us by our scientific tradition” (1989, p. 469).

How exactly is this suggestion supposed to work? While it is surely true that individual human beings acquire a substantial amount of causal knowledge by cultural transmission, it is also obvious that not all causal knowledge is acquired in this way. Some causal knowledge that individuals acquire involves learning from experience. Moreover, unless we are willing to make extremely implausible assumptions about the innateness of a large number of specific causal beliefs, the stock of socially transmitted causal knowledge must itself have been initially acquired in a way in which learning from experience played an important role. The question that then arises is how this process of learning from experience is supposed to work on a view like Kitcher's about the source of our causal knowledge. If, as Kitcher claims, “the idea that any one individual justifies the causal judgments that he/she makes by recognizing the patterns of argument that best unify his/her beliefs is clearly absurd” (1989, p. 436), just what is it that is going on at the individual level when people learn from experience?

One possibility is that although individuals do not knowingly go through the process of comparing the degree of unification achieved by alternative systemizations when they acquire new causal knowledge by learning from experience, they go through this process tacitly or unconsciously, perhaps because of some general disposition of the mind to seek unification. However, Kitcher does not seem to endorse this idea and it does not fit very well with his emphasis on the social transmission of causal information. Moreover, it looks as though even unconscious unification requires very sophisticated cognitive abilities (construction and comparison of different deductive systemizations, etc.) that it is implausible to attribute to many causal learners, such as small children.

One natural interpretation of the passages quoted above and others in Kitcher (1989) is this: a social process of comparing alternative systemizations of beliefs and drawing out their deductive consequences occurs at the community level, with groups of people making arguments to one another about which overall deductive systemizations best unify the beliefs of the community as a whole.
Particular causal beliefs are justified at the community level by being shown to be part of the best overall systemization of the beliefs of the community, and are then passed on from the common community stock to individuals via a process of social transmission. An obvious problem with this picture is that the community-wide process of justification must still be carried out in some fashion by individual actors. If, as appears to be the case, there are many societies which possess a substantial amount of causal and explanatory knowledge but in which no one possesses an explicit or clearly articulated concept of a deductively valid argument or is very skilled at drawing out the deductive consequences of beliefs or possesses explicit versions of Kitcher's concepts of number and stringency of argument patterns, how exactly are community beliefs that reflect the operation of these notions supposed to form? If, as Kitcher concedes, it is psychologically unrealistic to assume that individual human beings deliberately and self-consciously go through the process of comparing alternative systemizations when they acquire causal beliefs through experience, why is it any more realistic to suppose that this process somehow occurs through the interactions of individual actors at the community level?[20]

There is a second, related difficulty. Assume, for the sake of argument, that it is desirable to have a unified belief system in Kitcher's sense—whether because unification is connected to explanation and the latter is intrinsically valuable or because unification is connected to other goals (e.g., confirmation) that are desirable. It is still not obvious why it would be valuable to have a set of beliefs that are a smallish proper subset of the beliefs that comprise such a unified system, which is what most people seem to have, given Kitcher's views about the transmission of causal knowledge. Recall Kitcher's basic picture: when I acquire the belief that, say, whether salt is hexed is causally irrelevant to whether it dissolves and that whether it is placed in water is causally relevant, I acquire a fragment of the community's overall systemization $S$. But adding a fragment of $S$ or even a number of fragments of $S$ to my belief store may not result in my having a belief system that is unified, or that facilitates whatever epistemic goals are associated with unification. Of course if I end up adding all or most of $S$ to my belief store, I will have at that point a set of beliefs that is unified and that brings with it all of the benefits of unification. But, as Kitcher agrees, it is unrealistic to suppose that most people possess anything like the full systemization $S$ that best unifies all of the beliefs in their community. This seems to be true, for example, of our own epistemic community, in which knowledge—especially scientific knowledge—is highly dispersed among a small group of experts and in which no single person's mind (and still less the typical member's mind) contains or operates in accordance with the systemization that best unifies the beliefs of the entire community. More generally, it seems unlikely that the different portions $B_i$ of the community systemization $S$ that various individuals $i$ acquire by means of cultural transmission will in each case be highly unified systemizations.
In short, it is a major problem with the cultural transmission story that it is hard to see how unification could be cognitively or practically valuable unless it characterizes the belief systems of individuals and not just the community. However, taking the sort of unification that Kitcher associates with causal and explanatory knowledge to characterize individual belief systems seems prima facie psychologically unrealistic. This is not to say that there is no way of making sense of the acquisition of causal knowledge on the unificationist picture, but a great deal more needs to be said about how this works.

Selected Readings: The most detailed statement of Kitcher's position can be found in Kitcher, 1989. Salmon, 1989, pp. 94ff. contains a critical discussion of Friedman's version of the unificationist account of explanation but ends by advocating a “rapprochement” between unificationist approaches and Salmon's own causal mechanical model. Woodward, 2003, contains additional criticisms of Kitcher's version of unificationism.

6. Pragmatic Theories of Explanation

6.1 Introduction

Despite their many differences, the accounts of Hempel (focusing now on just the DN rather than the IS model), Salmon, Kitcher and others discussed above largely share a common overall conception of what the project of constructing a theory of explanation should involve and (to a considerable extent) what criteria such a theory should satisfy if it is to be successful. Let us say that a theory of explanation contains “pragmatic” elements if (i) those elements require irreducible reference to facts about the interests, beliefs or other features of the psychology of those providing or receiving the explanation and/or (ii) irreducible reference to the “context” in which the explanation occurs. (For what this means, see below.) Although the writers discussed above agree that pragmatic elements play some role in the activity of giving and receiving explanations, they assume that there is a non-pragmatic core to the notion of explanation which it is the central task of a theory of explanation to capture. That is, it is assumed that this core notion can be specified in a way that does not require reference to features of the psychology of explainers or their audiences, and that it can be characterized in terms of features that are non-contextual in the sense that they are sufficiently general, abstract, and “structural” that we can view them as holding across a range of explanations with different contents and across a range of different contexts. Often, but not always, it is claimed that many aspects of these features can be captured formally, via relationships like deductive entailment or statistical relevance. In addition, these writers see the goal of a theory of explanation as capturing the notion of a correct explanation, as in “the (or an) explanation of the photoelectric effect is such and such”, as opposed to the notion of an explanation's being considered explanatory by a particular audience or not, a matter which presumably depends on such considerations as whether the audience understands the terms in which the explanation is framed. Finally, as noted in the Introduction to this entry, writers in this tradition have not had as their goal capturing all of the various ways in which the word “explanation” is used in ordinary English.
They have instead focused on a much more restricted class of examples in which what is of interest is (something like) explaining “why” some outcome or general phenomenon occurred, as opposed to explaining, e.g., the meaning of a word or how to solve a differential equation. The motivation for this restriction is simply the judgment that an interesting and non-trivial theory is more likely to emerge if it is restricted in scope in this way. For ease of reference, let us call this the “traditional” conception of the task of a theory of explanation.

Some or all of these assumptions and goals are rejected in pragmatic or, as they are sometimes also called, “contextual” accounts of explanation. Early contributors to this approach include Michael Scriven (e.g., 1962) and Sylvan Bromberger (e.g., 1966), with more systematic statements, due to van Fraassen (1980) and Achinstein (1983), appearing in the 1980s. Since it is not always clear just what the points of disagreement are between pragmatic and traditional accounts, some orienting remarks about this will be useful before turning to details. Defenders of pragmatic approaches to explanation typically stress the point that whether provision of a certain body of information to some audience produces understanding or a sense of intelligibility or is appropriate or illuminating for that audience depends on the background knowledge and interests of the audience members and on other factors having to do with the local context. For example, an explanation of the deflection of starlight by the sun that appeals to the field equations of General Relativity may be highly illuminating to a trained physicist but unintelligible to a layperson because of his background. Factors of this sort are grouped together as “pragmatic” and their influence is taken to illustrate at least one way in which pragmatic considerations enter into the notion of explanation. Taken in itself, the observation just described seems completely uncontroversial and not in conflict with approaches to explanation that are usually viewed as paradigmatically traditional. Indeed, as remarked above, writers like Hempel and Salmon explicitly agree that explanation has a pragmatic dimension in the sense just described—in fact, Hempel invokes the role of pragmatic factors at a number of points to address prima-facie counterexamples to the DN model[21]. This suggests that, often at least[22], what is distinctive about pragmatic approaches to explanation is not just the bare idea that explanation has a “pragmatic dimension” but rather the further and much stronger claim that the traditional project of constructing a model of explanation pursued by Hempel and others has so far been unsuccessful (and perhaps is bound to be unsuccessful) and that this is so because pragmatic or contextual factors play a central and ineliminable role in explanation in a way that resists incorporation into models of the traditional sort. On this view, much of what is distinctive about pragmatic accounts (including the accounts of van Fraassen and Achinstein discussed below) is their opposition to traditional accounts and their diagnosis of why those accounts fail—they fail because they omit pragmatic or contextual elements.
It will be important to keep this point in mind in what follows because there is a certain tendency among advocates of pragmatic theories to argue as though the superiority of their approach is established simply by the observation that explanation has a pragmatic dimension; instead it seems more appropriate to think that the real issue is whether traditional approaches are inadequate in principle because of their neglect of the pragmatic dimension of explanation.

A second issue concerns an important ambiguity in the notion of “pragmatic”. On one natural understanding of this notion, a pragmatic consideration is one that has to do with utility or usefulness in the service of some goal connected to human interests, where these interests are in some relevant sense “practical”. Call this notion “pragmatic1”. On this construal, Hempel's DN model might be correctly characterized as a pragmatic1 theory (or as containing pragmatic1 elements) since it links explanatory information closely to the provision of information that is useful for purposes of prediction, and prediction certainly qualifies as a pragmatic goal. For similar reasons, Woodward's (2003) theory of explanation might also be counted as a pragmatic1 theory since it connects explanation with the provision of information that is useful for manipulation and control—unquestionably useful goals. As these examples suggest, models of explanation that aspire to traditional goals can be pragmatic1 theories. In the context of theories of explanation, however, the label “pragmatic” is usually intended to suggest a somewhat different set of associations. In particular, “pragmatic” is typically used to characterize considerations having to do with facts about the psychology (interests, beliefs etc.) of those involved in providing or receiving explanations and/or to characterize considerations involving the local context, often with the suggestion that both sets of considerations may vary in complex and idiosyncratic ways that resist incorporation into the sort of general theory sought by traditional models.[23] Call this set of associations “pragmatic2”. Neither Hempel's nor Woodward's theory is pragmatic2. In particular, as the example of the DN model illustrates, the fact that a theory is pragmatic1 in the sense that it appeals to facts about goals generally shared by human beings (such as prediction) to help to motivate a model of explanation does not preclude attempting to construct models of explanation satisfying traditional goals and does not require commitment to the idea that explanation must be understood as a pragmatic2 notion. We need to be careful to distinguish these two different ways of thinking about the “pragmatic” dimension of explanation.

Finally, as emphasized above, a concern with the pragmatics of explanation naturally connects with an interest in the “psychology” of explanation, and this in turn suggests the relevance of empirical studies of the sorts of information that various subjects (ordinary folks, scientists) find explanatory, treat as providing “understanding”, the distinctions subjects make among explanations, and so on. Although there is a growing literature in this area, the most prominent philosophical advocates of pragmatic approaches to explanation have so far tended not to make use of it.
In this connection, it is worth pointing out that this psychological literature goes well beyond the truisms found in philosophical discussion about different people finding different sorts of information explanatory depending on their interests. In particular, psychologists have been very interested in exploring general features or structural patterns present in information that various subjects find explanatory. For example, Lombrozo (2010) finds evidence that subjects prefer explanations that appeal to relationships that are relatively stable (in the sense of continuing to hold across changing circumstances[24]) and Lien and Cheng (2000) present evidence that in cases in which the explanandum $E$ has a single candidate cause $C$, subjects prefer levels of explanation/causal description that maximize $\Delta p = \text{Pr}(E \mid C) - \text{Pr}(E \mid \text{not-}C)$. Notice that in both cases these are relationships or patterns of the sort that traditional accounts of explanation attempt to capture. As these examples bring out, there is no necessary incompatibility between the project of trying to formulate an account of explanation that satisfies traditional goals and an interest in the psychology of explanation. It may be that subjects find certain sorts of information explanatory or understanding-producing because certain structural features of the sort that traditional accounts attempt to characterize are present in that information—indeed this is what the Lombrozo and Lien and Cheng papers suggest. In the same vein, we also should distinguish the general project of investigating the empirical psychology of explanation (which can be pursued with a variety of different commitments about how best to theorize about explanation) from the more specific claim that the characterization of what it is for an explanatory relationship to hold between explanans and explanandum must be given in “psychologistic” terms in the sense that this requires irreducible reference to psychological facts about particular audiences such as the vagaries of what they happen to be interested in. In general, whether there are robust regularities connecting structural or objective features in bodies of information with whether that information is judged as explanatory by various subjects ought to be regarded as an empirical question and not as something that can be settled from the armchair. It might be true that there are no such regularities and that what people find explanatory or productive of understanding varies enormously, depending on their interests and on other psychological factors, but this is something that needs to be shown, not assumed at the outset of investigation.

6.2 Constructive Empiricism and the Pragmatic Theory of Explanation

One of the most influential recent pragmatic accounts of explanation is associated with constructive empiricism. This is the thesis, defended by Bas van Fraassen in his 1980 book, The Scientific Image, that the aim of science (or at least “pure” science) is the construction of theories that are “empirically adequate” (that is, that yield a true or correct description of observables) and not, as scientific realists suppose, theories that aim to tell literally true stories about unobservables. Relatedly, “acceptance” of a theory involves only the belief that it is empirically adequate (van Fraassen, 1980, p. 12).
van Fraassen's account of explanation, which is laid out in several articles and, most fully, in Chapter Six of his (1980), is meant to fit with this overall conception of science: it is a conception according to which explanation per se is not an epistemic aim of “pure” science (empirical adequacy is the only such aim), but rather a “pragmatic” virtue, having to do with the “application” of science. (Note that to the extent that the application of science is taken to be a “pragmatic” (i.e., pragmatic1) matter and the idea that explanation is pragmatic in this respect is used to motivate the adoption of a pragmatic (i.e., pragmatic2) theory of explanation, we have a transition between the two notions of “pragmatic” distinguished above.) Because explanation is a merely pragmatic virtue, a concern with explanation is not something that can require scientists to move beyond belief in the empirical adequacy of their theories to belief in the literal truth of claims about unobservable entities.

According to van Fraassen, explanations are answers to questions and getting clear about the logic of questions is central to constructing a theory of explanation. Questions can take many different forms, but when the question of interest is a “why” question, explanatory queries will typically take the following form: a query about why some explanandum $P_k$ rather than any one of the members of a contrast $X$ (a set of possible alternatives to $P_k$) obtained. In addition, some “relevance relation” $R$ is assumed by the question. An answer $A$ to this question will take the form “$P_k$ in contrast to (the rest of) $X$ because $A$”, where $A$ bears the relevance relation $R$ to $[P_k, X]$. To use van Fraassen's example, consider “Why is this conductor warped?” Depending on the context, the intended contrast might have to do with, e.g., why this particular conductor is warped in contrast to some other conductor that is unwarped or, alternatively, it might have to do with why this particular conductor is warped now when it was previously unwarped. The relevance relation $R$ similarly depends on the context and the information which the questioner is interested in obtaining. For example, $R$ might involve causal information (the question might be a request for what caused the warping) but it also might have to do with information about function, if the context was one in which it is assumed that the shape of the conductor plays some functional role in a power station which the questioner wants to know about. Thus “context” enters into the explanation both by playing a role in specifying the contrast class $X$ and the relevance relation $R$.

van Fraassen describes various rules for the “evaluation” of answers. For example, $P_k$ and $A$ must be true, the other members of the contrast class must not be true, $A$ must “favor” (raise the conditional probability of) $P_k$ against alternatives, and $A$ must compare favorably with other answers to the same question, a condition which itself has several aspects including, for example, whether $A$ favors the topic more than these other answers and whether $A$ is screened off by other answers. However, he also makes it clear (as the example above suggests) that a variety of different relevance relations may be appropriate depending on context and that the evaluation of answers also depends on context.
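Schematically, and as a gloss on the machinery just described rather than a verbatim rendering of van Fraassen's own formalism, a why-question can be represented as a triple

\[ Q = \langle P_k,\ X,\ R \rangle, \qquad X = \{P_1, \ldots, P_k, \ldots, P_n\}, \]

where $P_k$ is the topic, $X$ the contrast class, and $R$ the relevance relation, and a direct answer has the form “$P_k$ in contrast to the rest of $X$ because $A$”, with $A$ standing in $R$ to $\langle P_k, X \rangle$. The “favoring” requirement mentioned above can then be roughly glossed as the condition that

\[ \Pr(P_k \mid A) > \Pr(P_j \mid A) \quad \text{for the alternatives } P_j \in X,\ j \neq k, \]

though it should be stressed that this probabilistic comparison is only a first approximation to van Fraassen's official, and considerably more nuanced, evaluation rules.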
Moreover, he explicitly denies that there is anything distinctive about the category of scientific explanation that has to do with its structure or form—instead, a scientific explanation is simply an explanation that makes use of information that is (or at least that is treated as) grounded in a “scientific” theory. Van Fraassen sums up his view of explanation (and gestures at his grounds for rejecting objectivist approaches) as follows:

The discussion of explanation went wrong at the very beginning when explanation was conceived of as a relation like description: a relation between a theory and a fact. Really, it is a three-term relation between theory, fact, and context. No wonder that no single relation between theory and fact ever managed to fit more than a few examples! Being an explanation is essentially relative, for an explanation is an answer… it is evaluated vis-à-vis a question, which is a request for information. But exactly… what is requested differs from context to context. (1980, p. 156)

Van Fraassen begins his chapter on explanation with a brief story that provides a good point of entry into how he intends his account to work. Recall from section 2.5 that a well-known counterexample to the DN model involves the claim that one can explain the length $S$ of the shadow cast by a flagpole in terms of the height $H$ of the flagpole but that (supposedly) one cannot explain $H$ in terms of $S$, despite the fact that one can construct a DN derivation from $S$ to $H$. This is commonly taken to show that the DN model has left out some factor having to do with the directional or asymmetric features of explanation—e.g., perhaps an asymmetry in the relation between cause and effect that ought to be incorporated into one's model of explanation. In van Fraassen's story, a straightforward causal explanation of the usual sort of $S$ in terms of $H$ (although the object in question is a tower rather than a flagpole) is first offered. Then a second explanation, according to which the height of the tower is “explained” by the fact that it was designed to cast a shadow of a certain length, is advanced. Presumably the moral we are to draw is that as the context and perhaps the relevance relation $R$ are varied, both

\[ H \text{ explains } S \]
\[ S \text{ explains } H \]

are acceptable (legitimate, appropriate etc.) explanations. Moreover, since these variations in context and relevance relation turn on variations in what is of interest to the explainer and his audience, we are further encouraged to conclude that explanatory asymmetries have their source in psychological facts about people's interests and background beliefs, rather than in, say, some asymmetry that exists in nature independently of these. Pragmatists about explanation think that a similar conclusion holds for other features of the explanatory relevance relation that philosophers have tried to characterize in terms of traditional models of explanation.

One obvious response to this claim, made by several critics (e.g. Kitcher and Salmon, 1987, p. 317), is that the example does not really involve a case in which, depending on context, $H$ causally explains $S$ and $S$ causally explains $H$. Instead, although $H$ does causally explain $S$, it is (something like) the desire for a shadow of length $S$ (rather than $S$ itself) that explains (or at least causally explains) the height (or the choice of height) for the tower.
Or, if one prefers, in the latter case we are given something like a functional explanation (but not a causal explanation) for the height of the tower, in the sense that we are told what the intended function of that choice of height is. On either of these diagnoses, this will not be a case in which whether $H$ provides a causal explanation of $S$ or whether instead $S$ provides a causal explanation of $H$ shifts depending on factors having to do with the interests of the speaker or audience or other contextual factors. If so, the story about the tower does not show that the asymmetry present in the flagpole example must be accounted for in terms of pragmatic factors. It may be accounted for in some other way. In fact, although detailed discussion is beyond the scope of this essay, a number of possible candidates for such a non-pragmatic account of causal asymmetries have been proposed, both in philosophy and outside of it (for example, in the machine learning literature). These candidates include asymmetries in causal connectability of the sort described in Hausman (1998), statistical asymmetries of various sorts (e.g., Spirtes, Glymour, and Scheines, 2000) and asymmetries in informational dependence (e.g., Janzing et al., 2012). All of these proposals may be wrong but it is hard to see how they are shown to be wrong just by the sorts of observations advanced by van Fraassen in the tower and shadow story. Instead, showing they are wrong would require detailed critiques of the proposals themselves[25].

A much more general criticism has been advanced against van Fraassen's version of a pragmatic theory by Salmon and Kitcher (1987). Basically, their complaint is that the relevance relation $R$ in van Fraassen's account is completely unconstrained, with what they regard as the obviously unacceptable consequence that for any pair of true propositions $P$ and $A$, answer $A$ is relevant to $P$ via some relevance relation and thus “explains” $P$. For example, according to Salmon and Kitcher, we might define a relationship of “astral influence” $R^*$, meeting van Fraassen's criteria for being a relevance relation, such that the time $t$ of a person's death is explained in terms of $R^*$ and the position of various heavenly bodies at $t$. Here it may seem that van Fraassen has a ready response. As noted above, on van Fraassen's view, background knowledge and, in the case of scientific explanation, current scientific knowledge, helps to determine which are the acceptable relevance relations and acceptable answers to the questions posed in requests for explanation—such knowledge and the expectations that go along with it are part of the relevant context when one asks for an explanation of time of death. Obviously, astral influence is not an acceptable or legitimate relevance relation according to modern science—hence appeal to such a relation is not countenanced as explanatory by van Fraassen's theory. More generally, it might be argued that available scientific knowledge will provide constraints on the relevance relations and answers that exclude the “anything goes” worry raised by Salmon and Kitcher—at least insofar as the context is one in which a “scientific explanation” is sought. While this response may seem plausible enough as far as it goes, it does bring out the extent to which much of the work of distinguishing the explanatory from the non-explanatory in van Fraassen's account comes from a very general appeal to what is accepted as legitimate background information in current science.
Put differently, this raises the worry that once one moves beyond van Fraassen's formal machinery concerning questions and answers (which van Fraassen himself acknowledges is relatively unconstraining), one is left with an account according to which a scientific explanation is simply any explanation employing claims from current science and a currently scientifically approved relevance relation. Even if otherwise unexceptionable, this proposal is, if not exactly trivial, at least rather deflationary—it provides much less than many have hoped for from a theory of explanation. In particular, in cases (of which there are many examples) in which there is an ongoing argument or dispute in some area of science not about whether some proposed theory or model is true but rather about whether it explains some phenomenon, it is not easy to see how the proposal even purports to provide guidance. On the other hand, the obvious rejoinder that might be made on van Fraassen's behalf is that no more ambitious treatment that would satisfy the expectations associated with more traditional accounts of explanation (including a demarcation of candidate explanations into those that are “correct” and “incorrect”) is possible—a theory like van Fraassen's is as good as it gets. If there is no defensible theory of explanation embodying a non-trivially constraining relevance relation, it cannot be a good criticism of van Fraassen's theory that he fails to provide this[26]. So at least from van Fraassen's perspective, traditional models are in no better position than his own in providing such guidance. A final point that is suggested by van Fraassen's theory is this. In considering pragmatic theories, it matters a great deal exactly where the “pragmatic” elements are claimed to enter into the account of explanation. One point at which such considerations seem clearly to enter is in the selection or characterization of what an audience wants explained. This is reflected in van Fraassen's theory in the choice of a $P_k$ and an associated contrast class $X$. Obviously, whether we are looking for an explanation of why, say, this particular conductor is now bent when it was previously straight or whether instead we want to know why this conductor is bent while some other conductor is straight is a matter that depends on our interests. However, this particular sort of “interest relativity” (and associated phenomena having to do with the role of contrastive focus in the characterization of explananda, which really just serve to specify more exactly which particular explananda we want explained) seems something that can be readily acknowledged by traditional theories[27]. After all, it is not a threat to the DN or other models with similar traditional aspirations that one audience may be interested in an explanation of the photoelectric effect but not the deflection of starlight by the sun and another audience may have the opposite interests. What would be a threat to the DN and similar models would be an argument that once some explanandum $E$ is fully specified, whether explanans $M$ explains $E$ (that is, whether there is an explanatory relation between $M$ and $E$) is itself “interest-relative”. It is natural to interpret van Fraassen as making this latter claim, both in connection with explanatory asymmetries and more generally. 
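To make the first, relatively innocuous kind of interest-relativity concrete, the single interrogative “Why is this conductor warped?” can express distinct why-questions in the triple notation introduced above, for example

\[ Q_1 = \langle \text{Warped}(c),\ \{\text{Warped}(c),\ \text{Straight}(c)\},\ R \rangle \]
\[ Q_2 = \langle \text{Warped}(c),\ \{\text{Warped}(c),\ \text{Warped}(c')\},\ R \rangle, \]

where, roughly, $Q_1$ asks why conductor $c$ is warped now rather than straight (as it previously was) and $Q_2$ asks why $c$ is warped when some other conductor $c'$ is not. (The predicate notation here is an illustrative gloss, not van Fraassen's own.) Which of $Q_1$ or $Q_2$ gets asked is plainly an interest-relative matter; the controversial question, as just noted, is whether the answer relation itself remains interest-relative once the question is fully fixed.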
6.3 Explaining as an Illocutionary Act

Another very influential pragmatic account of explanation focuses on the act of explaining and treats this as an illocutionary act, in the sense in which that notion is used in speech act theory. The most systematic statement of this approach is due to Peter Achinstein (see especially, Achinstein, 1983). Like many other pragmatic theorists, Achinstein is interested in capturing a very broad notion of explanation, which includes not just causal explanations (and not just answers to why-questions), but such notions as explaining the meaning of a word, the rules of chess, the function of some biological structure and so on. His account is much too complex to describe in full detail; all that can be attempted here is a very rough sketch.

Achinstein's point of departure is what is involved in someone's explaining something to another. According to Achinstein, in such episodes the intention of the person doing the explaining is crucial: in particular, in explaining, the explainer must have the intention to render something (roughly, a certain type of indirect question $q$ corresponding to the something explained) “understandable”. An explanation (understood as the product of an explaining act) is then defined as an ordered pair, “one of whose members is an act type explaining $q$ [as above]… and whose other member is a proposition that provides an answer to the question $q$.” (2010, p. xi). For example, Newton's explanation of why the tides occur is represented by the ordered pair: (The tides occur because of the gravitational pull of the moon; explaining why the tides occur).

Achinstein distinguishes between “correct” and “good” explanations. “A correct explanation is one in which the propositional member of the ordered pair is true” (2010, xi). A correct explanation may nonetheless not be a good one because, e.g., it is inappropriate in various ways to the abilities and interests of the audience to which it is directed. The notion of a good explanation is further characterized in terms of a set of instructions for explanation construction, where these instructions are sensitive to the interests, beliefs and so on of the audience. Such instructions might specify, e.g., that a causal explanation rather than some other kind of explanation is sought or that the explanation sought must make reference to micro-entities. A very important feature of Achinstein's “pragmatic” position is that there is no single universal set of instructions that is appropriate for all audiences and contexts, either in science or elsewhere. Thus traditional accounts that purport to provide such instructions are (in this respect) mistaken. Achinstein writes:

Now let me offer a conjecture. Suppose, following in the footsteps of Hempel and Salmon, you formulate a set of objective, nonpragmatic criteria that you think all scientific explanations must satisfy to be evaluated highly. These criteria will be universal in the sense that they are not to vary from one explanation to the next, but are to be ones applicable to all scientific explanations. They are also universal in the sense that they are not to incorporate specific empirical assumptions or presuppositions that might be made by scientists in one field or context but not another. So they might include the use of laws, causal factors, and quantitative hypotheses, the satisfaction of some criterion of unification or simplicity, and so forth.
My conjecture is that whatever set of objective, nonpragmatic, universal criteria you propose, you will be able to find or construct counterexamples to it, both as a set of necessary conditions and as a set of sufficient conditions. (2010, p. 137)

Achinstein illustrates this claim with reference to Rutherford's 1911 explanation of alpha particle scattering. Rutherford's explanation appealed to assumptions about atomic structure—in particular, that the positive charge of an atom is concentrated in its nucleus, whose volume is small in comparison with the total volume of the atom—to derive a quantitative expression for the magnitude of scattering at various angles. According to Achinstein, other competing explanations (e.g., an explanation which just gave the quantitative expression governing scattering but did not connect this to claims about atomic structure) can satisfy the various traditional criteria for explanatory goodness found in the philosophy of science literature (such explanations may have a DN structure, describe causes, be unifying etc.) but will nonetheless be less good than Rutherford's. Rutherford's explanation is good (or as good as it is) because it provides an explanation “at the subatomic level of matter in a way that physicists at the time were interested in explaining scattering” (2010, p. 136, italics in original). In other words, to explain the respects in which Rutherford's explanation is good (or better than competitors) we must make irreducible reference to the interests of physicists at the time. In this sense, Achinstein's account of “good explanation” is, as he says, “strongly pragmatic”.

His “conjecture” nicely captures much (although perhaps not all) of what is at issue between pragmatic and traditional, non-pragmatic accounts of explanation. The central issue is whether one can capture the respects in which Rutherford's explanation is better than alternatives in terms of an explanatory relation that can be specified independently of the interests of (and perhaps other psychological facts concerning) particular audiences and also independently of irreducibly “contextual” facts (such as the claim that in this case a good explanation requires reference to the subatomic level, but there is nothing more general to be said, independently of facts about people's interests, about why explanations at this level are preferable). A convincing argument for the second alternative would presumably need (at least) to examine the existing traditional accounts and show they are unsuccessful. This is a project Achinstein undertakes in his (1983)—unsurprisingly, judgments about whether he makes the case for the failure of objectivist accounts differ.

6.4 Concluding Reflections on Pragmatic Theories

So far we have been treating “pragmatic” and “traditional” accounts as diametrically opposed possibilities. This corresponds to how these accounts are usually presented, both by their defenders and detractors. However, it is worthwhile (and provides additional insight into both approaches) to consider the possibility that some of the ideas associated with each might be combined, thus enlarging the space of possible approaches to explanation. First, we might distinguish the claim that explanation has irreducibly “contextual” elements from the claim that these contextual elements must be understood in terms of facts about the psychology (interests etc.) of the parties to the explanation.
An alternative possibility is that explanation is indeed irreducibly contextual, but that these contextual elements should be understood non-psychologically—roughly in terms of the role of particular empirical facts in explanation, where the relevance of these facts resists capture by means of the resources employed in the DN and other traditional models (that is, it resists capture in terms of relationships, like deductive entailment, statistical relevance and so on). A possible illustration is provided by Achinstein's own example of the explanation of the scattering of alpha particles in terms of facts about nuclear structure. As noted above, Achinstein himself thinks that in this case the goodness of the explanation is context-dependent because it depends on psychological facts: in particular, the goodness of the explanation reflects the fact that physicists are particularly “interested” in explanations appealing to nuclear structure. An alternative possibility is that the explanation has irreducibly “contextual” elements in the sense that there is something about the empirical details of this particular case that makes facts about nuclear structure explanatorily relevant to scattering but where this relevance cannot be fully captured in terms of the abstract, structural features (Achinstein's “objective”, nonpragmatic, universal criteria) on which traditional models of explanation focus. Thus this possibility involves a non-psychological notion of “contextual” that contrasts with the idea that the explanatory relation can be specified in a “content-independent” way. To spell out this notion of content-independence, consider the DN model. This model is content-independent in the sense that it claims that as long as a certain abstract structural relationship holds between explanans $(C_i, L_i)$ and explanandum $E$, it does not matter what specifically one fills in for $C_i$, $L_i$, and $E$—the resulting structure is an explanation. A contextualist about explanation in the non-psychological sense would claim instead that for whatever content-independent candidate for the explanatory relation we specify (whether that specification is in terms of deductive or probabilistic relationships or anything else similarly formal and abstract or with similar aspirations to universality) there will be examples instantiating this structure that are explanatory and examples that are not explanatory—in this sense the particular content that we fill in for the candidate explanans matters to whether we have an explanation.

An additional analogy may help to flesh out this idea. John Norton (e.g., forthcoming) has advocated in a series of papers what he calls a “material theory of induction”. His view is that the reliability of various inductive inferences is dependent on associated specific empirical (“material”) assumptions in a way that precludes the formulation of any universal logic of induction—there is no universal form of an inductive argument that ensures reliability, regardless of the particular content that goes into that argument. However, it is not part of Norton's view that inductive support is somehow a subjective matter or relative to the interests etc. of particular audiences—it is empirical facts of a non-psychological sort (except of course when the evidential relations of interest concern psychological hypotheses) that undergird evidential relationships.
We might say that on his view inductive inference is “contextual” in the sense that it is not content-independent but that it also does not require a psychologistic characterization. The possibility under consideration is that a similar claim might be true for explanation. Perhaps it is true, for example, that in order to capture the respects in which Rutherford's explanation is a good one, one needs to invoke, in addition to general, relatively content-independent requirements about unification, derivability from laws and so on, constraints having to do with more specific “local” material facts about atomic structure, in the sense that nothing will count as an explanation of alpha scattering that does not invoke such facts and that other explanations with the same form not invoking atomic structure will not count as explanatory. But perhaps these additional constraints have to do with facts about what the world is like rather than, as Achinstein suggests, facts about what physicists of the time were most interested in. A view of this sort might capture (or concede) some of the claims made by pragmatic approaches about the role of contextual elements in explanation but would avoid some of the subjective or psychologistic tendencies in such approaches. It would be “contextual” in the sense that Norton's material theory of induction is contextual.

A closely related thought is that if one is inclined to incorporate contextual elements into the theory of explanation, there remains a range of possibilities about how they might be combined with more universalistic elements. As suggested above, rather than thinking of these two sets of elements as simply standing in opposition to each other, it may be better to think in terms of the two working together in a synergistic way. As an illustration, consider the notion of unification. It may be that we cannot provide an adequate characterization of this notion and its role in explanation in purely formal, completely content-independent terms—e.g., in such terms as deriving many conclusions from a few basic assumptions or replacing theories with many free parameters with theories that have only a few such parameters. Nonetheless it may be true that once local material or empirical constraints are used to restrict the class of candidate theories to be compared with respect to the unification they achieve, something like counting basic assumptions or number of free parameters (or more plausibly something in the same spirit but more sophisticated) furnishes useful information about degree of unification achieved. Again the analogy with theories of inductive reasoning is suggestive. It is certainly not the case that all attempts to provide formal or general theories in this area are misguided or doomed to failure—the various treatments of statistical inference and machine learning are obvious counterexamples to this suggestion. On the other hand, the successful theories in this area are not completely universal or content-independent; instead, in many cases they yield results that seem sensible or normatively correct in a certain range of applications or when certain empirical background conditions are satisfied but not in other situations. In other words, such theories are both sensitive to context and contain elements that look objective and structural. Perhaps something like this will turn out to be true of “explanation”.

Selected Readings:
van Fraassen (1980), especially Chapter Six, and Achinstein (1983) are classic statements of pragmatic approaches to explanation. These pragmatic accounts are discussed and criticized in Salmon (1989). van Fraassen's account is also discussed in Kitcher and Salmon (1987). De Regt and Dieks (2005) is a recent defense of what the authors describe as a “contextual” account of scientific understanding and which engages with some of the themes in the “pragmatics of explanation” literature.

7. Conclusions, Open Issues, and Future Directions

What can we conclude from this recounting of some of the more prominent recent attempts to construct models of scientific explanation? What important issues remain open and what are the most promising directions for future work? Of course, any effort at stock-taking will reflect a particular point of view, but with this caveat in mind, several observations seem plausible, even if not completely uncontroversial.

7.1 The Role of Causation

The first concerns the role of causal information in scientific explanation. It is a plausible, although by no means inevitable, judgment[28] that many of the difficulties faced by the models described above derive from their reliance on what appear to be inadequate treatments of causation and causal relevance. The problems of explanatory asymmetries and explanatory irrelevance described in section 2.5 seem to show that the holding of a law (understood as a regularity) between $C$ and $E$ is not sufficient for $C$ to cause $E$; hence not a sufficient condition for $C$ to figure in an explanation of $E$. If the argument of section 3.3 is correct, the fundamental problem with the SR model is that statistical relevance information is insufficient to fully capture causal information in the sense that different causal structures can be consistent with the same information about statistical relevance relationships (a schematic illustration follows this paragraph). Similarly, the CM model faces the difficulty that information about causal processes and interactions is also insufficient to fully capture causal relevance relations and that there is a range of cases in which causal relationships hold between $C$ and $E$ (and hence in which $C$ figures in an explanation of $E$) although there is no connecting causal process between $C$ and $E$. Finally, a fundamental problem with unificationist models is that the content of our causal judgments does not seem to fall out of our efforts at unification, at least when unification is understood along the lines advocated by Kitcher. For example, as discussed above, considerations having to do with unification do not by themselves explain why it is appropriate to explain effects in terms of their causes rather than vice-versa. At the very least these observations suggest that progress in connection with “scientific explanation” may require more attention to the notion of causation and a more thorough-going integration of discussions of explanation with the burgeoning literature on causation, both within and outside of philosophy.[29] Counterfactual accounts of causation may be promising in this connection (cf. Woodward, 2003). Does this mean that a focus on causation should entirely replace the project of developing models of explanation or that philosophers should stop talking about explanation and instead talk just about causation? Despite the centrality of causation in explanation, it is arguable that completely subsuming the latter into the former loses connections with some important issues.
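To make the statistical relevance point concrete, here is the promised schematic; the example is a standard one from the causal modeling literature rather than Salmon's own. Consider a causal chain and a common-cause structure,

\[ C \rightarrow I \rightarrow E \qquad \text{and} \qquad C \leftarrow I \rightarrow E. \]

Under natural parameterizations both structures yield exactly the same statistical relevance facts: $C$ is statistically relevant to $E$, and $I$ screens $C$ off from $E$, i.e.,

\[ \Pr(E \mid I, C) = \Pr(E \mid I). \]

Since the two structures support quite different causal explanations of $E$, statistical relevance information by itself underdetermines causal, and hence explanatory, structure.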
Returning to the question of whether explanation can simply be subsumed into causation: for one thing, causal claims themselves seem to vary greatly in the extent to which they are explanatorily deep or illuminating. Causal claims found in Newtonian mechanics seem deeper or more satisfying from the point of view of explanation than causal claims of “the rock broke the window” variety. It is usually supposed that such differences are connected to other features—for example, to how general, stable, and coherent with background knowledge a causal claim is. However, as we have noted, not all kinds of generality, stability etc. seem explanatorily relevant (or connected to explanatory goodness). So even if one focuses only on causal explanation, there remains the important project of trying to understand better what sorts of distinctions among causal claims matter for goodness in explanation. To the extent this is so, the kinds of concerns that have animated traditional treatments of explanation don't seem to be entirely subsumable into standard accounts of causation.

There is also the important question of whether all legitimate forms of why-explanation are causal. For example, some writers (e.g. Nerlich, 1979) contend that there is a variety of physical explanation which is “geometrical” rather than causal, in the sense that it consists in explaining phenomena by appealing to the structure of spacetime rather than to facts about forces or energy/momentum transfer. (Nerlich takes causal explanations in physics to have to do with the latter.) According to Nerlich, explaining the trajectory followed by a free particle by noting that it is following a geodesic in spacetime is an illustration of a geometrical rather than a causal explanation. A really satisfying theory of explanation should provide some principled answer to the question of whether all why-explanation must be causal (and according to what notion of causal this is so or not so), rather than just assuming an affirmative (or negative) answer to this question. Again, to the extent that there are non-causal forms of explanation, explanation will remain a topic that is at least somewhat independent of causation.

7.2 Explanation and Other Epistemic Goals

Noretta Koertge (1992) noted that although the literature on explanation is immense, comparatively little attention has been paid, in the construction of the various competing models of explanation, to the question of what they are to be used for or what their larger point or purpose is (other than capturing “our” notion of explanation). Relatedly, writers on explanation have not always paid adequate attention to how explanation itself is connected to or interacts with (or is distinct from) other goals of inquiry—for example, what the connection is between explanatory goodness and other frequently proposed goals for inquiry such as evidential support, prediction, control of nature, simplicity, and so on. One result is that it is sometimes unclear how to assess the significance of our intuitive judgments about the goodness of various explanations or to determine what turns on our giving one judgment rather than another. For example, as we have noted, most people judge intuitively that one cannot explain the height of a pole by appealing to the length of its shadow. However, a determined defender of the DN model (e.g. Hempel, 1965, pp. 353–4) may well ask why we should be so impressed by such intuitive judgments.
Perhaps our pre-analytic assessment is confused or mistaken in some way or perhaps it reflects merely pragmatic considerations that should have no place in the theory of explanation. One way to respond to this skepticism would be to provide a non-question-begging account of what of importance would be lost or left out if we failed to distinguish between explanations of shadow lengths in terms of pole heights and “explanations” running in the opposite direction. (Note that to the extent that we are interested merely in prediction, the two inferences appear to be on a par. “Non-question-begging” means that we don't just say that the height causes the shadow and not vice-versa, but that we provide some further explication of what this difference consists in and why the difference matters.) One possible answer would appeal to the epistemic goal of having information relevant to manipulation and control; one may manipulate the length of the shadow by, among other things, manipulating the height of the pole but not conversely. This difference is real regardless of one's intuitions about explanation in the two cases[30]. Regardless of what one thinks about this particular answer, the more general point is that one way forward in assessing competing models of explanation is to focus less (or not just) on whether they capture our intuitive judgments and more on the issue of whether and why the kinds of information they require are valuable (and attainable), and how this information relates to other goals we value in inquiry.

As another illustration, consider the CM model. Underlying this model is presumably some judgment to the effect that tracing causal processes and their interactions is a worthy goal of inquiry. Now of course one might try to defend this judgment simply by claiming that the identification of causes is an important goal and that causal process theories yield the correct account of cause. But a more illuminating and less question-begging way of proceeding would be to ask how this goal relates to other epistemic values. For example, what is the connection between the goal of identifying causal processes and constructing unified theories? Or between identifying causal processes and the discovery of information that is relevant to prediction or to manipulation and control? Are these the same goals? Independent but complementary goals? Competing goals in the sense that satisfaction of one may make it harder to satisfy the other? Obviously, one may ask similar questions about the goal of unification.

The need for treatments of explanation that relate this notion more adequately to other concepts and goals is particularly salient in connection with the role of laws in explanation, which is another item on the agenda for future work in this area. The account of laws that is currently regarded as the most promising by many philosophers is the Mill-Ramsey-Lewis (MRL) theory. According to this theory, laws are those generalizations which figure as axioms or theorems in the deductive systemization of our empirical knowledge that achieves the best combination of simplicity and strength (where strength has to do with the range of empirical truths that are deducible)[31].
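As a toy rendering of the MRL idea (the scoring function, weight, and measures below are illustrative placeholders, not part of the official formulation): among the true deductive systems $S$ over our empirical knowledge, the laws are the generalizations figuring as axioms or theorems in the system(s) that maximize a balance such as

\[ \text{score}(S) = \text{strength}(S) - \lambda \cdot \text{complexity}(S), \]

where strength measures the range of empirical truths deducible from $S$, complexity measures something like the number and length of the axioms, and $\lambda$ fixes the trade-off between the two. How to make these measures precise and language-independent is a well-known difficulty for the view, but the schema brings out the structural parallel with unification.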
It is natural to connect this conception of laws with unificationist approaches to explanation: if laws are generalizations that play a central role in the achievement of simple (and presumably unified) deductive systemizations, then by appealing to laws in explanation, we achieve explanatory unification—this makes it intelligible why it is desirable that explanations invoke laws[32]. If an account along these lines could be made to work we would have a sort of integrated story about laws and explanation that is largely lacking in the DN account—a story about what laws are that is directly connected to an idea about the point of explanation. Of course there remain real problems (some of which are discussed above) with the unificationist account of explanation and, for that matter, with the MRL theory of laws[33], but the integrated account that would result from putting the two together nonetheless might be taken to illustrate the sort of thing we should be aiming at.

7.3 A Single Model of Explanation?

Yet another general issue concerns the extent to which it is possible to construct a single model of explanation that fits all areas of science. It is uncontroversial that explanatory practice—what is accepted as an explanation, how explanatory goals interact with others, what sort of explanatory information is thought to be achievable, discoverable, testable etc.—varies in significant ways across different disciplines. Nonetheless, all of the models of explanation surveyed above are “universalist” in aspiration—they claim that a single, “one size” model of explanation fits all areas of inquiry in so far as these have a legitimate claim to explain. Although the extreme position that explanation in biology or history has nothing interesting in common with explanation in physics seems unappealing (and in any case has attracted little support), it seems reasonable to expect that more effort will be devoted in the future to developing models of explanation that are more sensitive to disciplinary differences. Ideally, such models would reveal commonalities across disciplines but they should also enable us to see why explanatory practice varies as it does across different disciplines and the significance of such variation. For example, as noted above, biologists, in contrast to physicists, often describe their explanatory goals as the discovery of mechanisms rather than the discovery of laws. Although it is conceivable that this difference is purely terminological, it is also worth exploring the possibility that there is a distinctive story to be told about what a mechanism is, as this notion is understood by biologists, and how information about mechanisms contributes to explanation.

A closely related point is that at least some of the models described above impose requirements on explanation that may be satisfiable in some domains of inquiry but are either unachievable (in any practically interesting sense) in other domains or, to the extent that they may be achievable, bear no discernible relationship to generally accepted goals of inquiry in those domains. For example, we noted above that many scientists and philosophers hold that there are few if any laws to be discovered in biology and the social and behavioral sciences. If so, models of explanation that assign a central role to laws may not be very illuminating regarding how explanation works in these disciplines.
As another example, even if we suppose that the partition into objectively homogeneous reference classes recommended by the SR model is an achievable goal in connection with certain quantum mechanical phenomena, it may be that (as suggested above) it is simply not a goal that can be achieved in a non-trivial way in economics and sociology, disciplines in which causal inference from statistics also figures prominently. In such disciplines, it may be that additional statistically relevant partitions of any population or subpopulation of interest will virtually always be possible, so that the activity of finding such partitions is limited only by the costs of gathering additional information. A similar assessment may hold for most applications of the CM model to the social sciences.

Bibliography

• Achinstein, P., 1983, The Nature of Explanation, New York: Oxford University Press.
• Achinstein, P., 2010, Evidence, Explanation, and Realism: Essays in Philosophy of Science, New York: Oxford University Press.
• Barnes, E., 1992, ‘Explanatory Unification and the Problem of Asymmetry’, Philosophy of Science, 59: 558–71.
• Braithwaite, R., 1953, Scientific Explanation, Cambridge: Cambridge University Press.
• Bromberger, S., 1966, ‘Why Questions’, in Mind and Cosmos: Essays in Contemporary Science and Philosophy, R. Colodny (ed.), Pittsburgh: University of Pittsburgh Press.
• Cartwright, N., 1979, ‘Causal Laws and Effective Strategies’, Noûs, 13: 419–437.
• Cartwright, N., 1983, How the Laws of Physics Lie, Oxford: Clarendon Press.
• Cartwright, N., 2004, ‘From Causation to Explanation and Back’, in B. Leiter (ed.), The Future for Philosophy, Oxford: Oxford University Press.
• De Regt, H. and Dieks, D., 2005, ‘A Contextual Account of Scientific Understanding’, Synthese, 144: 137–170.
• Dowe, P., 2000, Physical Causation, Cambridge: Cambridge University Press.
• Earman, J., 1986, A Primer on Determinism, Dordrecht: Reidel.
• Friedman, M., 1974, ‘Explanation and Scientific Understanding’, Journal of Philosophy, 71: 5–19.
• Gardiner, P., 1952, The Nature of Historical Explanation, Oxford: Oxford University Press.
• Gardiner, P. (ed.), 1959, Theories of History, New York: The Free Press.
• Goodman, N., 1955, Fact, Fiction, and Forecast, Cambridge, MA: Harvard University Press.
• Greeno, J., 1970, ‘Evaluation of Statistical Hypotheses Using Information Transmitted’, Philosophy of Science, 37: 279–93; reprinted in Salmon, 1971b.
• Hall, N., 2004, ‘Two Concepts of Causation’, in Causation and Counterfactuals, J. Collins, N. Hall, and L. Paul (eds.), Cambridge, MA: MIT Press, pp. 225–276.
• Hausman, D., 1998, Causal Asymmetries, Cambridge: Cambridge University Press.
• Hausman, D. and Woodward, J., 1999, ‘Independence, Invariance, and the Causal Markov Condition’, British Journal for the Philosophy of Science, 50: 521–583.
• Hempel, C., 1942, ‘The Function of General Laws in History’, Journal of Philosophy, 39: 35–48; reprinted in Hempel 1965a, pp. 231–244.
• Hempel, C., 1965a, Aspects of Scientific Explanation and Other Essays in the Philosophy of Science, New York: Free Press.
• Hempel, C., 1965b, ‘Aspects of Scientific Explanation’, in Hempel 1965a, pp. 331–496.
• Hempel, C. and Oppenheim, P., 1948, ‘Studies in the Logic of Explanation’, Philosophy of Science, 15: 135–175; reprinted in Hempel 1965a, pp. 245–290.
• Janzing, D., Mooij, J., Zhang, K., Lemeire, J., Zscheischler, J., Daniusis, P., Steudel, B. and Schölkopf, B., 2012, ‘Information-geometric Approach to Inferring Causal Directions’, Artificial Intelligence, 182–183: 1–31.
• Hitchcock, C., 1995, ‘Discussion: Salmon on Explanatory Relevance’, Philosophy of Science, 62: 304–20.
• Jeffrey, R., 1969, ‘Statistical Explanation vs. Statistical Inference’, in N. Rescher (ed.), Essays in Honor of Carl G. Hempel, Dordrecht: D. Reidel; reprinted in Salmon, 1971b.
• Kitcher, P., 1989, ‘Explanatory Unification and the Causal Structure of the World’, in Scientific Explanation, P. Kitcher and W. Salmon (eds.), Minneapolis: University of Minnesota Press, pp. 410–505.
• Kitcher, P. and Salmon, W., 1987, ‘Van Fraassen on Explanation’, Journal of Philosophy, 84: 315–330.
• Koertge, N., 1992, ‘Explanation and Its Problems’, British Journal for the Philosophy of Science, 43: 85–98.
• Kyburg, H., 1965, ‘Comment’, Philosophy of Science, 32: 147–51.
• Lewis, D., 1973a, Counterfactuals, Cambridge, MA: Harvard University Press.
• Lewis, D., 1973b, ‘Causation’, Journal of Philosophy, 70: 556–67; reprinted with Postscripts in Lewis, 1986, pp. 159–213.
• Lewis, D., 1986, Philosophical Papers, Volume II, Oxford: Oxford University Press.
• Lewis, D., 2000, ‘Causation as Influence’, Journal of Philosophy, 97: 182–197.
• Lien, Y. and Cheng, P., 2000, ‘Distinguishing Genuine from Spurious Causes: A Coherence Hypothesis’, Cognitive Psychology, 40: 87–137.
• Lombrozo, T., 2010, ‘Causal-Explanatory Pluralism: How Intentions, Functions, and Mechanisms Influence Causal Ascriptions’, Cognitive Psychology, 61: 303–32.
• Mitchell, S., 1997, ‘Pragmatic Laws’, PSA 96 (Supplement to Philosophy of Science, 64(4)): S468–S479.
• Morrison, M., 2000, Unifying Scientific Theories, Cambridge: Cambridge University Press.
• Nagel, E., 1961, The Structure of Science: Problems in the Logic of Scientific Explanation, New York: Harcourt, Brace and World.
• Nerlich, G., 1979, ‘What Can Geometry Explain?’, British Journal for the Philosophy of Science, 30: 69–83.
• Norton, J., forthcoming, ‘A Material Dissolution of the Problem of Induction’, Synthese.
• Pearl, J., 2000, Causality: Models, Reasoning and Inference, Cambridge: Cambridge University Press.
• Pitt, J. (ed.), 1988, Theories of Explanation, New York: Oxford University Press.
• Popper, K., 1959, The Logic of Scientific Discovery, London: Hutchinson.
• Psillos, S., 2002, Causation and Explanation, Stocksfield, UK: Acumen Publishing.
• Railton, P., 1978, ‘A Deductive-Nomological Model of Probabilistic Explanation’, Philosophy of Science, 45: 206–26.
• Railton, P., 1981, ‘Probability, Explanation, and Information’, Synthese, 48: 233–56.
• Ruben, D. (ed.), 1993, Explanation, Oxford: Oxford University Press.
• Salmon, W., 1971a, ‘Statistical Explanation’, in Statistical Explanation and Statistical Relevance, W. Salmon (ed.), Pittsburgh: University of Pittsburgh Press, pp. 29–87.
• Salmon, W. (ed.), 1971b, Statistical Explanation and Statistical Relevance, Pittsburgh: University of Pittsburgh Press.
• Salmon, W., 1984, Scientific Explanation and the Causal Structure of the World, Princeton: Princeton University Press.
• Salmon, W., 1989, Four Decades of Scientific Explanation, Minneapolis: University of Minnesota Press.
• Salmon, W., 1994, ‘Causality Without Counterfactuals’, Philosophy of Science, 61: 297–312.
• Salmon, W., 1997, ‘Causality and Explanation: A Reply to Two Critiques’, Philosophy of Science, 64: 461–477.
• Salmon, W. and Kitcher, P. (eds.), 1989, Minnesota Studies in the Philosophy of Science, Vol. 13: Scientific Explanation, Minneapolis: University of Minnesota Press.
• Schaffer, J., 2000, ‘Causation by Disconnection.’, Philosophy of Science, 67: 285–300. • Scriven, M., 1959, ‘Truisms as the Grounds of Historical Explanations.’, In Gardiner (ed.). • Scriven, M., 1962, ‘Explanations, Predictions, and Laws’, in Scientific Explanation, Space, and Time (Minnesota Studies in the Philosophy of Science: Vol. 3), H. Feigl and G. Maxwell (eds), 170–230. Minneapolis: University of Minnesota Press. • Sober, E., 1983, ‘Equilibrium Explanation’, Philosophical Studies, 43: 201–210. • Sober, E., 1999, ‘The Multiple Realizability Argument Against Reductionism’ Philosophy of Science, 66: 542–564. • Sober, E., 2003, ‘Two Uses of Unification’, in F. Stadler, ed., Institute Vienna Circle Yearbook 2002, Dordrecht: Kluwer. • Spirtes, P. Glymour, C. and Scheines, R., 1983, Causation, Prediction and Search, New York: Springer-Verlag. Second Edition, 2000. Cambridge: MIT Press. • Woodward, J., 1989, ‘The Causal/Mechanical Model of Explanation.’, In Scientific Explanation. Minnesota Studies in the Philosophy of Science, volume 13, P. Kitcher and W. Salmon (eds.), 357–383. • Woodward, J., 2000, ‘Explanation and Invariance in the Special Sciences.’, British Journal for the Philosophy of Science, 51: 197–254. • Woodward, J., 2003, Making Things Happen: A Theory of Causal Explanation, Oxford: Oxford University Press. • Woodward, J., 2003, Making Things Happen: A Theory of Causal Explanation, New York: Oxford University Press. • Woodward, J., 2006, “Sensitive and Insensitive Causation”, The Philosophical Review.115: 1–50. • Van Fraassen, B., 1980, The Scientific Image. Oxford: Oxford University Press Other Internet Resources • Theories of Explanation”, by G. Randolph Mayes (CSU/Sacramento), in the Internet Encyclopedia of Philosophy (edited by J. Fieser) Copyright © 2014 by James Woodward <> Please Read How You Can Help Keep the Encyclopedia Free
Suppose I have only real-number problems, where I need to find solutions. By what means could knowledge about complex numbers be useful? Of course, the obvious applications are: • contour integration • understanding the radius of convergence of power series • algebra with $\exp(ix)$ instead of $\sin(x)$ No need to elaborate on these ones :) I'd be interested in some more suggestions! In a way, this question is asking how to show the advantage of complex numbers for the real-number mathematics of (scientific) everyday problems. Ideally these examples should provide considerable insight and not just reformulation. EDIT: These examples are the most real-world I could come up with. I could imagine an engineer doing work that leads to some real-world product in a few months, who might need integrals or sine/cosine. Basically I'm looking for examples that can be shown to a large audience of laymen for the work they already do. Examples like quantum mechanics are hard to justify, because due to many-particle problems QM rarely makes any useful predictions (where experiments aren't needed anyway). Anything closer to application? possible duplicate of Interesting results easily achieved using complex numbers –  Guess who it is. Nov 25 '12 at 12:33 You should edit your question, then. Change "real number" to "real world", so people won't mistake it for the real numbers. –  Asaf Karagila Nov 25 '12 at 13:34 Control theory! Fluid dynamics! Differential equations! Electrical engineering! Signal processing! Quantum mechanics! en.wikipedia.org/wiki/Complex_number#Applications –  Rahul Nov 25 '12 at 13:37 This question is quite ambiguously phrased: all three applications listed in it belong to pure mathematics (and two answers posted so far address some aspects of this) but afterwards the OP claims to be interested in "real world" applications, where "real world" seems to be more or less equivalent to "useful to an engineer" (and @Rahul's comment answers that beautifully). Please make up your mind. –  Did Nov 25 '12 at 14:09 ?? Complain? Well... If ever I had fancied answering the question, your last comment is a quite effective deterrent. (Update: upon reading your comment, I was vaguely wondering when I had previously met this tone on the site... and behold!) –  Did Nov 26 '12 at 10:52 4 Answers The trouble with maths is that, just like in the case of a living organism, all its various apparently unrelated parts are in reality interconnected. For instance, Ramanujan's prime-counting function, belonging to the field of number theory, turned out to be ultimately wrong because, in a veiled or hidden manner, it was equivalent to saying that the Riemann zeta function does not possess any complex zeroes: which, as it happens, is false. He thought that it would always predict the exact number of primes less than a given number, and that any error, were it to even exist, would be at worst bounded. Turns out he was wrong on both counts. Which, of course, does not mean that it cannot be used as a very good approximation, but the precision and certainty for which he was aiming proved in the end to be untouchable. And that's just one random example among many of the surprising way in which the various fields of math eventually reveal themselves to be tied together. Hope this helps.
I find the use of complex numbers extremely helpful in problems of plane elementary geometry, in particular when there are symmetries present which have to be exploited. In the "complex coordinate" $z$ of a point both real coordinates are encoded, you have the full vector algebra of the plane at your disposal, rotations about angles like $90^\circ$ or $120^\circ$ are obtained essentially for free, and on, and on. One basic example is with eigenvalues and eigenvectors of matrices. Often real matrices are not diagonalisable over $\mathbb{R}$ because they have imaginary eigenvalues, and knowing things about these eigenvalues can tell us a lot about the transformation that the matrix represents. The obvious example is the $2D$ rotation matrix $\begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta &\cos\theta \end{pmatrix}$ with eigenvalues $e^{\pm i\theta}$, which tell us the angle of rotation that this real matrix gives us. Admittedly a simple example but I'm sure there are plenty more. One other result that comes to mind is in quantum mechanics! A big area of science right now, it deals with complex wave functions like you wouldn't believe (or maybe you would, it seems like you've done enough maths to have taken a course or two in quantum mechanics!) A lot of problems have complex solutions, and certainly the relation of $e^{i\theta}$ and trig is used to no end, particularly in solving second order differential equations (which the Schrödinger equation frequently reduces to). Probably the biggest way that the complex results are translated back to the real world is that the probability of finding a particle in a given region is the integral over that region of the wavefunction's magnitude squared. The complex wave function is reduced to a real integral to give us a probability, which is certainly a real-world result! A lot of interesting solutions, known as stationary states of the Schrödinger equation, give us wavefunctions whose time dependence looks like $e^{-iE_nt/\hbar}$. Here $E_n$ is the energy of the state and $\hbar$ is Planck's (reduced) constant. The point is, the magnitude of these solutions is independent of time. This means that if a particle has this wavefunction, then we know exactly what its energy is for all time. Further, since the Schrödinger equation is linear, we can superpose solutions to get more solutions, and in fact these stationary states form a basis, so we can find the wavefunction for any particle as a combination of these stationary states. If I were to explain that to an engineer, how can I show the need for that? Even quantum mechanics, as fundamental as it is, is probably little used by engineers who (in case they know QM) would argue that theory can't predict anyway, so they rather use experiments. –  Gerenuk Nov 26 '12 at 10:13 QM in itself may be little used, but it has some hugely important applications: en.wikipedia.org/wiki/Quantum_mechanics#Applications I would highlight particularly transistors, which are required for reasonably sized computers, and lasers, which also have many other applications. As to saying QM can't predict, I'm afraid you're wrong! QM tells us that the world isn't deterministic, so that in principle a particular event is impossible to predict exactly at a quantum level. However it makes well-defined predictions in terms of probabilities which work extremely well on large scales.
–  Tom Oldfield Nov 26 '12 at 13:45 This was already mentioned by Rahul but I think it deserves an answer in its own right. Digital signal processing of 1d (sound) and 2d (images) real data would take incredible amounts of time and would be much harder to understand if it weren't for the discrete Fourier transform and its fast implementations. This field is very real, and complex numbers play a major role in it.
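A minimal numpy sketch of the point made in this last answer (my addition, not part of the original thread; the signal and the cutoff are arbitrary illustrative choices): the DFT of a real signal is complex, and simple filtering amounts to editing that complex spectrum.

```python
import numpy as np

# Real 1-D signal: two tones plus noise, sampled at fs Hz
fs = 1000
t = np.arange(0, 1.0, 1/fs)
sig = np.sin(2*np.pi*50*t) + 0.5*np.sin(2*np.pi*120*t) + 0.2*np.random.randn(t.size)

spec = np.fft.rfft(sig)                 # complex spectrum of the real signal
freq = np.fft.rfftfreq(t.size, 1/fs)

spec[freq > 80] = 0                     # crude low-pass: zero all bins above 80 Hz
filtered = np.fft.irfft(spec, n=t.size) # back to a real signal; the 120 Hz tone is gone
```

The phases of the complex coefficients carry the alignment information that a purely real, magnitude-only description would lose.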
Atom Stability 1. Apr 28, 2007 #1 Why, in QM, does the electron not fall toward the nucleus? After all, the only force between nucleus and electron is attractive (- electron and + nucleus). Is it the same reason the moon does not fall to the earth? 2. jcsd 3. Apr 28, 2007 #2 User Avatar Staff Emeritus Science Advisor Education Advisor Please read our FAQ in the General Physics forum. 4. Apr 28, 2007 #3 Electrons really do fall to the nucleus. For some time they are in equilibrium on an orbit, exactly like the moon. In this equilibrium, attraction is exactly compensated by inertia (centrifugal force). However, rotating electrons lose energy because they emit electromagnetic radiation. Therefore, they actually fall onto the nucleus. The first strange thing is that they do that suddenly, not continuously. The second strange thing is that they do not fall little by little but by finite steps to precisely defined orbits. The last strange thing is that they finally stop falling and do not reach the nucleus. The last level they reach is called the fundamental level. This is explained by the wave-like nature of the electron and is at the basis and origin of quantum mechanics. The other levels are called excited levels. At low temperatures, most atoms are in the fundamental level, where the electron has reached the final stable orbit. In principle the moon could also behave like that, because gravitational waves may also dissipate its energy. However the moon and the earth are such big objects that their wave-like behaviour is totally negligible and not observable. If you want more understanding, you should train and learn in physics and mathematics. Last edited: Apr 28, 2007 5. Apr 28, 2007 #4 User Avatar Staff: Mentor The Bohr-Sommerfeld planetary model of the atom has been dead, dead, dead since the 1920s. Please don't encourage people to think in terms of that model, except as a purely historical exercise. 6. Apr 28, 2007 #5 Ok, I'm a physicist. I know the Bohr-Sommerfeld model.. but it does not explain the atom structure. It states. Nothing else. THAT'S NOT A VALID ANSWER: it's like saying "this is so, because so it is". And the problem is the same in planetary motion; the gravitational force is only attractive... But why does the moon not fall to the earth? I don't want answers such as "there exist strange things, theories that stand tall, etc". Give me the physical reason... Should I think that no one knows it? 7. Apr 28, 2007 #6 User Avatar Staff: Mentor In the case of the moon (which can be described without QM, of course), it is always falling (accelerating) towards the earth. However, it is also moving sideways because of its orbital motion, so it always misses the earth! :biggrin: In the case of the atom, the electron does sometimes "hit the nucleus." QM does not allow us to calculate a planet-like trajectory for the electron. All we can calculate (from solving Schrödinger's equation) is the probability of finding the electron in various locations. It turns out that in general the electron does have a very small probability of being located inside the nucleus, at any instant of time. If it is then possible for the electron to interact with the nucleus, and still satisfy conservation of energy, it can do so. This is called electron capture, and some radioactive nuclei do decay via this process.
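The "very small probability of being located inside the nucleus" mentioned in the post above can be estimated in a few lines. A sketch assuming the hydrogen 1s ground state and a ~0.9 fm proton radius (my numbers and code, illustrative only, not from the thread):

```python
import numpy as np

a0 = 5.29177e-11   # Bohr radius, m
R = 0.88e-15       # approximate proton charge radius, m

# 1s radial probability density: |psi|^2 * 4*pi*r^2,
# with psi_1s(r) = exp(-r/a0) / sqrt(pi * a0**3)
r = np.linspace(0.0, R, 10000)
p = 4.0 * r**2 / a0**3 * np.exp(-2.0 * r / a0)

print(np.trapz(p, r))   # ~6e-15: tiny, but not zero
```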
8. Apr 28, 2007 #7 I don't understand the purpose and the utility of this remark: To answer the question by ClubDogo, which was in a naïve style, it would have been totally meaningless to come at it with the Schrödinger equation and wavefunctions. The main ingredients to answer the question were included in my post, in an allegorical yet useful way. For students with a 30-hour background in quantum mechanics, the translation to the rigorous language is easy. However, they usually ignore basic things like radiation by charged particles and of course quantum field theory. Therefore, it would be a total illusion to think that a more precise language would make a better answer, at this level of a discussion. Finally, the next question is: why can't the fundamental level lose any more energy by radiation? And to answer this question, the BS model would indeed become insufficient. Well, I guess so, but I could have fun this evening thinking about it. Electron capture involves the weak interaction. I think the initial question by ClubDogo was related to the stability of atoms under the electromagnetic interaction only (- electron and + nucleus). This is indeed an important thing to learn and understand in quantum mechanics. I think it is of no real help to involve the weak interaction in the answer. Last edited: Apr 28, 2007 9. Apr 28, 2007 #8 You should understand that the BS model gave a first "explanation" of the atomic levels. The idea was that electrons had a wave-like structure and that stationary states had to be "resonant". However, this was a naïve theory. Obviously this wave had to be described in 3 spatial dimensions and time. Bohr and Sommerfeld and everybody at that time knew that very well. Many people at that time also had a deep understanding of classical mechanics, like Schrödinger and Dirac. It turned out that Schrödinger was the first to come up with a full wave picture for the hydrogen atom, a result that he based on his knowledge of CM. Dirac was soon able to go further. Now, what is the conclusion of this story? I think that, to some extent, we cannot say that the BS theory is dead or that it does not explain the stability of the atoms. We cannot say that it states without explaining. The full consistent quantum theory will not give you any further explanation, although it will give you more aspects as well as other consequences (vacuum fluctuations for example, ...) The stability of the atom in the BS model or in the full QM theory has the same explanation: the stationary states are a resonant structure. In other words: In the simplified BS model as well as in QM, the orbitals are resonant structures (eigenmodes, eigenvectors). In the BS model, this structure is oversimplified, but gave the right levels (by chance). It was practically a simple 1D model. In the QM theory, the structure is almost perfectly described and therefore more predictions are possible. Last edited: Apr 28, 2007 10. Apr 28, 2007 #9 User Avatar Staff Emeritus Science Advisor Education Advisor This seems to be a common occurrence here lately, and I don't know why. Can you show something that actually has a "physical reason", so that we can THEN at least understand what you mean by such a thing? As a "physicist", you of all people should have been aware that at the MOST FUNDAMENTAL LEVEL, all we have for every single phenomenon is a description. Go ahead and pick anything and see if what you think you have "understood" is anything more than a physical description of that phenomenon. 11.
Apr 28, 2007 #10 User Avatar Science Advisor Gold Member Echoing what ZapperZ has said: Physics is a continuing attempt to answer the "why". That is actually done by way of improved descriptions - usually by way of a theory which has predictive power. Yet... when you answer one "why" you end up creating another! There are plenty of questions we will likely never be able to answer the "why" about. Why were you born? We might be able to answer the "how" (descriptive) but not the "why". In sum: It is not a flaw in a theory that it does not "explain" or "describe" in a fashion that answers "why" questions. It is possible that there is no better theory of gravity than General Relativity, for example, and we still do not know "why" we live in 4 spacetime dimensions rather than some other number. 12. Apr 28, 2007 #11 As I have said very often, physics is not about explaining things. Physics is about describing things of the world and their relations with a minimum amount of information. We can think that quantum mechanics and electromagnetism explain atomic physics. It is more correct to think that atomic physics does not need more than these two theories to get a full theoretical description, and that moreover, atomic physics shares QM and EM with many other parts of physics. 13. Apr 28, 2007 #12 User Avatar Staff: Mentor I mentioned electron capture because it illustrates that the electron sometimes really does "fall into the nucleus" in some sense, as described by the QM probability distribution. The fact that it proceeds via the weak interaction doesn't matter; the weak interaction doesn't get the electron "into" the nucleus, as far as I know. 14. Apr 29, 2007 #13 I understood why you mentioned the EC. But an EC depends much more on the state of the nucleus than on the state of the electron. It was good however to remind ClubDogo of the non-zero probability of presence within the nucleus. But, I thought ClubDogo was comparing the (apparent) stability of the moon's orbit with the stability of atoms. Therefore, my preferred answer was back to the basics: without radiative effects, the stability is the consequence of the wave-like nature of the electrons, for any level; with radiative effects, only the fundamental level is absolutely stable; and the next good question is: why does the fundamental level not radiate EM energy? 15. May 2, 2007 #14 User Avatar Science Advisor The Bohr-Sommerfeld theory had lots of problems; in fact both Bohr and Sommerfeld became proponents of the, then new, quantum theory. Among other things, the Bohr-Sommerfeld and Schrödinger/Heisenberg theories were based on very different physical reasoning. Yet the fundamental idea of a stationary state, invented by Bohr, became a key ingredient of modern QM. The stability of the hydrogen atom is virtually guaranteed in QM by the stationary states given by the Schrödinger equation -- naturally, this assumes that the hydrogen-radiation interaction is small. Really, we build in atomic stability from the very beginning in QM. And this stability is fundamentally a quantum effect. Reilly Atkinson 16. May 4, 2007 #15 Maybe your question was intended to be: "If a proton and an electron are stationary at some distance and then they are released, if they are point particles, shouldn't there be a non-zero probability that they don't interact to form the hydrogen atom but, instead, the electron falls directly onto the proton?" I personally don't think yours is a silly question. Personal answer: they are not point particles.
Last edited: May 4, 2007
Eitan Geva Office Location(s): 2000D Phone: 734.763.8012 Research Group • About Modern computational chemistry strives to provide an atomistically detailed dynamical description of fundamental chemical processes. The strategy for reaching this goal generally follows a two-step program. In the first step, electronic structure calculations are used to obtain the force fields that the nuclei are subject to. In the second step, molecular dynamics simulations are used to describe the motion of the nuclei. The first step is always based on quantum mechanics, in light of the pronounced quantum nature of the electrons. However, the second step is most often based on classical mechanics. Indeed, classical molecular dynamics simulations are routinely used nowadays for describing the dynamics of complex chemical systems that involve tens of thousands of atoms. However, there are many important situations where classical mechanics cannot be used for describing the dynamics. Our research targets the most chemically relevant examples, which include: (1) Linear and nonlinear vibrational and electronic spectroscopy. The transition frequencies in these cases are often much larger than kT. Furthermore, the spectral signals can be expressed in terms of optical response functions that lack a well-defined classical limit. (2) Vibrational and electronic relaxation. The quantum nature of the pathways of nonradiative intramolecular energy redistribution within molecules and intermolecular energy transfer between molecules is attributed to the large gap between vibrational and electronic energy levels. (3) Proton and electron transfer reactions. The elementary steps of many complex chemical processes are based on such reactions. Their pronounced quantum nature is attributed to the light mass of protons and electrons, which often gives rise to quantum tunneling and zero-point energy effects. The challenge involved in simulating the quantum molecular dynamics of such systems has to do with the fact that the computational effort involved in solving the time-dependent Schrödinger equation is exponentially larger than that involved in solving Newton's equations. As a result, a numerically exact solution of the Schrödinger equation is not feasible for a system that consists of more than a few atoms. The main research thrust of the Geva group is aimed at developing rigorous and accurate mixed quantum-classical, quasi-classical and semiclassical methods that would make it possible to simulate equilibrium and nonequilibrium quantum dynamics of systems that consist of hundreds of atoms and molecules. We put emphasis on applications to experimentally relevant disordered complex condensed-phase systems such as molecular liquids, which serve as hosts for many important chemical processes. We also specialize in modeling and analyzing different types of time-resolved electronic and vibrational spectra that are used to probe molecular dynamics in those systems, often in collaboration with experimental groups. Representative Publications Hanna G. and Geva, E. (2008) “A computational study of the one and two dimensional infrared spectra of a vibrational mode strongly coupled to its environment: Beyond the cumulant and Condon approximations” J. Phys. Chem. B 112, 12991 Hanna, G. and Geva, E. (2008) “Vibrational energy relaxation of a hydrogen-bonded complex dissolved in a polar liquid via the mixed quantum-classical Liouville method” J. Phys. Chem. B 112, 4048 Shang, J. and Geva, E.
(2007) “A computational study of a single surface-immobilized two-stranded coiled-coil polypeptide” J. Phys. Chem. B 111, 4178 Ka, B.J. and Geva, E. (2006) “Classical vs. quantum vibrational energy relaxation pathways in solvated polyatomic molecules” J. Phys. Chem. A 110, 13131 Ka, B.J. and Geva, E. (2006) “A nonperturbative calculation of nonlinear spectroscopic signals in liquid solution” J. Chem. Phys. 125, 214501 Ka, B.J. and Geva, E. (2006) “Vibrational energy relaxation of polyatomic molecules in liquid solution via the linearized semiclassical method” J. Phys. Chem. A 110, 9555-9567 • Education • Ph.D., Hebrew University of Jerusalem • Research Areas of Interest • Computational/Theoretical Chemistry • Materials Chemistry • Physical Chemistry • Ultrafast Dynamics
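The "exponentially larger" claim in the research description above can be made vivid with a toy count (my illustration; the choice of 10 grid points per degree of freedom is arbitrary): a classical state needs a handful of numbers per particle, while a wavefunction on a grid needs an exponential number of amplitudes.

```python
# Classical state vs. wavefunction storage for N particles in 3D
for N in (2, 10, 20, 30):
    classical = 6 * N            # 3 positions + 3 momenta per particle
    quantum_exp = 3 * N          # 10 points per dimension -> 10**(3N) amplitudes
    print(f"N={N:2d}: {classical:4d} classical numbers vs ~1e{quantum_exp} amplitudes")
```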
Is there a mechanistic-type explanation for how forces work? For example, two electrons repel each other. How does that happen? Other than saying that there are force fields that exert forces, how does the electromagnetic force accomplish its effects? What is the interface/link/connection between the force (field) and the objects on which it acts? Or is all we can say that it just happens: it's a physics primitive? A similar question was asked here, but I'd like something more intuitive if possible. Necessary classic: youtube.com/watch?v=wMFPe-DwULM –  jld Apr 19 '13 at 3:31 I've tried very hard for some form of an intuitive understanding of force, including gravity. At this point I just accept them but I can't say I have any form of intuitive understanding for them. –  Brandon Enright Apr 19 '13 at 6:28 For electro-magnetic forces, we can say "photons", but is that really a better explanation than "force fields"? At least "photons" correctly imply quantization, but I don't think that was your objection against the term "force fields". –  MSalters Apr 19 '13 at 11:33 This is also related. These kinds of fundamental terms in physics are human names for things we have observed in nature. –  Douglas B. Staple Apr 19 '13 at 13:15 The correct way to describe how two electrons repel is quantum electrodynamics, if you really want to know how it works at a deeper level. The photon couples to the electric charge of the electron. –  Dilaton Apr 19 '13 at 13:19 3 Answers A force is the name given to a physical measurement that isn't a physical property of an accelerated mass, but still allows us to predict a value for its acceleration, given the numerical value of the force present. The only interpretation of force which physically matters is how you assign a number to it. This seems like an interesting answer. If I understand it correctly, it seems to say that force isn't quite the point. What's the point is the fact that acceleration occurs. Since acceleration implies an increase in energy in the accelerated entity, the question then becomes (at least in my mind): how does that energy transfer work/happen? –  RussAbbott Apr 19 '13 at 17:43 @RussAbbott I was trying to say that physical measurements are what matters, and how they enable us to create models of the world. Force is just a physical measurement that allows us to predict what we'll measure for the acceleration of a mass. Wanting to know how force manages to do what it does is open to hand-waving make-believe interpretation. All we can say is that if we make these physical measurements, then we'll measure this other physical measurement. –  Larry Harson Apr 19 '13 at 22:17 The interface between the force (field) and the objects is what we call "charge". Electromagnetism, for example, does not exert any force on an electrically neutral object. In a very crude way, you could imagine charge as the hooks on which the springs that mediate the force are hung. One has to be careful with Newton's gravitation, however. It works fine for everyday applications, such as falling bodies etc., to picture the gravitational charge as "mass". A massless object is weightless from this point of view. This is not true in general relativity! There, mass bends space itself, and photons, too, are subject to it, albeit not having any (rest) mass. The similarities between Newtonian gravitation and electromagnetism go quite far.
You could imagine the electromagnetic potential as a mountain at the position of one electron from the point of view of another. The repelling force then acts like a ball rolling up a slope and down again. Quantum mechanics says force is not a physics primitive. It shows the underlying mechanism behind forces. What is a force? It is something that changes the velocity of a particle, via Newton's second law: $$\vec{F}=m\dfrac{d\vec{v}}{dt}$$ Any other appearances of forces can be reduced to this. For example, when we measure a force with a dynamometer, there are actually two forces applied to the same particle, and they cancel each other when the particle reaches the offset equilibrium position. Without any force, the particle would move with the same velocity $\vec{v}=\mathrm{const}$. But quantum mechanics shows a more complicated picture: the particle is distributed in space, and depicted as a wave packet. Its evolution (motion and change) without a force is governed by the Schrödinger wave equation $$i\hbar\dfrac{\partial\Psi}{\partial t}=\dfrac{-\hbar^2}{2m}\nabla^2\Psi\qquad\left[=\dfrac{-\hbar^2}{2m}\dfrac{\partial^2\Psi}{\partial x^2}\quad\text{(in 1D space)}\right]$$ That says the same thing, $\vec{v}=\mathrm{const}$, but in the sense that all velocity components of the wave packet (that is, its Fourier coefficients) evolve at constant rates in time, independently of each other. But that's the mathematical abstraction. Physically, it tells a somewhat different story: the wave function oscillates and flows. The rate of oscillation is what we call energy, and the flow is kept up by the gradient of the phase. Then, what happens when a force appears on the scene? We should add the potential energy of that force to the Schrödinger equation: $$i\hbar\dfrac{\partial\Psi}{\partial t}=\dfrac{-\hbar^2}{2m}\nabla^2\Psi+U\Psi\qquad\left[=\dfrac{-\hbar^2}{2m}\dfrac{\partial^2\Psi}{\partial x^2}+U\Psi\quad\text{(in 1D space)}\right]$$ What happens to the wave function then? It starts to oscillate at a higher rate at some points (where $U>0$), and at the same rate at some others (where $U=0$). Because of that, the phase at the first set of points outruns the phase at the second. And the gradient of the phase tells the wave function to flow away in the direction of the retarding phase. So the particle runs away from the place where the potential energy is high! That's how forces work.
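The picture in this last answer can be simulated directly. A sketch of 1D split-step Fourier propagation (my construction, not from the original answer; units ħ = m = 1, and the grid and potential slope are arbitrary choices): a Gaussian packet placed in a linear potential U = 0.05x drifts toward lower U, exactly as a constant force F = -dU/dx = -0.05 would push it.

```python
import numpy as np

N, L, dt, steps = 2048, 200.0, 0.05, 400
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)

U = 0.05 * x                            # linear potential: constant force -0.05
psi = np.exp(-(x - 20.0)**2 / 10.0)     # Gaussian packet at rest, centered at x = 20
psi /= np.sqrt(np.trapz(abs(psi)**2, x))

expU = np.exp(-1j * U * dt / 2)         # half-step with the potential phase
expK = np.exp(-1j * (k**2 / 2) * dt)    # full kinetic step in Fourier space

for _ in range(steps):
    psi = expU * np.fft.ifft(expK * np.fft.fft(expU * psi))

x_mean = np.trapz(x * abs(psi)**2, x)
print(x_mean)   # ~10: <x> moved from 20 toward lower U, matching dx = -a*t^2/2
```

The potential term advances the phase faster where U is larger, and the packet flows toward the retarded phase, which is precisely the mechanism the answer describes.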
Why is a proton assumed to be always at the center while applying the Schrödinger equation? Isn't it a quantum particle? Self-interactions are not considered in a non-relativistic quantum mechanical treatment, and the hydrogen atom is usually treated that way in a first course. –  Torsten Hĕrculĕ Cärlemän Dec 31 '13 at 8:51 @TorstenHĕrculĕCärlemän : What about the proton being at the center? –  Rajesh D Dec 31 '13 at 8:53 I don't get the fact about it being at the center of a coordinate frame, and it being a quantum particle. You can in fact take any point as the origin, only to complicate the expressions further. It is most natural to hence take the nucleus at the center. –  Torsten Hĕrculĕ Cärlemän Dec 31 '13 at 8:56 @RajeshD The assumption that the proton is stationary is just an approximation used since protons are about 2000 times as massive as the electrons and 2000 is approximately infinity. –  David H Dec 31 '13 at 8:57 @DavidH : Thanks David. That seems very reasonable. –  Rajesh D Dec 31 '13 at 9:00 3 Answers There is a rigorous formal analysis which lets you do this. The true problem, of course, allows both the proton and the electron to move. The corresponding Schrödinger equation thus has the coordinates of both as variables. To simplify things, one usually transforms those variables to the relative separation and the centre-of-mass position. It turns out that the problem then separates (for a central force) into a "stationary proton" equation and a free-particle equation for the COM. There is a small price to pay for this: the mass for the centre-of-mass motion is the total mass - as you'd expect - but the radial equation has a mass given by the reduced mass $$\mu=\frac {Mm}{M+m}=\frac{m}{1+m/M} ,$$ which is close to the electron mass $m$ since the proton mass $M$ is much greater. It's important to note that an exactly analogous separation holds for the classical treatment of the Kepler problem. Regarding self-interactions, these are very hard to deal with without invoking the full machinery of quantum electrodynamics. Fortunately, in the low-energy limits where hydrogen atoms can form, it turns out you can completely neglect them. I assume you're talking of the hydrogen atom; the hamiltonian of the nucleus + electron system is $$ H = \frac{p_e^2}{2 m_e} + \frac{p_n^2}{2 m_n} - \frac{e^2}{|r_e - r_n|}. $$ You can do a change of coordinates (center-of-mass coordinates) $$ \vec{R} = \frac{m_e \vec{r}_e + m_n \vec{r}_n}{m_e+m_n} \\ \vec{r} = \vec{r}_e - \vec{r}_n $$ and find the conjugate momenta to these coordinates: $$ \vec{P} = \vec{p}_e + \vec{p}_n \\ \vec{p} = \frac{m_n \vec{p}_e - m_e \vec{p}_n}{m_e+m_n}. $$ Defining also the reduced mass $\mu$ such that $$ \frac{1}{\mu} = \frac{1}{m_e} + \frac{1}{m_n} $$ and the total mass $M = m_e + m_n$, you can write the hydrogen atom hamiltonian as $$ H = \frac{P^2}{2 M} + \frac{p^2}{2 \mu} - \frac{e^2}{r} = H_{CM} + H_{rel}. $$ In these calculations I always treated the nucleus as a quantum particle; but if you look at $H_{rel} = p^2/2\mu - e^2/r$ and let the mass of the nucleus tend to infinity, you obtain the hydrogen atom hamiltonian usually taught in basic QM courses. Also, you don't have other terms like spin-orbit, j-j couplings etc. because they are relativistic effects that come out of the Dirac equation.
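How small is the correction in practice? A two-line check with rounded CODATA masses (my numbers, not from the answers):

```python
m_e = 9.10938e-31   # electron mass, kg
m_p = 1.67262e-27   # proton mass, kg

mu = m_e * m_p / (m_e + m_p)
print(mu / m_e)     # ~0.999456: the "fixed proton" picture errs by about 0.05%
# The hydrogen energy levels inherit the same factor, since E_n is proportional to mu.
```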
thanks for the explanation @AlexA –  Rajesh D Dec 31 '13 at 9:36 With regard to your first question: A similar (the same?) question you might reasonably ask is: how can we assume that the proton is stationary, at the centre of the problem, since it is surely going to be attracted by the electron and jiggle about a little? This is a question that would be just as valid directed at a classical system --- say, a planet orbiting a star --- as a quantum mechanical one. The solution to this is as described above, by others: the fact that the star/proton is so much more massive than the planet/electron means that it is going to move very little (the acceleration of an object is inversely proportional to its mass, and hence with a large mass we have a very small acceleration, i.e. very little motion), and so the stationary nature of the star/proton is a great approximation. And in fact, we can make the analysis completely rigorous by dealing with relative separations and reduced masses. But the finite mass of the proton means that indeed, the proton won't actually be stationary. However, I'm not sure this is the question you're asking. Your concern was not "isn't the proton a particle of finite mass" but rather "isn't it a quantum particle". The suggestion is that you think the proton should jiggle due to its quantum mechanical nature --- that is, due to the uncertainty principle etc. --- irrespective of the mass of the proton (perhaps I am mistaken about this). In the limit of the proton having infinitely more mass than the electron, the quantum mechanical nature of the proton won't force it to jiggle. In other words, the uncertainty in its position, $\Delta x$, can be made arbitrarily close to zero. This is consistent with the uncertainty principle, since its momentum uncertainty $\Delta p$ (mass × velocity uncertainty) can tend to infinity in the limit of an infinitely massive proton. Hence we can still achieve $$ \Delta p \Delta x \geq \frac{\hbar}{2} $$ with an arbitrarily small velocity and positional uncertainty, if we make the mass arbitrarily large. The reality of course is that the proton will jiggle --- it will jiggle a bit due to its intrinsic quantum mechanical nature, and it will jiggle a bit more due to the attractive force on it from the electron. However, this can be dealt with rigorously just as before, using relative separations and reduced masses.
Ark's Homepage Curriculum Vitae What's New Physics and the Mysterious Event Enhanced Quantum Physics (EEQT) Quantum Future My Kaluza-Klein pages Links to my other online papers dealing with hyperdimensional physics QFG Site Map EEQT, Quantum Jumps and Quantum Fractals Paper "Quantum Jumps, EEQT and the Five Platonic Fractals": PDF or HTML or link to Los Alamos preprint server (but the pdf version there has lower quality images due to size limits on their server) OpenSource Java Files Quantum Fractals Java Applet EEQT Lab Java Applet Image gallery Bibliography of EEQT Note: These two applets crash on certain browsers. Internet Explorer seems to be OK and Mozilla seems to be OK. The applets crash with Opera (for reasons that are not understood) and with older Netscape. Therefore the best thing is to download the .jar files from SourceForge, unpack them, download the Java SDK appropriate to the operating system, install it, and then run the .jar files with "java -jar qf.jar" or "java -jar wave.jar" EEQT or Event-Enhanced Quantum Theory I. EEQT stands for "Event Enhanced Quantum Theory" - the term introduced by Ph. Blanchard and A. Jadczyk to describe the piecewise deterministic algorithm replacing the Schrödinger equation for continuously monitored quantum systems (and we suspect all quantum systems fall under this category). 1. Isn't it so that EEQT is a step backward toward classical mechanics, which we all know is inadequate? EEQT is based on a simple thesis: not all is "quantum", and there are things in this universe that are NOT described by a quantum wave function. Example, going to an extreme: one such case is the wave function itself. Physicists talk about first and second quantization. Sometimes, with considerable embarrassment, a third quantization is considered. But that is usually the end of that. Even the most orthodox quantum physicist controls at some point his "quantize everything" urge - otherwise he would have to "quantize his quantizations" ad infinitum, never being able to communicate his results to his colleagues. The part of our reality that is not and must not be "quantized" deserves a separate name. In EEQT we are using the term "classical." This term, as we use it, must be understood in a special, more-general-than-usually-assumed way. "Classical" is not the same as "mechanical." Neither is it the same as "mechanically deterministic." When we say "classical" - it means "outside of the restricted mathematical formalism of Hilbert spaces, linear operators and linear evolutions." It also means: the value of the "Planck constant" does not govern classical parameters. Instead, in a future theory, the value of the Planck constant will be explained in terms of a "non-quantum" paradigm. 12. The name "Event Enhanced Quantum Theory" is misleading. As we have stated: "EEQT is the minimal extension of orthodox quantum theory that allows for events." It DOES enhance quantum theory by adding new terms to the Liouville equation. When the coupling constant is small, events are rare and EEQT reduces to orthodox quantum theory. Thus it IS an enhancement. (...) II. Most of the essential papers dealing with various aspects of EEQT are available online. III. EEQT allows us to simulate "Nature's Real Working". Of course EEQT is an incomplete theory, yet it tries to simulate real-world events with an underlying quantum substructure. The algorithm of EEQT is non-local, which suggests that Nature itself, at a deeper level, is non-local too. IV.
Normally, students learning quantum mechanics are taught that it is impossible to measure the position and momentum of a quantum particle. They learn how to derive Heisenberg's uncertainty relations, and they are told that these mathematical relations have such-and-such interpretation. Some are told that the interpretation itself is disputable. In EEQT, all the probabilistic interpretation of quantum theory, including Born's interpretation of the wave function, is derived from the dynamics. EEQT allows us to simulate and predict the behavior of a quantum system when several, as one normally calls them, incommensurable observables are being measured. The fact is that in such a situation the dynamics is chaotic, and no joint probability distribution exists. That explains why ordinary quantum mechanics rightly noticed the problems with defining such a distribution. (For visualization purposes, physicists, especially those dealing with quantum chaos, often use Wigner's distribution, which is not positive definite, or Husimi's distribution, which does not reproduce marginal distributions.) Quantum Jumps According to EEQT, quantum jumps are not directly observable. What we see are the accompanying "events". This part is somewhat tricky, and I will try to explain the trickiness here, in a few paragraphs, but without any hope that there will be even one person who will understand what I mean. Well, perhaps mathematicians will, but are they going to read this page? I doubt it. Physicists will certainly think that it is too weird. And they have better things to do than follow someone's weird ideas - as every physicist with guts has weird ideas of his/her own! But I would feel guilty if I did not give it a try. So here it is. Physicists do consider quantum jumps. In particular those who deal with theoretical quantum optics and/or quantum computing and information. But these quantum jumps are not being taken as "real". If for no other reason, then because there are infinitely many jump processes that can be associated with a given Liouville equation, and there is no good reason to choose one rather than another. Thus discontinuous quantum jumps in theoretical quantum optics are considered mainly as convenient numerical methods for solving the continuous Liouville equation. It is not so in EEQT. But EEQT splits the world into a quantum and a classical part, and quantum physicists deny that the classical part exists. They think all is quantum - the same way Ptolemaic astronomers thought that all is perfectly round. Can we propose a clever idea that will show that not all is quantum? Indeed, according to quantum physics the only thing that exists is the quantum wave function. Now, let us ask this: is the wave function itself a classical or a quantum object? That is, we ask: is the location of the wave function in the Hilbert space governed by classical or by quantum laws? Most quantum physicists would pretend they do not understand the question. Some will understand, and will answer: "sure, there is an uncertainty in the state vector, but that is an altogether different story." They will point me to Braginsky or Vaidman or some other, more recent, paper - but they will not answer my question: is the quantum wave function a classical or a quantum object? Is it an object at all? And if it is an object, then what kind of animal is it, and where does it fit? Philosophers perhaps will point me to Eccles and Popper, but this is not an answer either. What is my answer to my own question?
I do not know the answer, but I can speculate. So, here it is: we are talking about models. Models of "Reality". Perhaps nothing but models "exists", but that is not our problem now. If all is about models, then we can think of a model in which the wave function is both classical and quantum. In which the Wave Function "observes" itself - as John Archibald Wheeler has imagined: "The universe viewed as a self-excited circuit. Starting small (thin U at upper right), it grows (loop of U) to observer participancy - which in turn imparts 'tangible reality' (cf. the delayed-choice experiment of Fig. 22.9) to even the earliest days of the universe" "If the views that we are exploring here are correct, one principle, observer-participancy, suffices to build everything. The picture of the participatory universe will flounder, and have to be rejected, if it cannot account for the building of the law; and space-time as part of the law; and out of law substance. It has no other than a higgledy-piggledy way to build law: out of statistics of billions upon billions of observer participancy each of which by itself partakes of utter randomness." (J.A. Wheeler, "Beyond the Black Hole", in "Some Strangeness in the Proportion", Ed. Harry Woolf, Addison-Wesley, London 1980) To observe itself, "It" must split into two "personalities", a quantum one and a classical one. So, here comes the model: consider a pair of wave functions, the function trying to determine its own shape. One element of the pair is considered to be "quantum" - as it determines probabilities and quantum jumps - while the second element of the pair is interpreted as a classical one - its shape is the classical variable. They dance together and they jump together. More details can be found in "Topics in Quantum Dynamics". And here we come to the mathematical description of quantum jumps in EEQT. Of course the simplest situation is when we separate jumps from the continuous evolution. To analyze this particular situation, let us think of the simplest possible "toy model". Physicists like toy models, as they usually provide us with explicit solutions whose properties we can study in order to try to understand more complex, real-world situations, where the problems get so complicated that there is no hope even for an approximate solution. Physicists usually replace real-world problems with other problems, built out of their toy models, which are still simple enough to be solvable, even if only approximately, and yet mirror some essential features of the "true problems." So, what would be the simplest toy model to play with that teaches us something about quantum jumps? The quantum system, to be nontrivial, must live in a Hilbert space of at least two complex dimensions. The classical system must have at least two states. Such a toy model was indeed studied in connection with the quantum Zeno effect, where it was demonstrated that a flip-flop detector strongly coupled (that is, "under intensive observation" - a watched pot never boils...) to a two-state quantum system effectively stops the continuous quantum evolution. This model is not interesting, though, if we want to study pure quantum jumps. Here we need a more complicated model, and that is how the "tetrahedral model" was developed. It was found that it leads to chaotic dynamics and to fractals of a new type: fractals drawn by a quantum brush on a quantum canvas - a complex projective space. And that is how we come to quantum fractals.
Quantum Fractals The details and the bibliography are given in "Quantum Jumps, EEQT and the Five Platonic Fractals." Here let us describe the algorithm and the Java applet. (The applet is a part of an OpenSource project, so additions and enhancements will probably follow its release.) The canvas is the surface of the unit sphere. In coordinates, its points are represented by vectors n = (n₁, n₂, n₃) of unit length, thus n₁² + n₂² + n₃² = 1. There are five Platonic solids: tetrahedron (N=4), octahedron (N=6), cube (N=8), icosahedron (N=12), dodecahedron (N=20), where N is the number of vertices. They have equal faces, bounded by equilateral polygons. It was Euclid who proved that only five such solids can exist in a three-dimensional world. In his Mysterium Cosmographicum (1595), Johannes Kepler attempted to account for the orbits of the six then-known planets by radii of concentric spheres circumscribing or inscribing the solids.
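The precise jump map and jump probabilities are those of the cited paper and the applets above; as a purely schematic stand-in (my construction, explicitly not the EEQT formula), the flavor of vertex-directed random jumps on the sphere can be sketched as an iterated function system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Vertices of a regular tetrahedron inscribed in the unit sphere (N = 4)
V = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

alpha = 0.5                        # toy "jump strength" parameter (my choice)
n = np.array([0.0, 0.0, 1.0])      # starting point on the sphere
pts = np.empty((20000, 3))

for i in range(len(pts)):
    w = 1.0 + alpha * (V @ n)      # state-dependent weights, all positive here
    j = rng.choice(4, p=w / w.sum())
    n = (1 - alpha) * n + alpha * V[j]   # jump part-way toward the chosen vertex
    n /= np.linalg.norm(n)               # project back onto the unit sphere
    pts[i] = n
# scatter-plot pts to see a vertex-directed pattern emerge on the sphere
```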
Green's function From Wikipedia, the free encyclopedia This article is about the classical approach to Green's functions. For a modern discussion, see fundamental solution. In mathematics, a Green's function is the impulse response of an inhomogeneous differential equation defined on a domain, with specified initial conditions or boundary conditions. Via the superposition principle, the convolution of a Green's function with an arbitrary function f(x) on that domain is the solution to the inhomogeneous differential equation for f(x). Under many-body theory, the term is also used in physics, specifically in quantum field theory, aerodynamics, aeroacoustics, electrodynamics and statistical field theory, to refer to various types of correlation functions, even those that do not fit the mathematical definition. In quantum field theory, Green's functions take the roles of propagators. Definition and uses[edit] A Green's function, G(x, s), of a linear differential operator L = L(x) acting on distributions over a subset of the Euclidean space $\mathbb{R}^n$, at a point s, is any solution of $$L\,G(x,s) = \delta(x-s), \qquad (1)$$ where $\delta$ is the Dirac delta function. This property of a Green's function can be exploited to solve differential equations of the form $$L\,u(x) = f(x). \qquad (2)$$ If the kernel of L is non-trivial, then the Green's function is not unique. However, in practice, some combination of symmetry, boundary conditions and/or other externally imposed criteria will give a unique Green's function. Also, Green's functions in general are distributions, not necessarily proper functions. Green's functions are also useful tools in solving wave equations and diffusion equations. In quantum mechanics, the Green's function of the Hamiltonian is a key concept with important links to the concept of density of states. As a side note, the Green's function as used in physics is usually defined with the opposite sign; that is, $$L\,G(x,s) = -\delta(x-s).$$ This definition does not significantly change any of the properties of the Green's function. If the operator is translation invariant, that is, when L has constant coefficients with respect to x, then the Green's function can be taken to be a convolution operator, that is, $$G(x,s) = G(x-s).$$ See also: Spectral theory Loosely speaking, if such a function G can be found for the operator L, then if we multiply the equation (1) for the Green's function by f(s), and then perform an integration in the s variable, we obtain: $$\int L\,G(x,s)\, f(s) \, ds = \int \delta(x-s)\, f(s) \, ds = f(x).$$ The right-hand side is now given by the equation (2) to be equal to L u(x), thus: $$L\,u(x)=\int L\,G(x,s)\, f(s) \, ds.$$ Because the operator L = L(x) is linear and acts on the variable x alone (not on the variable of integration s), we can take the operator L outside of the integration on the right-hand side, obtaining $$L\,u(x)=L\left(\int G(x,s)\, f(s) \,ds\right),$$ which suggests $$u(x)=\int G(x,s)\, f(s)\, ds. \qquad (3)$$ Not every operator L admits a Green's function. A Green's function can also be thought of as a right inverse of L. Aside from the difficulties of finding a Green's function for a particular operator, the integral in equation (3) may be quite difficult to evaluate. However, the method gives a theoretically exact result. This can be thought of as an expansion of f according to a Dirac delta function basis (projecting f over δ(x − s)) and a superposition of the solution on each projection. Such an integral equation is known as a Fredholm integral equation, the study of which constitutes Fredholm theory.
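A quick numerical check of the impulse-response statement above (my example, not from the article): for L = d/dt + γ the causal Green's function is G(t) = θ(t)e^{−γt}, and discrete convolution of G with f indeed solves Lu = f.

```python
import numpy as np

gamma, dt = 1.5, 1e-3
t = np.arange(0, 10, dt)
f = np.sin(t)
G = np.exp(-gamma * t)                  # theta(t) is implicit: arrays start at t = 0

u = np.convolve(f, G)[:t.size] * dt     # u = (G * f)(t), the integral as a Riemann sum

residual = np.gradient(u, dt) + gamma * u - f
print(np.abs(residual[10:-10]).max())   # small: u' + gamma*u = f up to grid error
```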
Green's functions for solving inhomogeneous boundary value problems[edit] The primary use of Green's functions in mathematics is to solve non-homogeneous boundary value problems. In modern theoretical physics, Green's functions are also usually used as propagators in Feynman diagrams (and the phrase Green's function is often used for any correlation function). Let L be the Sturm–Liouville operator, a linear differential operator of the form $$L=\dfrac{d}{dx}\left[p(x) \dfrac{d}{dx}\right]+q(x)$$ and let D be the boundary conditions operator $$Du= \begin{cases} \alpha_1 u'(0)+\beta_1 u(0) \\ \alpha_2 u'(l)+\beta_2 u(l). \end{cases}$$ Let f(x) be a continuous function in [0,l]. We shall also suppose that the problem $$Lu = f, \qquad Du = 0$$ is regular (i.e., only the trivial solution exists for the homogeneous problem). There is one and only one solution u(x) that satisfies $$Lu = f, \qquad Du = 0,$$ and it is given by $$u(x)=\int_0^\ell f(s) G(x,s) \, ds,$$ where G(x,s) is a Green's function satisfying the following conditions: 1. G(x,s) is continuous in x and s. 2. For $x \ne s$, $L\,G(x, s)=0$. 3. For $s \ne 0, \ell$, $D\,G(x, s)=0$. 4. Derivative "jump": $G'(s_{+0}, s)-G'(s_{-0}, s)=1 / p(s)$. 5. Symmetry: $G(x,s) = G(s, x)$. Advanced and retarded Green's functions[edit] Sometimes the Green's function can be split into a sum of two functions, one with the variable positive (+) and the other with the variable negative (−). These are the advanced and retarded Green's functions, and when the equation under study depends on time, one of the parts is causal and the other anti-causal. In these problems usually the causal part is the important one. Finding Green's functions[edit] Eigenvalue expansions[edit] If a differential operator L admits a set of eigenvectors $\Psi_n(x)$ (i.e., a set of functions $\Psi_n$ and scalars $\lambda_n$ such that $L \Psi_n=\lambda_n \Psi_n$) that is complete, then it is possible to construct a Green's function from these eigenvectors and eigenvalues. "Complete" means that the set of functions $\left\{ \Psi_n \right\}$ satisfies the following completeness relation: $$\delta(x-x')=\sum_{n=0}^\infty \Psi_n^\dagger(x) \Psi_n(x').$$ Then the following holds: $$G(x, x')=\sum_{n=0}^\infty \dfrac{\Psi_n^\dagger(x) \Psi_n(x')}{\lambda_n},$$ where $\dagger$ represents complex conjugation. Applying the operator L to each side of this equation results in the completeness relation, which was assumed true.
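A sketch checking the eigenfunction expansion against a known closed form (my example, not from the article): for L = d²/dx² on [0,1] with Dirichlet conditions, Ψ_n(x) = √2 sin(nπx) and λ_n = −(nπ)², while the standard closed-form Green's function is G(x,s) = x(s−1) for x ≤ s.

```python
import numpy as np

def G_series(x, s, nmax=2000):
    # Truncated eigenfunction expansion: sum of psi_n(x) psi_n(s) / lambda_n
    n = np.arange(1, nmax + 1)
    return np.sum(2 * np.sin(n*np.pi*x) * np.sin(n*np.pi*s) / (-(n*np.pi)**2))

def G_closed(x, s):
    # Standard closed form for u'' = f with u(0) = u(1) = 0
    return x * (s - 1) if x <= s else s * (x - 1)

x, s = 0.3, 0.7
print(G_series(x, s), G_closed(x, s))   # both ~ -0.09
```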
Table of Green's functions[edit] The following table gives an overview of Green's functions of frequently appearing differential operators, where $\theta(t)$ is the Heaviside step function, $r=\sqrt{x^2+y^2+z^2}$ and $\rho=\sqrt{x^2+y^2}$.[1]

| Differential operator L | Green's function G | Example of application |
| --- | --- | --- |
| $\partial_t + \gamma$ | $\theta(t)\mathrm e^{-\gamma t}$ | |
| $\left(\partial_t + \gamma \right)^2$ | $\theta(t)\,t\,\mathrm e^{-\gamma t}$ | |
| $\partial_t^2 + 2\gamma\partial_t + \omega_0^2$ | $\theta(t)\mathrm e^{-\gamma t}\frac{1}{\omega}\sin(\omega t)$ with $\omega=\sqrt{\omega_0^2-\gamma^2}$ | one-dimensional damped harmonic oscillator |
| $\Delta_\text{2D}=\partial_x^2 + \partial_y^2$ | $\frac{1}{2 \pi}\ln \rho$ | |
| $\nabla^2=\partial_x^2 + \partial_y^2 + \partial_z^2= \Delta$ | $\frac{-1}{4 \pi r}$ | Poisson equation |
| Helmholtz operator $\Delta + k^2$ | $\frac{-\mathrm e^{-ikr}}{4 \pi r}$ | stationary 3D Schrödinger equation for a free particle |
| D'Alembert operator $\square = \frac{1}{c^2}\partial_t^2-\Delta$ | $\frac{\delta(t-\frac{r}{c})}{4 \pi r}$ | wave equation |
| $\partial_t - k\Delta$ | $\theta(t)\left(\frac{1}{4\pi kt}\right)^{3/2}\mathrm e^{-r^2/4kt}$ | diffusion |

Green's functions for the Laplacian[edit] Green's functions for the Laplacian may be obtained from Green's theorem, which in turn follows from the divergence theorem (Gauss' law): $$\int_V \nabla \cdot \vec A\ dV=\int_S \vec A \cdot d\hat\sigma.$$ Let $\vec A=\phi\nabla\psi-\psi\nabla\phi$ and substitute into Gauss' law. Compute $\nabla\cdot\vec A$ and apply the product rule for the $\nabla$ operator: $$\nabla\cdot\vec A =\nabla\cdot(\phi\nabla\psi - \psi\nabla\phi) =(\nabla\phi)\cdot(\nabla\psi) + \phi\nabla^2\psi - (\nabla\phi)\cdot(\nabla\psi) - \psi\nabla^2\phi =\phi\nabla^2\psi - \psi\nabla^2\phi.$$ Plugging this into the divergence theorem produces Green's theorem: $$\int_V (\phi\nabla^2\psi-\psi\nabla^2\phi)\, dV=\int_S (\phi\nabla\psi-\psi\nabla\phi)\cdot d\hat\sigma.$$ Suppose that the linear differential operator L is the Laplacian, $\nabla^2$, and that there is a Green's function G for the Laplacian. The defining property of the Green's function still holds: $$L\,G(x,x')=\nabla^2 G(x,x')=\delta(x-x').$$ Let $\psi=G$ in Green's theorem. Then: $$\int_V \left[ \phi(x') \delta(x-x')-G(x,x') \nabla'^2\phi(x')\right]\ d^3x' = \int_S \left[\phi(x')\nabla' G(x,x')-G(x,x')\nabla'\phi(x')\right] \cdot d\hat\sigma'.$$ Using this expression, it is possible to solve Laplace's equation $\nabla^2\phi(x)=0$ or Poisson's equation $\nabla^2\phi(x)=-\rho(x)$, subject to either Neumann or Dirichlet boundary conditions. In other words, we can solve for $\phi(x)$ everywhere inside a volume where either (1) the value of $\phi(x)$ is specified on the bounding surface of the volume (Dirichlet boundary conditions), or (2) the normal derivative of $\phi(x)$ is specified on the bounding surface (Neumann boundary conditions). Suppose the problem is to solve for $\phi(x)$ inside the region. Then the integral $$\int\limits_V {\phi(x')\delta(x-x')\ d^3x'}$$ reduces to simply $\phi(x)$ due to the defining property of the Dirac delta function, and we have: $$\phi(x)=\int_V G(x,x') \rho(x')\ d^3x'+\int_S \left[\phi(x')\nabla' G(x,x')-G(x,x')\nabla'\phi(x')\right] \cdot d\hat\sigma'.$$ This form expresses the well-known property of harmonic functions that if the value or normal derivative is known on a bounding surface, then the value of the function inside the volume is known everywhere. In electrostatics, $\phi(x)$ is interpreted as the electric potential, $\rho(x)$ as electric charge density, and the normal derivative $\nabla\phi(x')\cdot d\hat\sigma'$ as the normal component of the electric field.
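A numerical illustration of the electrostatic formula derived just below (my example, in the same units as that formula): a Monte Carlo evaluation of φ(x) = ∫ ρ(x′)/|x − x′| d³x′ for a uniformly charged unit ball of total charge 1; for a field point outside the ball the exact answer is 1/|x|.

```python
import numpy as np

rng = np.random.default_rng(1)

# Uniformly charged unit ball, total charge Q = 1
M = 200000
p = rng.uniform(-1, 1, (3 * M, 3))
p = p[(p**2).sum(axis=1) <= 1][:M]       # uniform sample points inside the ball

x = np.array([3.0, 0.0, 0.0])            # field point outside the ball
vol = 4 * np.pi / 3
rho = 1.0 / vol                          # density for total charge 1

phi = vol * rho * (1.0 / np.linalg.norm(x - p, axis=1)).mean()
print(phi)    # ~0.3333 = Q/|x|: as if all the charge sat at the center
```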
For Neumann boundary conditions, note that the normal derivative of G(x,x') cannot simply be required to vanish on the surface, because

\int_S \nabla' G(x,x') \cdot d\hat\sigma' = \int_V \nabla'^2 G(x,x')\, d^3x' = \int_V \delta(x-x')\, d^3x' = 1,

so it must integrate to 1 over the surface (see Jackson, Classical Electrodynamics, p. 39, for this and the following argument). The simplest form the normal derivative can take is that of a constant, namely 1/S, where S is the surface area of the surface. The surface term in the solution then becomes

\int_S \phi(x')\nabla' G(x,x')\cdot d\hat\sigma' = \langle\phi\rangle_S,

the average value of the potential on the surface. Supposing that the bounding surface goes out to infinity, and plugging in the above expression for the Green's function, this gives the electric potential in terms of the electric charge density as

\phi(x)=\int_V \dfrac{\rho(x')}{4\pi|x-x'|} \, d^3x'

(in Gaussian units, where the Poisson equation reads \nabla^2\phi=-4\pi\rho, this is the familiar \phi(x)=\int_V \rho(x')/|x-x'| \, d^3x').

Example. Find the Green's function for the following problem:

\begin{align} Lu & = u'' + k^2 u = f(x)\\ u(0) & = 0, \quad u\left(\tfrac{\pi}{2k}\right) = 0. \end{align}

First step: The Green's function satisfies

g''(x,s) + k^2 g(x,s) = \delta(x-s).

If x\ne s, the delta function gives zero, and the general solution is

g(x,s)=c_1 \cos kx+c_2 \sin kx.

For x<s, the boundary condition at x=0 implies

g(0,s)=c_1 \cdot 1+c_2 \cdot 0=0, \quad c_1 = 0,

while the boundary condition at x=\tfrac{\pi}{2k} is not applied on this interval, since s \ne \tfrac{\pi}{2k} keeps that endpoint outside it. For x>s, writing the general solution as g(x,s)=c_3\cos kx+c_4\sin kx, the boundary condition at x=\tfrac{\pi}{2k} implies

g\left(\tfrac{\pi}{2k},s\right) = c_3 \cdot 0+c_4 \cdot 1=0, \quad c_4 = 0.

The condition g(0,s)=0 is skipped for similar reasons. To summarize the results thus far:

g(x,s)= \begin{cases} c_2 \sin kx, & \text{for } x<s\\ c_3 \cos kx, & \text{for } s<x. \end{cases}

Second step: The next task is to determine c_2 and c_3. Ensuring continuity in the Green's function at x=s implies

c_2 \sin ks=c_3 \cos ks.

One can ensure the proper discontinuity in the first derivative by integrating the defining differential equation from x=s-\epsilon to x=s+\epsilon and taking the limit as \epsilon goes to zero:

c_3 \cdot \left(-k \sin ks\right)-c_2 \cdot \left( k \cos ks\right)=1.

The two (dis)continuity equations can be solved for c_2 and c_3 to obtain

c_2 = -\frac{\cos ks}{k}, \quad c_3 = -\frac{\sin ks}{k},

so the Green's function for this problem is:

g(x,s)=\begin{cases} -\frac{\cos ks}{k} \sin kx, & x<s\\ -\frac{\sin ks}{k} \cos kx, & s<x. \end{cases}

(A numerical check of this result appears at the end of the section.)

Further examples

One further example: the following four-term image combination vanishes on both of the lines x=0 and y=0, and is therefore the Dirichlet Green's function for the Laplacian on the quarter-plane x>0, y>0:

\begin{align} G(x, y, x_0, y_0) =\dfrac{1}{2\pi} &\left[\ln\sqrt{(x-x_0)^2+(y-y_0)^2}-\ln\sqrt{(x+x_0)^2+(y-y_0)^2} \right. \\ &\left. \; - \ln\sqrt{(x-x_0)^2+(y+y_0)^2}+ \ln\sqrt{(x+x_0)^2+(y+y_0)^2}\right]. \end{align}

References

• S. S. Bayin (2006), Mathematical Methods in Science and Engineering, Wiley, chapters 18 and 19.
• L. Eyges (1972), The Classical Electromagnetic Field, Dover Publications, New York. ISBN 0-486-63947-9. (Chapter 5 contains a very readable account of using Green's functions to solve boundary value problems in electrostatics.)
• G. B. Folland, Fourier Analysis and Its Applications, Wadsworth and Brooks/Cole Mathematics Series.
• K. D. Cole, J. V. Beck, A. Haji-Sheikh, and B. Litkouhi (2011), "Methods for obtaining Green's functions", in Heat Conduction Using Green's Functions, Taylor and Francis, pp. 101–148. ISBN 978-1-4398-1354-6.
• G. Green (1828), An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, Nottingham, England: T. Wheelhouse, pp. 10–12.
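Here is the numerical check promised above for the worked example (an addition to the text; the grid size and test points are arbitrary): for k = 1 and the trial inhomogeneity f(x) = 1 on [0, π/2], the Green's-function solution u(x) = ∫ f(s) G(x,s) ds must agree with the directly solved boundary value problem u'' + u = 1, u(0) = u(π/2) = 0, whose solution is u(x) = 1 − cos x − sin x.

    # Numerical cross-check of the Green's function for u'' + k^2 u = f.
    import numpy as np

    k, L = 1.0, np.pi / 2.0

    def G(x, s):
        # Piecewise form derived above
        if x < s:
            return -np.cos(k * s) * np.sin(k * x) / k
        return -np.sin(k * s) * np.cos(k * x) / k

    s_grid = np.linspace(0.0, L, 20001)
    for x in (0.4, 1.0, 1.3):
        u_green = np.trapz([G(x, s) for s in s_grid], s_grid)  # f(s) = 1
        u_exact = 1.0 - np.cos(x) - np.sin(x)
        print(x, u_green, u_exact)   # the two columns agree to high accuracy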
DSSP: Deep Space Scouting Party

"More you know, more the universe shrinks"

DSSP topics deal with gravitation, free energy, light, and atomic tractability.

2012 Topics for the month of:
December: Neutrino is not .. (Concluding with Part 3)
November: Neutrino is .. (Cont'd with Part 2)
October: Neutrino is .. (Part 1)
September: Unit distance -- a new standard for geometric stars
August: Odd symmetry is masculine
July: Even symmetry is feminine
June: Science and Politics are not about the same things
May: Motion is easy. Is it, now
April: The Eye of God. Just another April entry
March: Computer computes the force but it doesn't move?
February: (Self)Organization is out there
January: An electron at the two slits

2011 Topics for the month of:
December: When a university does not have it
November: And then it disappears
October: Clockwise and CCW stars
September: Octahedron, a Platonic solid, maybe not solid
August: Definition of a star -- the NC Star: The Non-concentric Star aka The Hyperstar
July: Definition of a star -- the CC Star: The Concentric Circles Star
June: Definition of a star: Concentric Star -- the fundamentals
May: Can you make a 24 point star and 16 point star from but one construction?
April: Moving energy between objects and dimensions
March: Continuing with energy classification (from Feb)
February: Sorting out the energy classes the Pythagorean way
January: Taking a five or ten pointed hyperstar and making art out of it

2010 Topics for the month of:
December: When counting points on a star we say the star is convex or concave. But the hyperstar is neither.
November: Direct a laser beam at something, but ..
October: Modulo math is also about orbital harmony
September: There is more to waves than up and down and sideways
August: If God is all-powerful then ..
July: Refreshing your knowledge on inertia. Move a body and include your mind
June: Two waves can add up to zero and that bothers a lot of people
May: Can you talk to an atom?
April: Sending an eye reserved for gods?
March: The Virtual Domain
February: Create a new pentacle -- it's gas
January: You want to know how to create

2009 Topics for the month of:
December: Frequencies do it alone and together
November: The electron spreads in up to 3D with and without energy
October: The electron spreads and shrinks over and over
September: The electron spreads and shrinks
August: Photon branches and half of it rotates
July: Time of the future
June: Time, memory, and the truth
May: Time travel is and is not
April: Sphinx, the physics symbol of ..
March: Tarot and geometry
February: Mirror symmetry is the even symmetry
January: Left hand is right in the mirror

2008 Topics for the month of:
December: Light is an even function
November: Energy is an even function
October: Irrational number is not a real number and that's good
September: Brown's gas future up to you
August: Brown's gas is green in more ways than one
July: Brown's gas isn't brown
June: Free running magnetic motors. And the energy is coming from ..
May: Synapse, the opposite of lapse?
April: Matter attracts. Yes, analyze that
March: Did Archimedes beat the Newton-Leibniz duo in the calculus fight? Was Kepler on the sidelines?
February: When and how does the photon's energy become real?
January: The diff between the photon's undulations and its frequency -- one changes but the other doesn't

2007 Topics for the month of:
December: Zero-Dimensional point is the fourth dimension
November: Cellular automaton is dumb
October: What if geometric computing treats the irrational number just as any real number, the infinite mantissa and all?
September: If a circle is a transcendental number, why would anybody make a star in a circle?
August: Science guys do have myths, for there are normal experiments they just will not touch
July: Is a zero dimensional point infinitely small?
June: Point is a point and line is a line. So what's the difference. Oh, and how much
May: Is a square but a number multiplied by itself? If so, why is energy of a moving body proportional to its velocity squared?
April: In the shade of a pyramid the geometric mean is cool
March: When you divide an apple you make two halves. When a cell divides, it makes two cells. Now what
February: No way no how to cut a photon
January: Global warming is a problem only if we let the scientists do something about it. But there may be a real solution -- send them to the moon

2006 Topics for the month of:
December: All this talk about small and big particles is not about the size. It's about momentum and wave-momentum duality
November: Bang, bang goes matter and nothing is left for the black hole
October: Newton defined inertia to establish the dynamic property of mass. But there is so much more to inertia
September: If photon has no momentum, what is absorption about?
August: Photon is bouncing between parallel mirrors. Will mirrors move? Forever?
July: You know that energy given to an object is conserved; but how?
June: New Australian Star. Can you do vibration through geometry? Yes indeed.
May: So you chopped off the tail of the irrational number. If that's all there is to it, why does the irrational number have the infinite mantissa to begin with?
April: If a photon disappears at absorption, what's left of it? Can you make a new one? Can you make a photon systemic?
March: Helium by symmetry. Forget about the silly guess of exclusion. Inclusion builds atoms
February: Fibonacci sequence has golden property -- but so do others
January: Particle-wave duality is about transformation -- think energy. Is mass getting messy? Okay, it will vanish

2005 Topics for the month of:
December: King's Chamber and Balmer's formula make sense together
November: Michelson-Morley: Take results of the classic experiment any day
October: Yet another dead end project from NASA -- and it all came tumbling down
September: Time is confusing if you don't know what leads and what follows
August: Red shift or blue shift can be applied to measure absolute speed
July: Relativistic presumption is a dud, then and now
June: Intractable math is math but it does not reflect nature
May: Point cannot be parted -- and the electron knows
April: Events separated by distance can be proven to be simultaneous -- that is, absolute
March: If it is irrational it is constructible but it cannot be exact
February: Recombine photon by splitting it twice. This is not your Newton's prism but he likes it anyway
January: Moon number two

2004 Topics for the month of:
December: Planets play notes
November: How's your photon going
October: Gateways between real and virtual are zero-dimensional
September: Pythagorean Zero
August: Golden Ratio is Divine but it needs to be a triangle
July: Time is always a derivative, and..
June: Mass has inertia but light has neither. New definition of inertia
May: Real numbers' finite precision
April: Do a square root rosette
March: Square the circle by looking at the pyramid
February: Glue that makes continuum a continuum
January: Microgravity that never was, elevator that never will be

2003 Topics for the month of:
December: The composition of incomposite numbers
November: Construct non-atomic Absolute clock: Newton got absolute space and time just right
October: Chi, from China with life
September: What's wrong now -- algebra?
August: Atomic versus free electron. Compton effect is a defect
July: Relativity postulate is neither
June: Spectacles before they were glasses
May: Big deal about irrational numbers
April: Get prize with photons
March: Reading old records
February: Light mill moves and rotates
January: Electron on the move

2002 Topics for the month of:
December: Draw your own star in the heavens
November: From black hole to conspiracy
October: Electron absolute reference
September: Chaos workout
August: Photon sparks
July: Electrons for Newton
June: Saturn is gas
May: Earth isn't missing
April: Pluto's cool
March: Antiatom fantasy
February: Hydrogen pop
January: Royal split

Archived monthly 2001 Topics
Archived monthly 2000 Topics
Archived monthly 1999 Topics

Here is what our scouting parties report:

• Every star is a sun-planets system (a solar system) or a sun-sun (binary sun) system
• A particular sun's mass can significantly decrease or increase in less than a decade
• A particular sun's angular momentum, which for our solar system is about 2% of the total, can increase significantly as well
• Star color or size is not linked to a star's age
• If a galaxy's axis of rotation is nearly identical to another galaxy's axis of rotation (a two-galaxy subsystem), then these two galaxies will spin in opposition -- one cw and the other ccw -- and, in addition, both galaxies will rotate in a plane perpendicular to the axis. Both galaxies will cup slightly, forming two caps of a sphere
• A planet is created when it spins off the sun after a 2D solar angular momentum buildup
• Planetary (and multiple moon) separation orbits are in ratios of notes of the musical octave. (Real numbers have finite mantissa and notes of the octave have the smallest/shortest mantissa.)
• The ability to interchange linear and angular momentum within each solar system and among neighboring solar systems accounts for the stiffness of a galaxy -- that is:
 ¤ Flat galaxies are rigid in x-y and pliable in z
 ¤ Spherical galaxies are rigid in x-y-z and their solar systems are symmetrically periodic in r as a function of theta
• The expansion of the universe is a direct reflection of its increase in organization

DSSP Topics for December 2012

Neutrino Part 3 (last)

Last month we said two bits about neutrinos. Inherently, neutrinos are no-big-deal photons, but because they also come about during an atomic explosion, they became a big deal. Scientists are so bored and so boring, they cannot leave neutrinos where they found them.
So they rush to tell us neutrinos have mass. The way they "prove it" is by bringing up the fact that neutrinos slow down below lightspeed when passing through the earth and then, "they cannot reach lightspeed on account of mass." Duhh. Light slows down routinely when passing through matter the likes of glass or water -- neutrinos are photons and do not slow down any differently than any other photon. Of course, the scientist then claims that the whole universe is made from neutrinos. Well, they cannot waste my time because their credibility is gone, but I sure wish they would not waste the money. Better yet, I think the people who pay them should know when scientists are talking nonsense. Happy New Year!

DSSP Topics for November 2012

Neutrino Part 2

Last month we said a bit about neutrinos. This time we should add that scientists must call everything a 'particle,' but we don't have to. Neutrinos are photons. Yes, they are not visible to the human eye but that's okay. They are high energy photons because they come about during the core's (really the neutron's) decay. We know that X-rays and Gamma rays are not only high energy but dangerous as well. This is because X-rays and Gamma rays get absorbed by our bodies' water molecules (and create free radicals). Neutrinos, however, are not absorbed by our bodies and go right on thru. That's nice. The probability of a neutrino's absorption is very low. After so many encounters with atoms the neutrinos finally get absorbed and make enough of a change in the atom's core that we detect the change. Neutrinos are most likely spinning photons in the shape of a corkscrew. Certainly the neutrinos have a different geometric shape from the conventional (transverse-oscillating wave) photons, and they do not readily get into a context where they would be absorbed.

So what's the big deal about neutrinos? Because neutrinos behave differently from ordinary photons as a result of an atomic explosion, neutrinos are not just a curiosity. But if the scientists drop their 'particle' attachment we would learn something new about light, too.

DSSP Topics for October 2012

Neutrino Part 1

Neutrino is
• a tiny particle
• a weakly interacting this or that
• a mountain that is really a molehill
• anything but a particle

During a neutron decay two particles come apart but their momentum does not add up. To make the momentum -- and therefore energy -- come out even, a missing particle is postulated. So far so good. At this point the scientists make a big deal out of it because there is a large number of these things -- now called neutrinos -- all over the place. It does not take long to claim the whole universe owes its existence to this thing, because that is how the scientists get the bombastic funding to look for these things. There is even a guy who can fish a single Argon atom out of a swimming pool.

And then there were three

Three different neutrinos. Duly recorded. All coming from different kinds of electrons.

So now

Every time the funding is about to run out, big shifts happen. Maybe they can detect Ruski A-bombs being tested. Maybe the neutrinos suddenly have mass and that is something that should be investigated. Maybe they change from one to the next and kinda vibrate, you know, isn't this exciting?

What can you say

All neutrinos behave just like photons of light. They are massless, propagate at c, and are not affected by electromagnetic fields. Neutrinos do not have charge, just like photons.
Don't worry that neutrinos go right on through mass -- many wavelengths of light can do that too. Another big one is that neutrinos at times interact and change while at another time they interact and reduce -- just like the photons we see. Of course, neutrinos are not affected by gravitation, and that is also consistent with light once you know how to purge Albert from your system. The three neutrinos are just like many different wavelengths of light. Even if there is but a limited number of them, there are also a limited number of wavelengths coming out of hydrogen, too.

Home in on neutrinos

Yes, think geometry and how it rules in the sub-subatomic world. After that you'll also figure out that the absence of charge means that neutrinos are massless -- think symmetry. There is a second part to this topic next month as well.

DSSP Topics for September 2012

Unit distance is the new star building standard

When you build something you start with the smallest unit block. You use the smallest bricks, and cutting them into even smaller bricks is silly because it would be easier to get such smaller bricks in the first place. When building stars geometrically we use a compass and straightedge, and then we want to know the distance between the star's points in terms of the radius (or diameter) of a compass, so that we can tell how big the dimensions of a star would be for a given circle radius. This works dandy if there is only one circle needed to make the star. But there are stars that use two circles, sometimes concentric, sometimes not. Math guys love to concentrate on the one-circle polygon, but go just a little distance and include Venus in the computations and we need two circles for the two orbits. It makes no sense to implore the mainstream math guys to help out with a new kind of modulo math, because armchair science is their cup of tea. Going micro, interlocked multiple orbitals also make up stars and the radii become quantized subject to the circles' separation. Mainstream science cannot even dream about this one while some impresarios the likes of Levi have nothing but bad dreams of the devil to talk about.

So now

The idea here is to establish a unit distance as the smallest distance needed for a particular star construction. It would then make no difference how many circles of how many radii would be considered, because we would be building everything based on the unit distance. This then establishes an important mental precedent that there are many ways to build the stars that are identical to what nature puts out. On this site there is a hyperstar, a five pointed star built from four rings -- two pairs of concentric circles, both pairs intersecting. A point-to-point distance can be computed mathematically in two different ways: via the unit distance, or via the mainstream method. The mainstream method requires the creation of a new circle just to get the reference and, of course, the reference circle has nothing to do with the orbitals the hyperstar represents. The distances between points are then computed, but the hyperstar is not constructed through such a reference circle. This throws the math out of sync with the physical reality the hyperstar is about. If, however, the unit distance is used, the point-to-point distance on the hyperstar is exceptionally easy to compute. The distance between points on the five pointed hyperstar is twice the unit distance. This makes the unit distance reference a worthwhile standard to adopt.
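To make the contrast concrete, here is a small Python sketch (our addition; the names are ours, and the hyperstar spacing of "twice the unit distance" is taken from the claim above as an input, not derived). The mainstream route prices point-to-point distances off a reference circle, which yields an irrational multiple of the radius even for a plain five pointed figure; the unit-distance route fixes u first and reads lengths off as simple multiples.

    import math

    # Mainstream bookkeeping: adjacent points of a regular five-pointed figure
    # on a reference circle of radius R sit 2 R sin(pi/5) apart.
    R = 1.0
    adjacent_chord = 2.0 * R * math.sin(math.pi / 5.0)
    print(adjacent_chord)        # ~1.17557, an irrational multiple of R

    # Unit-distance bookkeeping: pick u, then (per the entry's claim) the
    # hyperstar's point-to-point distance is simply 2 * u -- no reference circle.
    u = 1.0
    hyperstar_spacing = 2.0 * u
    print(hyperstar_spacing)     # 2.0, an exact multiple of the unit distance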
DSSP Topics for August '12

Odd symmetry is:
• Going around in circles
• Geometric construct for dimension zero
• Masculine, but so what

The odd symmetry is symmetry about a point. A point is a 0D construct. Odd symmetry is also called the point symmetry. The point about which an image in the Cartesian system is rotated (by Pi) is at the intersect of the horizontal and vertical axes and is called the Origin. Every piece of some image is rotated about a point (about the Origin) one-half circle, and then the original as well as the rotated image have odd, or point, symmetry. The circle quadrant pairs 1&3 as well as 2&4 have odd symmetry. A nice part is that pairs 1&2 as well as 3&4 have even symmetry, and so the circle is not exclusively even or odd. Lao Tzu (the author of the Te Tao Ching) goes as far as to call a circle the 'undifferentiated whole.' This also works but, as is the case with Tao, it takes a lot of right brain work. [Yes, scientists do not have the right brain.]

The odd symmetry engages a 0D point. So, all orbits, orbitals, and spins use the 0D point. Everything in the universe spins and orbits, and it is then easy to arrive at the importance of the dimension zero. Yet some mainstream scientists have big holes in their heads and they don't even count the dimension zero. This could be because dimension zero is the most complex dimension of all. In the East, for example, a point and infinity are the same. It takes some work to figure this out [try Tao?] but a good start is that a point is a virtual point. It is not real, and I like to use a tiny circle to denote a point -- the geometric point is inside the small circle. Oh, if you ask a scientist they will count 1D, 2D, 3D, and then some of them toss in time as the 4D. Although 0D is about spin and orbits, and although the spinning energy carries 99% of all energy in the universe, the scientist will not count 0D -- these scientists are complete idiots and beyond help.

Entities subscribing to odd symmetries are exclusive: they do not occupy the same space, and so they displace each other. There are many entities that do that -- an atom or your car or you, for example -- but there is more to it, because the odd and even symmetries must interplay to make something that is organized. An electron is a good example of that. The exclusivity is what gives the odd symmetry its masculine character.

DSSP Topics for July '12

Even symmetry is:
• Feminine, but so what
• About energy

The even symmetry is symmetry about an axis, the axis usually being shown vertical, and the even symmetry is also called the mirror symmetry or twofold symmetry. Every point of some image is rotated one-half circle (180 deg) about the axis, and the resulting as well as the original image are now evenly symmetrical. You'll note the heart is evenly symmetrical. The number 3 is evenly symmetrical, but the axis of symmetry is now horizontal. A circle is also even, but read the next month's topic about the odd symmetry, because a circle is very special. One unique aspect of even symmetry is that the left/right handedness reverses during rotation, which is the same thing as the mirror's reflection.

Esoteric Background

The wand is the defining tool of magicians. Yes, the wand defines the axis of symmetry every magician needs. Ditto for swords, which are about power, and here we also encounter the feminine as the Lady of the Lake. In the East the axis is not shown, for this axis is virtual, and in the East the invisibility of virtual aspects is quite acceptable.
So, in Eastern practical mysticism (read magic) we encounter the evenly symmetrical images directly as, for example, the two humps resembling the number 3 in Reiki healing energy or in the Aum symbol. This part needs some thinking. Take a photon's shape and you'll find it symmetrical. Evenly symmetrical. Reflect it (bounce it), refract it, split it, and it remains evenly symmetrical. [How this enters the creation of matter you'll find in the forthcoming late-2012 book, title not yet selected. For example, even symmetry creates inclusiveness.] This means two things: 1) Even symmetry cannot be broken (for a photon at least), and this makes the even symmetry a very principal component of physics; and 2) It is very likely (a fact, in fact) that all energies have even symmetry.

Feminine tie in

The axis of even symmetry is a virtual line. It is an empty slit. Just don't get huffy about this -- you'll lose if you do. Oh, energy can become organized and then the energy is also about knowledge. And so you want to return the sword to the lady if you borrowed it in the first place. There are other entities that have the virtual axis, but you are on your own now.

DSSP Topics for June '12

Science as the component of politics

Al Gore might be the first person to make some inroads, under the American system, toward introducing a political imperative into the scientific pursuit of global warming. The fact he got as far as he did is remarkable; yet it is not remarkable that the use of science for political ends is -- or was -- the standard fare of the Soviet style leadership. One would not think Lenin would write science papers, but he did, and he was very serious about it. One of the discussions he joined was 'can matter disappear,' and this was at the time quantum mechanics was entering the mainstream at the beginning of the 20th Century. Lenin's argument was that matter can indeed disappear, and he took on those who thought otherwise.

Politics can get personal

The principal dimension of politics is power. The more absolute the power, the more personal it gets. The use of science papers by Lenin was nothing other than the gathering of allies of the same persuasion to get to the top. But of course, voting by the population has nothing to do with it. The gory part is that if matter were to disappear as a natural outcome, then the bodies of your enemies can disappear as well, and without going against nature. A person seeking power writes, reads, and understands things in a very different context. Perhaps you've heard the riddle: "If a tree falls in the forest and there is no one there to hear it, will it make a sound?"

The bottom line is that the use of science by politicians is suspect. In countries without direct elections the use of science by politicians is a directive.

How is it with that disappearing matter? Matter can indeed disappear, but it is not gone. Matter can transition from being real to being virtual. The art is to transition between the real and the virtual domain -- and back -- at will. While virtual, matter can move at superluminal speeds. Enjoy the summer and think free energy.

DSSP Topics for May '12

Movement is easy, except ..

.. We walk effortlessly. We just push off and off we go. That's how the world is to a kid. Throw a ball and the ball will hit and move something else. By the time you grow up you find out it is not so simple. You want to step off a canoe onto a pier -- and end up wet. The canoe just slides off backward, as its friction is low.
Yes, there is the conservation of momentum going on here. Not just for a canoe but for a rifle bullet that must share its moving energy with the rifle. The same also holds for rotational motion. A helicopter needs a second propeller to keep the body from spinning along with the main propeller. So now we all know that in any creation of motion we really have two opposing motions: in opposite directions for linear movement and in opposite rotations for angular movement.

What's the big deal?

The thing is that the conservation of momentum holds for photons, too. A photon that bounces cannot impart motion to anything it strikes if the photon's speed after the bounce is the same as the speed before the bounce, which is indeed the case. There would be excess energy if the thing that is hit by a photon were to move. Waves are different, but the conservation of energy holds for waves, too. In the real world the ball that rebounds is moving slower, but such is not the case for a photon. The big deal is that even if the photon is absorbed and its energy utilized, such energy must move (or spin) two things in opposite directions (or in opposite rotations). Somehow, this guy Einstein got it all wrong with the photoelectric effect, because you know that there must be two things -- and not just one electron -- moving in opposite directions, and that is how the momentum is and must be conserved. Yes, kiddo, this is a test. (A small numeric sketch of this momentum bookkeeping follows the April entry below.)

DSSP Topics for April '12

Horus, after beating up Seth good, went to the court of the gods to be declared the ruler. One of the oft-repeated arguments was that Horus' eye was undamaged. Are the hereditary rights based on geometry and not on, say, blood?

The ancient Egyptian story has it that Seth killed Osiris and took his kingdom. Isis gave birth to Horus after finding the body of Osiris, Horus' father. Yes, Isis had to do some magic to bring Osiris back to life and conceive Horus. After Horus grew up he engaged Seth in a fight and both scored significant victories onto each other. The case then ended up in the court of gods, which decided Horus was to get his father Osiris' kingdom back. The case is closed, except that during the trial the argument in Horus' favor was that Horus' eye was undamaged. Another argument was that if Seth gets the kingdom, "the heavens will touch the earth" (with destruction). This is strange considering that "here on earth" we argue about the blood lines when it comes to claiming the properties of the deceased. (I know, owning people is about slavery and presently that's not specifically enforced -- especially after the fall of the Berlin Wall.)

The eye, however, plays a pivotal role in the exercise of power, both constructive and destructive. Although not explicitly stated, the undamaged eye is understood as the fundamental requirement. Yeah, the gods know this stuff but we don't. The eye is a geometric construct analogous to the focus of a lens. It is not focusing the light (e-m radiation) but focuses something else and additionally is able to contact, interact with, and collect the infinities. This is not terribly strange if we accept that intelligence is 'out there' and intelligence is organized energy. There are descriptions of gods such as Thoth using the eye to do what they need to do. More recently, eyes appear to either create or examine the crop circles. But, hey, this is also the month of April and this topic is posted on, you guessed it, April 1.
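Here, then, is the promised numeric sketch of the momentum bookkeeping from the May entry (our addition; the masses and speeds are made-up illustration values, and the physics is the standard conservation of momentum):

    # Total momentum starts at zero and must stay at zero, so the other body
    # always moves the opposite way.
    m_bullet, v_bullet = 0.010, 800.0          # kg, m/s
    m_rifle = 4.0
    v_rifle = -m_bullet * v_bullet / m_rifle   # rifle recoil
    print(v_rifle)                             # -2.0 m/s

    m_person, v_person = 70.0, 1.0             # stepping toward the pier
    m_canoe = 30.0
    v_canoe = -m_person * v_person / m_canoe   # canoe slides backward
    print(v_canoe)                             # about -2.33 m/s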
DSSP Topics for March '12

The computer computes the forces but, even so, the computer cannot move

What's the computer good for? So the wonderful computer cooks the numbers really fast, but so what? The computer can estimate a trajectory of something moving, but then it needs to burn fuel to get there. This is something we do not question very deeply. It is "obvious" the computer cannot move by itself by executing an instruction and without actually controlling an engine. Why not? That's how we are programmed. The computer computes, the rocket moves. So there.

Are there structures that can actually move as a result of internal computations? Yes, there are. The question is, really: 'Can you move without propulsion, too?' Some free energy devices move as if by themselves, and it is then conceivable to compute something and, as you do, you begin to move as well. Except that the computer is no longer electronic but geometrical. Yes, just as you can compute SQRT(2) on a machine, you can compute SQRT(2) on a geometric structure, and the result is a particular and actual distance equal to SQRT(2). If there are geometric structures that convert a wavelength to a frequency, you just actually converted a wavelength to energy. So there.

DSSP Topics for February '12

The real methods are those of a computer. The virtual methods are those of waves. Each brings something to the table, but then -- what to do with it?

The computer is great because we can specify the behavior of a system and then let the machine rip. The waves come up as probability waves. If the system is a car, there are all kinds of options, preferences and performances one needs to consider before making the car. After the car is made there arise new probability waves, because the car is getting old and some customers want new things -- from a coffee holder to sat navigation and WiFi.

Real Methods

Right away we see that the car does not get better on its own. The real method uses the 'IF-THEN' logical construct to define the system. (IF we offer a 6-cyl engine THEN our sales go up by 20% and our cost by 1%.) In fact, our present computer works the way it does by implementing the IF-THEN construct, and a human has to fill it in.

Virtual Methods

These go by our own feel, because the computer cannot handle "maybe." If you bring a computer to a brainstorming session you could take notes so that you don't forget the stuff, but the computer cannot suggest answers. The brainstorming has no rules other than an objective of increasing sales or decreasing defects, but after that the discussion points are limitless. The virtual methods use the 'WHAT ELSE' method, which asks: "What else is relevant to the objective?"

In summary, the real methods specify what is inside the box and how things inside the box could be made more efficient. The virtual methods redefine the box itself.

So What

You always want to ask this question because there is always a lot of noise in everything. The thing is that the real methods are used by your left brain whereas the virtual methods by your right. You can be dominant in one or the other but you need both.

Then What

When you use both methods you self-organize. So now you have to figure out how the actual left and right brains do it. You will need geometry. Think of the box as the context. "Left" is the finite, "right" is the infinite.

DSSP Topics for January '12

An electron cannot be split. But can the electron be parted -- that is, branched, just as a photon?
The dual slit experiment nicely shows what a photon does when it encounters two (or multiple) slits. A photon parts, and both branches coherently leave the slits (yes, we've got a primer on that). Branches then superpose to different degrees at the points where they reach different parts of a screen, and that is how the undulating superposition pattern of light forms. A problem, if you can call it that, is with an electron. Our image is that of a solid particle and, even though an electron does produce qualitatively the same superposition pattern as a photon in a dual slit experiment, we are kind of stumped on the explanation. We know an electron can and does have a probabilistic distribution (that can vary, too), but somehow the word 'particle' sits in our mind and it is tough to see an electron becoming a wave. This is the spot academia reaches, and here it stops for them. There is no academic (that I know of -- and let me know if there is) who would claim an electron parts just like a photon when encountering two slits.

A Wave

is something that has energy. Unlike a photon, an electron can be imparted with energy. An electron can be shaped as a wave, and such shapes are not confined to 1D (straight-line velocity) à la photon, but can also be in 2D or 3D. The "big" step is the realization an electron can spread and become a virtual electron. In such a state, moreover, an electron can become larger than the distance between the two slits. An electron can indeed pass through a plurality of slits as one entity, and you'll observe self-superposition ("self-interference") once the superposing waves reach the screen on the right (not shown).

For those of you who paid good money (yours or parents') for your education, you've got to undo your work, although you would not be as successful in getting your money back. There are the doozies the likes of Feynman who don't understand waves and feed you particle nonsense. Not all of academia is as screwed up as Feynman, though. Both Schrödinger and Dirac are on track.

The question that needs answering is what happens when the tail (leftmost part in the illustration) of the electron reaches the slits. The thing is that indeed the electron cannot be split -- in half or otherwise. The answer is not that difficult and you can also find it in the upcoming book (mid 2012). Think along the lines of a photon.

DSSP Topics for December '11

A big public university doesn't get it in more ways than one

I think I can say that all of us at one time or another strongly disagreed with statements coming from universities, usually plastered on the web as if they were the most current and objective research results your tax money can buy. I think of it as silly ideas one grows out of. But then there are times I take notice, because the school not only gets the science wrong but is also totally ignorant about this country's heritage of fairness. As if suddenly the university professors got empowered to say anything and do it with public money to boot.

The site at http://van.physics.illinois.edu/qa/listing.php?id=1424 talks about and discusses the pressure a beam of light is supposed to put on a mirror. The moderator is anonymous. Wow. The questions and answers are one sided. No opposing views are posted. A challenge question by the moderator puts you on a page to report sites you think are baloney. Geez. All this on a public nickel. Light pressure, it seems, is something that is not supposed to be questioned because it just is that way.
It seems the University of Illinois is getting money for work that could be just plain nonsense, and then they need the political arm that would tell us we should see the world the same way as their funding. Not bad for a nice piece of fraud. Well, okay, they hide people behind some Mike W. who tells us we should listen to him and 'beware of other ideas.' Anonymity is something used heavily by academia. The anonymous author/moderator, in response to one of the 'light has momentum' questions, says that "It's been measured in countless experiments." Not only does he not furnish a single reference, but in fact there is no single experiment that would measure the light's pressure on a mirror in, say, Newton/m2. For fifty years scientists have been hiding behind equations, and when they lie about the measurements they do so anonymously -- and in this case with public money as well.

If you think the economy is bad because America is not competitive, you are right. But before that changes you'll have to be honest. There is absolutely nothing wrong about booting the professor, too. As for me, I reported their own site as baloney. My guess is that they were never really honest about being honest and so ..

Happy New Year. Think Free Energy

DSSP Topics for November '11

And then it disappears

A thing is a thing because it stays that way. A piece of a rock, a planet -- it must have been put together somehow. Perhaps with the help of the Pythagorean tetractys/tetrahedron. But then it can be unmade as well. If it was made from nothing then it could be made back into nothing. In a way this would be okay because there would be no trash, no poisons. The hope is that the knowledge that has put things together and taken them apart would not be unmade and would in fact be available. But now, would it be possible to have that piece of the rock disappear and then reappear again without being remade? Yep, it would. Now, how about you?

DSSP Topics for October '11

Clockwise and counter-clockwise stars have always made a diff. But why?

Drawing a star clockwise or counterclockwise has always made a big difference in Wicca. Scientists don't care because it does not mean anything to them. In Wicca, CCW is also called 'opening' while CW is 'closing' or 'banishing.' This could be nice and handy to witchcraft folk, but if there is more to it, I'd want to know. It turns out there are harmonious and disharmonious tones, and the nice thing about that is that we can agree on which are which. The harmony in music, then, is absolute. If you could plot the dual tones as geometric stars, you could find out why some (dual) tones are harmonious. Correspondingly, some stars would then be harmonious and some not. Disharmony is presently easier to perceive aurally than visually. The details and the reasons of harmony are in the Quantum Pythagoreans book, and the idea is that the clockwise pentacle is not harmonious while the counterclockwise pentacle is harmonious. So now we give Wiccans a big credit, for their 'closing' is the same as disharmonious while their 'opening' is the same as harmonious. There is more: since any CC star (a CC star is made on Concentric Circles) can be classified as harmonious or disharmonious, other stars -- besides the five pointed ones -- can also be found to be 'opening' or 'closing.' Indeed, there is one ten pointed star that, if drawn counterclockwise in a particular sequence, is harmonious. Yes, we have a picture.
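In place of the picture, a small Python sketch of the point-visiting arithmetic (our addition; "skipping" k points is read as advancing k+1 positions per stroke, which matches the counting convention described next):

    # Visiting order for an n-point star whose trace skips k points per stroke;
    # points are numbered 0..n-1 going around the circle from the top.
    def star_sequence(n, skip):
        step = skip + 1
        seq, p = [0], 0
        for _ in range(n):
            p = (p + step) % n
            seq.append(p)
        return seq

    # The pass is unicursal (hits every point) exactly when gcd(skip+1, n) == 1.
    print(star_sequence(10, 6))   # [0, 7, 4, 1, 8, 5, 2, 9, 6, 3, 0]
    print(star_sequence(5, 2))    # [0, 3, 1, 4, 2, 0] -- the pentagram trace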
I count the points in a convention that goes clockwise starting at the top, and then you have to skip six points to get this harmonious ten pointed star. In Wicca or on this site, this ten pointed star could also be classified as "counterclockwise," because connecting the first two points looks like going CCW. Is the cw or the ccw the key? No. A decagon is disharmonious cw or ccw. A pentagon is harmonious cw and disharmonious ccw [bomb them to the stone age]. The counterclockwise octagon is harmonious, but both of the ways of doing the octacle -- cw or ccw -- are disharmonious. The octacles are used heavily in the Muslim world, and if they don't know what's going on they just thrash about.

DSSP Topics for September '11

Octahedron, third of the five Platonic solids

There are five and only five so-called Platonic solids: Tetrahedron (three sided pyramid), Cube (six squares), Octahedron (double four sided pyramid, base-to-base), Dodecahedron (twelve pentagons), and Icosahedron (twenty triangles). They are called solids because they are in 3D. Even the Pythagoreans get a part of the credit for coming up with some of them, according to Proclus [and I like Proclus].

The octahedron is a solid that, since antiquity, has been shown as the double pyramid. But by giving it a slight tilt a different octahedron emerges. It is made of eight triangles that, however, make a "triangular drum." When folded out, the sides look like the ones shown on the left. In this octahedron the top and bottom triangular sides are labeled as such. The other yellow triangles connect to the top triangular side and point down. The other blue triangles are connected to the bottom triangular side and point up. Logically and topologically it is very similar to a cube, except that instead of squares it uses triangles for all of its sides. So I call it the triangular cube. It can also be called a triangular drum, much the same way a cube could be called a square drum.

Folded Up

It looks like a pyramid on its side. It is really a double pyramid. Being solid, it may be difficult to see inside it, but it can be done. Drawing the edges makes the drawing below. This topic is related to projections and the geometric makeup of a virus. A virus has the shape of an icosahedron, but before it gets to that it goes through several "construction stages."

DSSP Topics for August '11

Star classification -- Non-concentric Circles Star, the NC Star aka the Hyperstar aka the Molecular Star

The last two months dealt with the stars created from concentric circles, or CC stars. These issue from orbits around a single sun. The non-concentric star is in a different category. At this point it would be difficult (for me) to completely describe the orbits in a dual sun system, but in the micro the situation is easier for the five pointed star. This is all new.

The Hyperstar is a five pointed star that is created from non-concentric circles. Without the circles it is a good ol' pentacle or a pentagon or a five pointed star. With circles it becomes a pentacle that, however, does not have a precedent on this planet -- mathematically or culturally. So you can imagine why I like it very much. Details of its geometric construction are here. While the basic hyperstar is five pointed, two of them can be put together 180 degrees out of phase by overlapping the circles. The result is a ten pointed hyperstar. Technically, this star is not regular. However, it is quite useful, and that is one reason why I am proposing the NC Star category here.
Geometry Rules

Now, if the orbitals are the circles, the hyperstar will divide each of the two circles into exact tenths of a circle. In other words, although the star is not regular as far as the distance(s) between points, it is regular as far as the exact and even angles are concerned. Yet the most important part here is that the hyperstar actually enables the creation of molecules from atoms. This is because each circle's center is a spot for an atomic core. Geometry allows the growth of the atom into a molecule. The hyperstar has a particular and non-arbitrary separation of the circles, namely SQRT(5)+1, which is the numerator of the golden ratio. (The denominator is 2.) At this time I don't know if, for example, two eight pointed stars can be put together to form a 16 pointed NC hyperstar.

The NC Hyperstar, The Definition

The Non-concentric Circle Hyperstar divides each of the two non-concentric circles into equal and exact angular segments.

Other Stars

What's left from the stars' categorization are the regular stars, the R Stars. These are covered by Gauss and depend on the exact division of a circle. These further divide (subdivide) into harmonious and disharmonious stars. Such subcategorizing is addressed by the Quantum Pythagoreans book.

DSSP Topics for July '11

Star classification -- The 2nd and last installment for the Concentric Circles Star, the CC Star

Last month we kicked off star classification. Fundamentally, we are in the macro and derive the stars from planetary orbits. Two planets, when merged into one, will create a trace of a star, a CC Star. It also happens that a trace of a star will not skip one point and then two points, because such a trace does not happen for planetary orbits -- though it could make a fancy star anyway. What's left is the difference in the points of a star. One kind of point is made when the trace moves outbound and changes to inbound. The other kind of point is the reverse -- the trace changes from inbound to outbound.

The application tie-in comes from planetary conjunctions. When the two planets are closest together (inferior conjunction) the outbound-to-inbound point is made. This kind of point is the most visible on a star -- it is furthest away from the center (sun). But when the two planets are the furthest apart (superior conjunction), the in-to-out point is made. This point is closest to the center of the star (closest to the sun). So now, the furthest away points can be called 'exterior' or 'outside' or 'extreme' points, but I would not adopt the 'inferior' terminology from astrology. Similarly, a point closest to the center could well be called the 'inside' point.

Harmony again

It is also easy to imagine the two planets disturbing the sun through their angular momentum and introducing a wobble onto the sun. If you were a physicist or an engineer you would see it as a problem, because a wobbling sun is a "problem." It is not a problem, because we want to create the orbits to begin with and we like planets to live on (I say this because scientists are not very bright). Certain orbit frequencies are harmonious, and so you want to know how to get there and make the orbits stable. Here is a pentacle. The two circles are concentric with the sun in the center. Yes, they are the orbits of two actual planets. If drawn as shown with the sequence 1-2-etc., the star is harmonious. In our definition it is a CC star with five extreme points.
A CC (concentric circles) star means it skips the same number of points during its creation, and in this case the star's construction skips 2 points, going (always) clockwise. The numerical characterization (explained in the book) is (5+3)/5, and this also tells you whether the star is cw or ccw and if it is harmonious. Now, this picture is a simplification of a real two-planet merged orbit, because the points are connected in straight lines (in the order the points are made). So what does a real and actual merged-planet orbit look like? Glad you asked. But of course, different planets make different CC stars.

DSSP Topics for June '11

Star classification -- first installment: CC Star (Concentric Circle Star)

The classification of stars comes from math people who think they have every aspect of a (geometric) star covered. They get carried away and ignore possible applications such as music. They ignore nature altogether: the Venus-Earth pentacle shape, having a curlicue, would not qualify as a star, Wikkily speaking. People on the mainstream scientist websites compile trivia, references and circular references, one paragraph at times not agreeing with the next. To top it off, math guys get nonlinear if you suggest they should include examples of applications with their fancy definitions. In summary, mainstream math guys do not lead, and get offensive if you suggest they ought to back it up with an application. At best they try making a virtue out of a need.

Two point star -- the Catseye

Here is what happens when Neptune and Pluto are gravitationally combined. The trace shows how the Sun sees both planets as a single body. It is a shape that resembles a (rotated) cat's eye, having two distinctive points ("corners"). You guessed it, mainstream science does not have a two pointed star, because they see "two pointed" reduced into a circle's diameter and they do not give a damn about possible real-world applications. They also happily ignore the 3:2 orbit ratio that produces this eye shape -- a very well known musical ratio (and a harmonious one at that). It is then easier for the scientists to either ignore the application connection or, if they cannot find an explanation, try to remove the application by declaring Pluto a non-planet. It is perfectly okay for scientists to give up on any part under their purview, just as they did with ether. They can also claim photons are just like little meatballs and play with them in their own sandbox, but I would not pay them for such nonsense. It is then time to come up with our own definition of what a star is.

New Definition of a Star: 1) Concentric Circles Star or CC Star

All stars need circles for their creation. Applications-wise, circles are orbits and so we are in the macro. Because there can be more than one orbit there could be several circles. Initially, circles will be concentric, as there is but one sun in the center. A CC star is a star having its points appearing on one or more concentric circles, although an orbit circle and the points circle are not usually the same. A star's point happens when a shape changes radial direction -- that is, when a shape changes from going outbound to inbound. Going the other way -- from inbound to outbound -- also makes a point, but of a different kind. In our Neptune-Pluto example, all points lie on a straight line and there are four points: two for the outbound change and two for the inbound change. At least two orbiting planets are needed.

Unicursal Star

All concentric stars are unicursal.
This means the trace goes on without stopping or jumping and eventually ends up at the starting point. This is yet another outcome from the orbital application and the planetary real world tie-in. A trace could take any old path to get from one point to the next. However, because the star is tied to the real world application, the path from one point to the next will always have the same distance. Another way of saying it is that if a star's trace skips a point then it continues skipping but one point going around the circle (and cannot suddenly skip two points). Put yet another way, the star is regular during its creation, which means that going from one point to the next will take the same angle. The star can then be simplified by connecting the points in the sequence they are reached, but much information can be lost. For example, if we were to connect the above Neptune-Pluto points, they would end up on a straight line.

In the illustration on the left, there are two CC stars, both pentagons. Each star comes from nature, but overlaying two of them cannot make one ten pointed CC star. Yet, there are ten pointed CC stars.

Harmony

is about two or more frequencies being played together. Taking the orbit frequencies from our solar system, the tones will be harmonious. In general, however, some combined frequencies could be disharmonious and some -- even if they are not present in our solar system -- could be harmonious. The full explanation of harmony's conditions is in the book.

DSSP Topics for May '11

Yes, the 24 pointed and 16 pointed stars can be made via a single geometric construction

The 12 and 24 point stars are mostly reserved for time keeping. The 16 point stars are about navigation, and the 16 point nautical or magnetic stars or rosettes bring you the fine resolution of North by Northwest, for example. The 24 point star starts as a 3 pointed star and by continuous doubling eventually reaches 24 points. The 16 point star starts at 2 points (a diameter) and by doubling gets to 16. In both cases we can continue on and on, but the points ratio will always be 3:2, which is the musical "fifth." How they got to calling it a "fifth" you have to research on your own, for I get flustered by the music people's strange complications. The ratio of 3:2 is just fine by me and very descriptive, too, because frequencies of 300 and 200 Hz being played together are just in the 3:2 ratio, and I don't know why anybody has to complicate things by calling it a "fifth." The musical "fifth" has nothing to do with 3+2, and the last time I checked it had nothing to do with a fifth of a gallon, as in BYOB. It is also easy to make a connection between a three-point star and three wavelengths wrapping about the center and, consequently, the points of a star are directly related to frequency.

We have now ventured into applications, but in a strict sense we are talking about the exact geometric constructions as related to the exact division of a circle. The construction starts with the cardinal X-Y coordinates and uses circles of but one radius. The construction is so simple I was fairly certain I would find it on the Internet. After a short search I did not even find a 12-and-8-point common construction, which is simpler and precedes the 24-and-16-point common construction. The central circle marks the 24 point star intercepts as well as the points for the compass centering of other circles of the same radius. Here is our own construction for the combined 12-and-8-point star.
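The doubling arithmetic behind the combined construction can be spelled out in a few lines (our addition; the code only restates the 3:2 claim and the equal-angle spacing, it does not perform the compass-and-straightedge construction):

    # One family starts from 3 points, the other from 2; each doubles, so the
    # counts stay in the musical 3:2 ratio at every stage, and the points of an
    # n-point star land at multiples of 360/n degrees.
    from fractions import Fraction

    a, b = 3, 2
    for _ in range(4):
        print(a, b, Fraction(a, b))   # 3 2 3/2, 6 4 3/2, 12 8 3/2, 24 16 3/2
        a, b = 2 * a, 2 * b

    n = 24
    angles_24 = [360.0 * i / n for i in range(n)]
    print(angles_24[:5])              # [0.0, 15.0, 30.0, 45.0, 60.0] degrees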
Frequencies, again

Standing at the center and rotating about it, at each revolution you will encounter 24 points following one star and 16 points following the other. This construction procedure can be continued to an ever-increasing star point count. Circle radii stay the same and the circles grow denser around the central circle, but they do not expand outward. The next point of the compass for the 48 and 32 point combo star would be at the NNW and WNW intercepts with the center circle -- repeated for the corresponding intercepts at the other three quadrants.

DSSP Topics for April '11

Energy classification in dimensions, continuing with dynamics

The last two months we delved into energy classification based on dimensions. So far each of the 0D, 1D, 2D, and 3D energies has its own "niche" that is easier to understand than saying chemical energy, for example. Is there a most important dimension? Yes, there is. When it comes to all kinds of energies doing their thing in various dimensions, the most significant is the dimension zero. This is not difficult to see, because spin and rotation and orbits are about the point symmetry. We also need a point in a very esoteric aspect of transforming energies, for the real and the virtual domains can interact only through a 0D point. The thing is that the dimension zero is very much ignored outside of geometry, and so the scientists have their gaming universe to themselves. In other words, you have to appreciate that scientists know very little. The degree of specialization is not moving us forward, really, because everybody is right in a narrower and narrower field.

Moving among dimensions

This is not about faster-than-light travel, but it is close. Energies can move from one dimension to another and then, for example, transforming rotational energy to linear energy results in the straight-and-forward 1D motion we love so much. But now the thing moving in 1D could have its energy detached and applied onto something else. Yes, but so far we can understand this only through collisions. Something transfers energy onto something else and makes it spin. So now, can this energy transfer be done without a collision? Yes, it can. And that is the best part of energy. An accelerating body is accepting energy. Once we figure out the energy attachment mechanism, we can then figure out the detachment mechanism -- and stop moving bodies at a distance -- but then attach energy to something else again, without a collision of course.

DSSP Topics for March '11

Energy classification in dimensions, continued

Last month we proposed the classification of energy based on dimensions. So, as there are 0D to 3D degrees of freedom, we will use this schema to talk about the energy. We also said the scientists are clueless when it comes to classifying intelligence, and this month we will see where the smart energies are. The Chinese, interestingly enough, define several intelligent energies. I will stick to calling 2D based energies the 'free energy,' in line with the general pursuit of harnessing these space borne energies. The existence of 'free energy' is presently being denied by the mainstream and taxpayer-paid scientists but, you know, it's a matter of you figuring it out rather than waiting for a handout.

2D and 3D energies

2D energies are not exactly free, for the energy is diverted from the Earth's orbital momentum or by deconstructing matter, for example. 3D energies have nowhere near the physical oomph of 2D energies, but what they do have is their infinite uniqueness.
3D energies are sculpted in 3D and, being a wavefunction (a virtual energy), 3D energies are difficult to read. The energy has to reduce, and an appropriate 3D physical construct must exist to intercept it. When you study, you memorize certain things, and such things have a geometric representation in your brain. Such representation, then, is capable of intercepting space-borne intelligence. A motion of your physical body has a similar memory component and, well, you have to get into Tai Chi to move from just talking about it to actually feeling it. 3D also deals with energy storage that has a 0D component. Healing energies are also in 3D. Our structure is in the aura, and there are intelligences the likes of angels that can deal with fixing your body. Everything you do is stored, and some say everyone has a plan that's stored in there as well. Technically, things can be denied, but that could make it worse. After all, you would not want your knowledge to disappear if you were to lose your diploma.

1D energy is your everyday photon. On this planet the linear kinetic motion of automobiles and rifle projectiles dominates, but 1D energy is pretty rudimentary and in the long run not very effective.

DSSP Topics for February '11

Energy classification

Energy is presently classified along easy-to-visualize lines -- kinetic, potential, thermal, chemical, photonic, and so on. Not bad. Actually, the visualization aspect is important. Yet if we don't understand what energy is, where it could come from, and what it looks like, we are limited to descriptions such as "healing energy" or "harmonious energy," which is not very revealing. For some people it is apparent that knowledge, intelligence, and information are also forms of energy. The scientist, however, is clueless when it comes to explaining intelligence or information, and there is then a need to delve into energies along different classification lines. The Chinese differentiate energies associated with a human body into about four categories. Chinese health and Martial arts literature is available and abundant in explanations, but I don't want to expound here on the Chinese reasoning when differentiating chi and ch'i, for example. I don't need to, actually.

Our baseline for energies is very forthcoming. It issues from geometry and, in particular, from the 0, 1, 2, and 3 dimensions of independence. An atomic electron is in an orbital and carries 2D and 3D energies but not 1D energy. A free electron, when moving, carries 1D energy, while its spin is 0D energy. Light is 1D energy, and so on. The dimensions of independence give us an excellent framework, which is particularly revealing during energy transformations. For example, a photon of light issuing from an atom receives 2D energy from an electron's orbital, but the resulting photon is in 1D. This is where the squaring of a circle comes in, and our hyperstar is very useful in this regard. (The hyperstar is treated on this page in the Jan 2011, Dec 2010, and Feb 2010 DSSP topics.)

DSSP Topics for January '11

Hyperstar art

The hyperstar is our particular creation of a pentagon and, consequently, a pentacle and a five-point star. It comes about from a direct construction of the golden proportion. The end result is the hyperstar, which can be rendered with five or ten points. You will note the hyperstar is not made from just one circle.
Picture filename: hyperstar_5-point.gif

Last month (below) we highlighted one unusual property of the ten-pointed hyperstar: some points lie on a straight line and do not curve as we would expect on a conventional star. Yet the hyperstar contains a large number of design elements in addition to the star itself: lunes (moons), circles, rings, and triangles -- all relating in particular proportions and angles. The triangles are golden, too. By putting and keeping the elements in their original proportions, you and I can create art that is specific to -- yes -- atomic and cosmic constructions. Here is but one example, but you can see many selected designs here. Enjoy.

Picture filename: pentagon-moon.gif

DSSP Topics for December '10

When the star is neither concave nor convex, what kind of star is it? A: The Hyperstar

All polygons are convex. The trace moves from one point to another and the direction (curvature) does not reverse. It is the same thing as riding a race car on an oval -- you steer in one direction only. On a concave track you reverse the direction from right to left and vice versa. So, the octagon below is convex, and so is the octagram if you trace it as indicated.

Picture filename: convex_octagon.gif

An octacle, however, can be both concave and convex if we count the inside bends as points -- or if the minor points cause a concave trace:

Picture filename: convex_and_concave_octacle.gif
Picture filename: convex_and_concave_octagon.gif

That is how things stand. The scientists classify all possible stars and everything is covered. Well, not our Hyperstar:

Picture filename: hyperstar_not_convex_not_concave.gif

The Hyperstar

When moving from point 1 to 2 to 3, there is no bending. The hyperstar is neither convex nor concave. Classifying this star could be a real pain. But, at the end of it, it is a star whose classification has nothing to do with counting points. So there. Oh, the lines 1-2, 2-3, and 1-3 each form a side of two golden triangles. Happy New Year.
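Before leaving the convex/concave business: here is a minimal numerical sketch (our own helper, using the standard cross-product test for turn direction). A trace is convex when every turn carries the same sign; the octagram below, traced point to point, never reverses.

```python
import math

def turn_signs(points):
    """Sign of the cross product at each vertex of a closed trace:
    all one sign means a convex trace; mixed signs mean concave bends."""
    signs = []
    n = len(points)
    for k in range(n):
        ax, ay = points[k]
        bx, by = points[(k + 1) % n]
        cx, cy = points[(k + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        signs.append(1 if cross > 0 else -1 if cross < 0 else 0)
    return signs

# octagram trace: visit every third point of a regular octagon
octagon = [(math.cos(k * math.pi / 4), math.sin(k * math.pi / 4)) for k in range(8)]
octagram = [octagon[(3 * k) % 8] for k in range(8)]
print(turn_signs(octagon))    # all +1: the octagon trace is convex
print(turn_signs(octagram))   # all +1: the octagram trace also never reverses
```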
DSSP Topics for November '10

When you direct a laser at something and observe a motion, you ..
 a) Write a new theory about it
 b) Know some light got absorbed
 c) Never get motion in a vacuum without consumables

Lasers are nice to work and play with, so we point them at just about anything. They heat up a surface so much they cut right on through. And because most people think photons carry momentum, they also think laser beams can push things. But they don't. A laser beam cannot push anything by reflecting from it. This does not agree with the equations the scientists have, but never mind that. They, the scientists, love equations, and if you ask one to build a laser-beam perpetual motion machine by bouncing a light beam back and forth between two parallel mirrors, they really cannot do it -- but they love their equations so much they don't care that the equations don't describe reality. They point their lasers at liquid bubbles or shoot a laser through a bent fiber optic cable. Bubbles stretch, the fiber bends a bit more or less. That would be okay, for people like to invent a new instrument that can pick up a small object with a laser, say. The bottom line is that the light from a laser must reduce into heat if some movement is to be observed. So now the scientists draw different conclusions about light. They bring in theories that support or contradict this or that scientist. They toss in Nobel prizes to improve the scientist's credibility, but light is not a stream of little bouncing balls, and so they go back and forth arguing the same premise of how many angels can fit on the head of a pin. Everybody makes a point, but all of it is irrelevant because -- from Compton on through Einstein -- they cannot get past the little billiard balls when it comes to the photons of light.

How can you tell where the truth is? On reflection, from a mirror say, a photon remains a photon -- that is, energy. There is no momentum transferred to a mirror when light bounces from it. When a photon reduces into heat or electrical energy, the photon is no longer a photon and is gone: one form of energy transferred to another. In the vacuum of space it is impossible to create motion from a laser because the absorption opportunities just aren't there. I suppose you can evaporate some material and make it move, but when the material is gone, so is the motion. You can move in the vacuum of space through other means, but gravitational forces are not photonic. To move in the vacuum you will need to understand gravitation, and that becomes a different topic altogether.

What's wrong? There is nothing wrong with writing about a so-called science even if it has no merit and no return on investment. But if NASA thinks they are doing breakthrough science, they are just showcasing their stupidity. It would also help if the Nobel committee were to toss out Compton and Einstein and admit that both the committee and the scientists botched it up. I've no problem leaving the scientists behind, along with their baggage. Yes, there are links to all the right angles on light.

DSSP Topics for October '10

Modulo math is about rotation and recycling. Is there more?

For two thousand years modulo math was in the esoteric category. Pythagoras first brought up the decad -- or modulo-10 -- when HE spoke of how, after the decad of ten, one returns to one to start another decad. This was picked up by the Greeks and Hebrews, who assigned numbers to letters up to nine, the next letter restarting the count. Gauss took up modulo math formally and showed how to predict the divisibility of very large numbers with simple digit summation. Not long ago, in the 1980s, modulo math was applied in public key cryptography. In essence, our clock is modulo-12, because after the 12 the hand restarts at one. And so we can pick any whole number N and apply modulo-N math.

Can modulo math be advanced? But of course -- you knew this is a rhetorical question. In the present modulo-N setup the reference stays put: the 12 o'clock position stays on top. Yet if you forget about the 12 o'clock position and include the small hand of the clock as a modulo reference, which is now allowed to move, we will have a modulo in the 12-to-1 ratio. For every 12 rotations of the large hand, the small hand makes one rotation. There could be other ratios. In the 8-to-5 ratio the faster hand would make 8 revolutions for every 5 of the slower hand. Isn't it nice that Venus and Earth revolve around the Sun in this ratio? And Earth and Mars in the 15-to-8 ratio? In the book Quantum Pythagoreans the notion of orbit rationing (the forming of orbit ratios) is applied in the explanation of harmony and disharmony. To top it off, orbit ratios open up a new chapter on modulo math, which as yet does not deal with a modulo of two numbers such as modulo-8/5.
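A minimal sketch of the above, with our own variable names: Gauss' digit summation is modulo-9 math, two hands can be locked in an 8-to-5 ratio, and the planetary figures can be checked against published orbital periods (one common reading of the Venus-Earth 8-to-5 is 8 Earth years to 5 Venus-Earth conjunctions):

```python
# Gauss: a number and its digit sum leave the same remainder modulo 9
n = 987654321
print(n % 9, sum(int(d) for d in str(n)) % 9)   # same remainder, here 0 and 0

# two clock hands locked in the 8-to-5 ratio; positions as fractions of a turn
for step in range(6):
    t = step / 10                                # a few sample instants
    fast, slow = (8 * t) % 1, (5 * t) % 1
    print(f"t={t:.1f}  fast hand at {fast:.1f}  slow hand at {slow:.1f}")

# the planetary check, using published orbital periods in days
venus, earth, mars = 224.7, 365.25, 687.0
print(earth / venus)              # ~1.63 Venus revolutions per Earth revolution
print(mars / earth)               # ~1.88, close to the 15:8 figure
synodic = 1 / (1 / venus - 1 / earth)            # days between conjunctions
print(8 * earth, 5 * synodic)     # ~2922 vs ~2920 days: the 8-to-5 tie-in
```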
DSSP Topics for September '10

Bring on the waves
Adding an axiom or two to Euclid can help with free energy

Axioms are axioms because they need not be questioned. They are common-sense, obvious kinds of things that need no proof. If we take Euclid's axiom saying that through two points one can draw but one line, it does not make much sense to argue for or against it: it is just right the way it is. Or is it? This guy Euclid was not really up on irrational and transcendental numbers. He proves that irrationals such as the square root of two cannot be obtained by rationing (as a ratio of whole numbers) and leaves it at that. Euclid has no clue about what the irrationals or transcendentals can do for you, and he just leaves them on the sidelines.

A solid line from this point to that point is about magnitude. It has some length we can measure exactly. So we cut a two-by-four to some exact length and feel good about it. But if the distance between two points is irrational, there is no way we can cut a stick or a piece of something to exactly that distance, because we do not know where to make the cut. An irrational distance has an infinite number of nonrepeating sub-unity digits (called the mantissa), and even the expert measurer cannot decide where to make the cut because the decimal digits go on and on and on. Yes, I know you will say the third decimal place will not make a difference in the height of a door, but the idea is that in nature no real thing can exist where the irrationals live.

The pesky irrationals

Okay, so we give up on the irrational distances because no real thing will fit in there, and we ignore them the same way Euclid did. Hey, we can even ignore those math guys who say there are more irrational numbers than rational numbers. They are theoreticians, you know. But what can actually fit in the irrational distance? Better yet, is there something that's happy about irrational distances? Waves can fit a rational or an irrational distance. Waves can span the irrational distance just fine and do so exactly. But you have to give them space because, as you know by now, a rational piece of something fills in the rational length and, by excluding the irrational distance, excludes the waves as well. And so the waves need an empty space to form and to exist across any distance.

The transcendentals are in the same boat, except they do their thing in the geometry of a circle. A circle cannot be squared as a whole, and so one cannot make a perfect circle from wood, say, because a real thing is cut to a rational measure and the circumference of a circle never works out to one. So, similarly, it is up to the waves to make a perfect circle of 2Pi. It just might be a good time to add to Euclid's axioms and fold in the waves. They are important because they are everywhere and we want to work with them. But of course, waves are about energy.
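As a quick illustration of the never-ending mantissa -- our own sketch, nothing more -- here is the square root of two to fifty decimal places; pick any cut and there are always more nonrepeating digits behind it:

```python
from math import isqrt

def sqrt_decimal(n, digits):
    """Decimal expansion of sqrt(n) to `digits` places, via integer square root.
    Written for small n whose square root starts with a single integer digit."""
    s = str(isqrt(n * 10 ** (2 * digits)))
    return s[0] + "." + s[1:]

print(sqrt_decimal(2, 50))   # 1.4142135623730950488... and on it goes
```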
DSSP Topics for August '10

You know the argument about the nonexistence of God: If God were all-powerful, He or She could build a wall ..

.. If God were all-powerful, He or She could build a wall such that nobody, God included, could get over it. If He could not get over the wall, then God would not be all-powerful. If He could, then God would not be all-powerful either, because He did not build a wall that would stop everybody. Some people stand back with a smile, waiting for your reaction. Some think a man's brain can put an end to God. And then you say, 'Can you make the biggest dwarf? You can make him out of concrete and put him in your garden, you know, but it has to be the biggest.'

Specifying something also calls for excluding something else -- for otherwise the spec would not be a spec. By specifying going straight we exclude going in circles. By requiring getting wet we exclude being dry. By requiring the building of a wall we deny passage. You can ask God to resolve a conflict, but you show only your poor thinking if you want two contradicting things to happen. Things happen for a purpose, and a wall can be created -- but for another purpose the wall can be uncreated. There are many, many purposes -- one better than the other. When you enter the infinity you will see God.

DSSP Topics for July '10

How is force linked to inertia and mass?

Newton defined inertia as a force (vis inertiae in Latin) that resists a change in the direction and/or speed a mass body may have. A body may be at rest, moving linearly, spinning, or orbiting, but to effect any change to the path or speed, the inertia is and must be engaged. Newton, then, defines a dynamic property of mass and calls it inertia. The good thing is that inertia works for any body and is the same everywhere in the universe. Inertia appears linearly proportional to mass: a body with twice the mass has twice as much inertia. This linear proportionality is confirmed in a laboratory or in your backyard.

Some other background

An electron has mass. When accelerating electrons to higher and higher speeds, we would expect the electron's inertia linearity to hold, and we could technically achieve any speed we want. Such is not the case, however. Adding more and more energy to an electron speeds it up a bit, but the speed does not exceed the speed of light. Energy conservation continues to hold -- that is, even though the electron's speed finds a limit at lightspeed, any amount of energy imparted onto an electron can be recovered in full. So now you have to decide on the mechanism that is taking place here. Some took the easy way out and declared that the mass of the electron is somehow increasing, and that is why we are up against higher and higher resistance and thus higher and higher inertia. But then the conclusion is that the mass of an electron would have to increase to infinity as lightspeed is reached. Yet letting an electron acquire mass just like that is somewhere between pseudoscience and stupid science, because it does not deal with the mechanism of inertia, which is about the mechanism of the conservation of energy.

Inertia is about the conservation of energy

When a mass body speeds up, the energy put into it can be recovered in full, sooner or later, gradually or suddenly. You can speed up a body, but to do so you will need to push it some distance, and with distance you get a direction. It is this direction that is also being conserved, for if you speed up a real thing going North, say, the body will continue to go North. Now, the only entity that has speed and direction is a wave, because a wave is nonlocal and must have spatial distance. Inertia must be a wave-based mechanism, and the conservation of energy is also wave-based.

How it works

Speeding up a real object imparts frequencies to the atoms comprising the object. Higher frequencies have higher energies, and that is how the work put in to speed things up is stored. Yes, the energy speeding up an object is stored with the object. Big deal? You bet. By affecting the frequencies at a distance you can speed up or slow down an object at a distance.
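For what it's worth, even the mainstream's own formulas show the pattern claimed here: speed saturates below lightspeed while the energy -- and the frequency tied to it through E = hf -- grows without bound. A minimal sketch with textbook constants (the textbook's interpretation differs from the one above):

```python
c = 299_792_458.0        # speed of light, m/s
h = 6.626_070_15e-34     # Planck constant, J*s
m_e = 9.109_383_7e-31    # electron rest mass, kg (it stays the same throughout)

for beta in (0.5, 0.9, 0.99, 0.9999, 0.999999):
    gamma = 1.0 / (1.0 - beta ** 2) ** 0.5
    energy = gamma * m_e * c ** 2      # total energy, joules: grows without bound
    freq = energy / h                  # the frequency carrying that energy, Hz
    print(f"v = {beta}c  ->  energy = {energy:.3e} J,  frequency = {freq:.3e} Hz")
```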
Mainstream science has been in the stupids category for about 100 years now. Einstein is okay for keeping the reductionists in their black hole. Sooner rather than later, though, Einstein and the reductionists will have to go. Lack of truth gives rise to many opportunities, and I see the success of Tai Chi in Europe and Canada as a rejection of the status quo. Tai Chi is in the area of health and self-improvement, but it has a geometric foundation. In physics, the idea is that you can work this month's topic without the so-called scientists and make some real progress, too. The wave mechanics dealing with the conservation of energy is limited by lightspeed, but the frequencies are unbounded. Perhaps you see now why the speed of a real object cannot exceed the speed of light yet the object can accumulate higher and higher energy: the frequencies are unbounded. At lightspeed the frequency, and thus the energy, goes to infinity (but the mass stays the same).

DSSP Topics for June '10

When two waves add up to zero, where does the energy go?

It is easy to visualize two ocean waves adding up to a bigger wave. One half of the new wave gets higher and the other half gets lower, but the overall feel we get is of a wave that has twice as much energy. After all, a boat on top of such a wave would be lifted twice as high. But if the two waves do not come together at the same time (coherently, in phase) but instead meet each other one half-wavelength later (180º out of phase), the waves add up to zero and there is no visible wave action. The energy of the waves seems to have disappeared.

Picture filename: wave-phase-addition.gif

Energy and Geometry

Yet we also know that the keys are with energy and geometry. When the boat goes up it receives energy, and when the boat goes down it gives up energy. The net (or average) energy is zero. Yes, the net energy is zero even if the height of the wave doubles. In practical terms the up-and-down motion does not take away or give this wave energy. What this also means is that ocean waves do not push a boat along -- that's up to the wind, for example. If such is the case, where is the energy of a wave?

Geometric split

The height of a wave can be visualized as an even polarizing split happening in the second dimension. Exactly one half becomes positive, the other half negative -- but such is only the case for the up-and-down direction, called the transverse direction. Can you visualize the creation of a wave as creating something out of nothing? After all, the net energy is zero before and after the creation of a wave. (Careful -- think transverse vs. longitudinal. Transverse is at a right angle to the longitudinal.) You supply energy when you create a wave. Such energy can be real or it can be virtual. In either case the energy must be imparted in some direction that ends up being the direction of the wave propagation, and that is the longitudinal. Technically, force must happen over (some direction of) a distance for us to speak of energy. When you plink a stone on water you use real energy. Self-test :-) If you agree that ocean waves are real waves, continue.

But photons are made with virtual energy -- all photons are wavefunctions based on the SQRT(-1). Wavefunctions, being virtual, cannot push a real object such as a boat, but they do interact logically with the environment they encounter. Wavefunctions also cannot push a mirror ("boat," load) longitudinally, although they can reflect from it, and this is the manifestation of the difference between the real and virtual energies. So now you know that the energy is along the longitudinal. Even if two waves add up to zero, they do so transversely, and the energy remains in the longitudinal. There is more to this, and we have a bunch of photons-of-light topics on this site. Say, can we keep adding photonic energy in a very small space and do so without limit?
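A minimal numerical sketch of the two cases above: in phase, the transverse amplitude doubles; 180º out of phase, the transverse displacement cancels everywhere (the point being that the energy rides along the longitudinal):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 1000)
w1 = np.sin(x)                    # first wave (transverse displacement)
w2_in = np.sin(x)                 # second wave, in phase
w2_out = np.sin(x + np.pi)        # second wave, 180 degrees out of phase

print(np.max(np.abs(w1 + w2_in)))    # ~2.0: the wave height doubles
print(np.max(np.abs(w1 + w2_out)))   # ~0.0: no visible wave action
```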
DSSP Topics for May '10

Is matter smart?

The atom could be smart, considering its ability to stay together under some pretty extreme conditions. The atom will stay essentially complete even though it loses some electrons from its orbitals and becomes ionized -- only to revert to its original state when the electrons become available again. Some atoms can withstand high temperatures without breaking up -- and we can only compute the millions of degrees it would take to break them up -- and ooh and aah about it. The atom relies on radial symmetry -- the core being at the center -- and radial symmetry is a concept worth exploring. In the micro we also find radial symmetry in the free electron, which has a propensity to spread -- yes, in the radial fashion. Radial symmetry also exists in the macro, in the way our solar system is organized. Now, the concept of radial symmetry could become embodied as a form of intelligence that is "out there." So far it seems simple, but consider that the orbits and orbitals need to resolve the squaring-of-a-circle problem to reconcile the linear (photon) and the rounded (electron) energies for an atom or a molecule to remain unbroken. Perhaps now you might agree that, just maybe, the atom is smarter than most of us.

So what if one could actually talk to or interact with such intelligence? It would not be a person, all right, but I think an interaction could be had. For example, if you had an idea for a new power source, you could conceivably get feedback on whether this new source is agreeable to the "Principle" of the radial symmetry that subscribes to the atom's stability. In the esoteric department such a principle could actually manifest and act in support of or against the new power source. Can this get weird? Yes. As tempting as it is, take it easy and get well grounded before making conclusions. It's nonlocal out there. Oh, it could have something to do with the Philosopher's Stone.

DSSP Topics for April '10

What happens when God sends an eye ..
 1) You've got to be kidding
 2) He/She has but one left
 3) It's a figure of speech
 4) It's the magic of geometry

We keep an eye on things, or someone keeps an eye on a kid. Yeah, there is the evil eye that's supposed to give you the jinx and the jibbies. A private eye keeps an eye on others, but that is not what God would do. If He or She had to pay someone for snooping, then He would not be God. But there are stories of ancient Egyptian gods 'sending an eye.' The idea behind such an operation is to find someone and cause something to happen.

Thoth did it

He sent an eye after Tefnut when she ran away to Nubia. The story has Thoth and Tefnut's partner Shu doing magical things to find Tefnut. Thoth sends out an eye -- in effect a zero-dimensional construct of a focus -- to find Tefnut. Such a focus is not optical but logical. Technically, it could find Tefnut wherever she is. 'The Eye' is a geometric construct of a point, and you will find it on the AUM symbol, too. It is "a link" to the intelligence of the universe, which is, nonetheless, subject to the (subjective) objective of the person doing the extraction and using it locally. So much for the technical side of things.
The story has a happy ending in that Tefnut and Shu are reunited, and after purification and reconciliation they both return to Egypt. They get married, too. This sounds like a really good bedtime story for kids, or for farmers relaxing after a long day in the fields -- except that the story continues and says that a human race could arise after that. Okay, so this must be one of the creation myths then -- yes, the world is full of them. Except that this story is from the funeral texts, at that time reserved for the Pharaohs alone.

Re did it

He sent an eye after all of humanity. It was not supposed to be in anger, but it had almost the same effect. Quite an adjustment, considering it worked as planned. Two Goddesses received the message through Re's eye and went on a rampage. Humans' faculties got reduced considerably, and new agencies were inserted between the divine and us. Supposedly we lost the direct link to the divine. If you think this is too much bad news, chalk it up to April 1. If you know a bit about the ancient Egyptians, you know about Hathor and Sekhmet and Re and Thoth. We can still reach the divine, but it will take some learning work. You've got the Internet, you've got this site, and you just might be in a position to figure it all out. Actually, I have a theory about Re's adjustment: humans are, really, pretty smart -- and it has to do with the Corpus Callosum.

DSSP Topics for March '10

The virtual domain is imaginary and ..
 1) We can enter it with imaginary numbers
 2) It is about the infinities
 3) It can get very confusing
 4) It can be very creative

In the recorded and unrecorded beginning, the irrational numbers sprang to our consciousness through the geometric root. Pythagoras and HIS school were the source. Soon, however, the infinite part of an irrational number started giving other people problems. So much so that stories were written saying the irrational numbers were an embarrassment to the Pythagoreans and that's why they swept them under a rug. This is not so, simply because if you discover something new and you are in the discovery business to begin with, you keep it close to your chest: the treasure chest of the Pythagorean School. Let the outsiders think what they may.

But the irrational numbers are not the virtual numbers. That took another thousand years, until the workings of the cube root led to the square root of a negative number. Eventually the square root of minus one began to find applications, and it evolved into a new word: wavefunction. Mathematically, the root of -1 is called i, the imaginary unit number. But we prefer the word 'virtual.' Saying 'imaginary' is not bad; it just does not have the application associations we think it deserves.

The strange get-together of irrationals and virtuals happens with a photon. The photonic wavefunction moves, but because it can be at any spot (at any 0D point), a photon can span an irrational distance and be, or move, infinitely smoothly at any possible spot. Now, the scientist continues to have a problem with that because he uses algebra to arrive at specific answers, and when it comes to a photon he cannot get the exact answer. Have you ever heard that a photon carries virtual energy until it is absorbed? On this site you did, but the scientist sweeps the virtual photon under a rug. The virtual domain is about waves. These waves are not real, but that's okay, because in the virtual world the waves superpose and the full infinity of them can do that. On top of that, the superposition is instantaneous, and so working with infinities may not be that difficult, for it can be done in finite time.
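If you want to touch the virtual unit on a computer -- a trivial sketch, ours -- the square root of minus one squares back to minus one, and a phasor built from it rotates without ever changing its real size:

```python
import cmath

i = 1j                        # Python's spelling of the square root of minus one
print(i * i)                  # (-1+0j)

theta = cmath.pi / 5          # any phase angle
psi = cmath.exp(1j * theta)   # a wavefunction-style phasor
print(abs(psi))               # 1.0 (up to rounding): it rotates, its size does not change
```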
As a Pythagorean you are not only into discovery -- you are also into creation. But of course, it is creation that uses waves and the numbers that stand behind them. You want to know how to entice the virtual energies to curve, and an electron can become virtual and carry virtual energies as well. Once you know how to make them curve, you can make an atom. Imagine. Oh, don't forget the irrationals the likes of the root of five.

DSSP Topics for February '10

So you think there is something special about the pentacle. Pick the best one:
 1) Pentacle is but a pentagram in a circle
 2) Pentacle is witchy
 3) Pentacle is the source of the pentagram and there is more than one pentacle
 4) Pentacle is about the atom
 5) Pentacle and the human body have more than correspondences together
 6) All of the above

For a while there was just one pentacle -- a five-pointed star drawn with straight lines, called the pentagram, surrounded by a circle. And the men painted a person (yes, a man) over the star and wrote micro- and macrocosmic stuff about it. A few hundred years ago this guy Agrippa surrounded the pentagram with two circles, others with three. Even King Solomon broke off a point on his seal, trying to butt in post mortem on the fame of the pentacle. The occult guys took up swords in ceremonial fashion, stepped into the double-circle pentacle, and began conjuring the spirits. As it happens, the division of a circle into five is doable exactly geometrically, and everybody likes this so much that the five-pointed star is truly worldwide.

But this month we are dealing with creation and staying in the micro. The atomic orbital must have an integer quantity of waves if such waves are to close about the nucleus -- and the number five will do nicely. There is more to this, and it is about stability. The atomic orbitals can change incrementally, and in the new closure of the orbital the numbers are important. So the ability to create a pentagram using various orbits is important as well.

A new star is born

Illustration filename: hyperstar_free_spirit.gif

More specifically, a new root of creating a pentagram is born. It takes non-concentric yet interlocking circles, and a new pentacle is created from them. Because the pentacle is free to rotate in space, a double pentacle of a ten-pointed star is possible, with the result that all intercepts making triangles, trapezoids, pentagons, and a center diamond are golden -- that is, all their lengths are in the golden proportions. This is very nice, especially if you think about orbital change and the squaring of a circle -- the exchange between the electron's curving energies based on transcendentals and the straight photonic energies based on irrationals.

All triangles, pentagons, trapezoids, and the diamond are golden

Illustration filename: hyperstar_north-south_escape.gif

There is more in the micro: It's gas

Two non-concentric circles show the ability to create valence orbitals in a molecule, for the two atoms are physically separated. Gases such as hydrogen, oxygen, and nitrogen come as pairs of atoms, and the valence orbitals hold the molecule together. But there is more to it if you can apply it in the macro. I call the non-concentric-circle pentacle the hyperstar: the points of two hyperstar pentagrams form a North-South axis that can explain a lot of stuff, such as spin, crystal formation, and the shape of the core. You can fancy up the hyperstar, too. Oh, the radii of the hyperstar are in the golden proportion, and there are three of them.
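Here is a minimal check -- our own sketch -- that the pentagon and pentagram hide the golden proportion: the diagonal-to-side ratio of a regular pentagon is exactly the golden number.

```python
import math

phi = (1 + math.sqrt(5)) / 2      # the golden proportion, ~1.618
# vertices of a regular pentagon on a unit circle
pts = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
       for k in range(5)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

side = dist(pts[0], pts[1])       # adjacent vertices
diag = dist(pts[0], pts[2])       # vertices two apart -- a pentagram chord
print(diag / side, phi)           # both print ~1.6180339887...
```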
Hyperstar of your fancy

Illustration filename: hyperstar-pentacle-fancy.gif

There is more in the macro {Mar 6, 2010}

The fancy hyperstar on the left has many connotations. In Tai Chi, particular movements are named after animals the likes of tigers, snakes/dragons, and storks, but on this site we'll stick to the terms of the Pythagorean School. When you think of the hyperstar illustration in the Martial arts context, there is one mind-body movement that offers a close fit. It has the fist of one (right) hand moving to the (right) hip while the other hand is extending forward, palm out. So now I'd call this movement drawing three bows. The couplex (dantien) is not shown, but it is in the center of the intersecting circles. (Movements named after animals are nature-inspired and are not indicative of inferior understanding. In China, then and now, Martial arts need to be disguised for fear of reprisals. Tai Chi is great for health, and most schools do not emphasize the fighting dimension. A bureaucrat, however, just does not take a chance on distinguishing one from the other.)

DSSP Topics for January '10

Frequencies create, mostly

Frequencies tend to engage a system, and at times a resonance is found. The New Age types like to say resonances are good, but in physics a resonance can damage or even destroy a system. Not everything and everybody is powerless, however, because some frequencies will be absorbed and then reradiated -- and so the energy does not build up to the point of destruction (think photons). If a frequency is in 1D while the structure is in 3D, the energy-carrying frequency does not necessarily become resonant -- that is, absorbed -- and the waves of such frequencies go right on through. If the memories in your brain are 3D structures, it just might not be easy to match that and mess with it logically on a signal level.

So now

You are a creator and want to make something. You will forgo the familiar kids'-stuff Lego block construction, because that just makes bigger things out of smaller things. We now want to create the small things, and so we get into the micro to see what's available to us. We start with the alchemy of the ancient Egyptian texts and move on to Pythagoras. From the Sphinx we find out how to make electrons. From references to Pythagoras (Aetius) such as: ".. and for him [Pythagoras] one of the first principles tends toward the creative and form-giving cause, which is intelligence, that is god, and the other tends toward the passive and material cause, which is the visible universe."

The Form (the Style, the Character) is the logical image of what the creator wants. The materialization (the "passive") requires the new system to be systemic if it is to be stable and self-perpetuating -- that is, to be a monad -- and right away we know things have to happen in a circle. The logic of the virtual energy has to close in a circle, and here is where multiple frequencies come in. With two or more frequencies we create a harmonious interplay if we know which frequencies are harmonious together -- and usually we want to know why. The frequencies used for creation have no absolute value, because we are dealing with ratios (which is very Pythagorean). So we have linear 1D energies such as light, but we have to make a circle out of them, and for real things we need 3D constructs such as the pyramid geometry (angular momentum is in 3D).
To square a circle we must use infinities and superpose them tractably. Tractability is via the superposition mechanism of waves (wavefunctions), which is inherently without delay. There isn't much to it, is there? Happy New Year workings ..

DSSP Topics for December '09

Frequencies are loaded

Frequencies are all around us. They can be carried by sound, by electromagnetic radiation such as light, or by electrons that move about the atom or in free space. Many sound waves can be heard; many photons can be seen. Electrons can be neither heard nor seen, but they can be felt or detected by instruments. Frequencies are the inverse (the reciprocal) of periods. So planets in their periodic orbits also have frequencies. So now we compile various frequencies and find some more important than others. We talk about frequencies coming out of the pyramids or resonating inside their chambers. We compile frequencies that kill viruses, just as Rife did. During the Renaissance we talked about the tones of the heavenly spheres. There are the frequencies of the shaman's drum, and there are the tones of AUM. There are frequencies captured by geometric structures. There are changes from linear to circular frequencies of creation and permanence, and then we think of it as putting fire into water -- and all that esoteric and alchemical stuff. Frequencies also differ by their transverse or longitudinal oscillation. Most recently (last month), we talked about frequencies that happen in up to 3D, and for that you need to appreciate that local, solid, and real things are not the only things that vibrate or orbit.

Creation and destruction

It is easier to destroy something, because there are many things around us we can experiment with. Destroying cancer-causing viruses is good, too, but, ironically, there are people bent on destruction who do not want you to know about Rife frequencies. And so you've got to find out for yourself. When it comes to the creation of real matter, and in healing, we need to understand what makes harmony and disharmony, and then you need more than one frequency. Creating matter can be good, but creation also tells you something about destruction -- although, as always, the precise and effortless cutting of stone is a great example of the dual use of any technology. Shiva is dancing. In all, single and particular -- that is, absolute -- frequencies are about resonance and potential destruction. Multiple frequencies are about creation as well as destruction, and we'll get into that next month. It turns out the work of creation is not about particular frequencies.

DSSP Topics for November '09

An electron spreads in more ways than one, but what happens when it shrinks

We are at the last part of our triple topic on electrons. In the last two months we said the electron spreads and shrinks, and does so over and over. There is also a linkage to the Sphinx here, but that part is somewhat mysterious. It is not a mystery per se, but the idea of a Sphinx is that it speaks through the alchemical language, which, though strange, is used for the most general understanding of the electron's behavior. So now the electron can become a cloud, and, as we like to say: Where is the energy? The electron spreads without energy input, and you don't have to believe that. When an electron is accelerated we indeed add energy to it, and the electron retains such moving energy -- that is, the electron acquires momentum. But the electron also spreads regardless of whether or not it is being accelerated.
When a moving electron hits a target, it gives up its energy as heat and it also shrinks. So what is the big deal about the electron behaving as a particle? When an electron is decelerated linearly, it gives up linear energy. When an electron acquires the shape of an orbital in an atomic (or molecular) structure, and when such an electron acquires another orbital, a circular (2D) energy exchange takes place. The linear energies carry rational or irrational numbers, while the circular energies carry transcendental numbers. So that's the first difference. But what if the electron can carry 3D energies? This becomes very interesting, and here is where Tai Chi comes in as well. If you like the ancient Egyptian stuff, think Shu and Tefnut and their respective symmetries. And then you can fully answer the question of this topic.

DSSP Topics for October '09

An electron spreads again and ..
 A) Goes through two slits simultaneously
 B) Wait a minute, an electron cannot part into two

Last month we claimed that an electron spreads. It has to, because the electron has position uncertainty (Heisenberg), and it can then form an orbital as well. In an orbital the electron has odd (point) symmetry. In a two-slit experiment we are working with a free electron, not an atomic electron. Either way, an electron can spread. Two slits are closely spaced, and an electron is accelerated toward them. (We showcase the two-slit experiment in the Dec '08 DSSP topic.) When the electron encounters the two slits in its spread-out state, it enters the slits as one entity that can be visualized as a cloud enveloping the partition between the slits. Left alone, the electron will form self-superposition (some say self-interference). At this point the scientist just scratches his head, because the electron was never shown to split into two -- and then they just ooh and aah about it.

But the electron does not have to split into two as it enters the slits, simply because it remains spread out as a cloud and exists as one entity. Dumb scientists the likes of Feynman call on the intractable "a point electron travels in many and all possible ways" description to explain self-superposition, but that is only because they hold on to the electron-is-a-point-at-all-times proposition. Don't bother with the Bohrs and the Feynmans. Instead, imagine the electron spreading and conforming to certain geometric structures. Symmetries play an important part. Last month we suggested you sign up for Tai Chi. It's a big hint. Maybe the electrons, being able to spread and shrink again, can work with you in the way you want them to work with you.

DSSP Topics for September '09

An electron spreads and ..
 A) Goes on spreading
 B) Wait a minute, an electron does not spread at all
 C) An electron also shrinks

And so you thought you knew everything about an electron: it is small, it has a charge, it has mass, it can be managed in a vacuum like a little ball. It lives in an orbital of an atom and does not radiate energy despite its charge. Then come Planck, de Broglie, and Schrodinger, and an electron becomes a wave. Then comes Bohr, and an electron is forbidden to do this or that. In a nice piece of political science, Bohr's electron is not allowed to become nonlocal. In a nice piece of science fiction, an electron acquires an enigma of mixed-up complementarity. Fortunately, an atom goes on just as before, not caring if some dumb scientist forbids the electron to do this or that. The scientists look at a piece of rock and it just stays there. A rock also does not spread.
So now the scientist is sure that an electron, having some mass, cannot spread. To be fair, they will stake their reputation on that. But you and I know there are macro and micro domains, and before you can put a rock together you will be working with waves -- and waves are nonlocal. Yes, the electron can become a wave, and as it does, it also becomes nonlocal. There is more to an electron. It can spread in a linear or a circular geometry. In a linear world the electron produces superposition (aka interference), but in order to do that the electron needs to spread and go through the dual slits as one entity. In a spherical geometry the electron also spreads in an orbital, but then you also have to work with the squaring of a circle, as the linear energies of a photon need to be reconciled with the curving energies of an electron. Finally, an electron can also become localized, and we get a nice dot on a screen when we measure its position. Don't worry about the scientist. Sign up for a Tai Chi class and put electrons to work on your health and strength.

DSSP Topics for August '09

A photon of light hits the half-silvered mirror with Up polarity (0°), and the photon ..
 A) Splits: half goes through (transmits) and half rebounds (reflects). Polarity inconsequential
 B) Branches into two half-waves while remaining interconnected -- but -- polarity changes at the reflected branch
 C) Sometimes it transmits and at other times it reflects. Polarity changes only at reflection

Photons behave counter-intuitively, but only if you think of them as mass-carrying particles. If you think of a photon as a wave, things are easier. If you think of the photon as a wave that cannot be physically split into two parts, things are much easier. If you are slowing down at this point because your well-earned money bought you a different and inferior education, you want to review our analysis of the photon entering the half-silvered mirror in our Feb 2007 DSSP Topic (where the question of a photon's polarity is not consequential to the photon's detection). This month we want to look at a photon's polarity, because a change in polarity is important when we recombine a previously branched photon and produce photon self-interference. You are ahead of the game if you know a photon cannot be split but only branched into, say, two half-photons. You are really ahead if you use the term self-superposition instead of self-interference, and you've probably read our QM Primer.

Tracing the photon as it enters from the west is easy. It has Up polarity -- that is, its polarity is 0° -- and at the center of the illustration below the photon encounters the half-silvered mirror. The transmitted branch continues on to the east (the 'A' branch in the illustration) while the reflected branch goes back to the west (the 'B' branch in the illustration). So the illustration shows the photon after it has been branched.

Single photon branching

Illustration filename: photon-transmit-reflect-in-half-mirror.gif

What happens is that the transmitted half of the photon does not change its polarity and continues with Up polarity. The reflected half of the photon, however, branches and its polarity rotates 90 degrees counterclockwise. It is CCW because we want to predict which way the photon's polarity rotates, and we use the thumb of the left hand to align it with the incoming (unparted) photon.
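Here is a minimal amplitude sketch of the branching, under the common textbook convention that transmission keeps the phase while reflection rotates it by 90 degrees (a multiplication by i) -- which parallels the polarity rotation described above. The convention, not the site's diagrams, is the assumption here:

```python
t = 1 / 2 ** 0.5        # transmitted branch: amplitude kept in phase
r = 1j / 2 ** 0.5       # reflected branch: amplitude rotated 90 degrees

# one pass through the half-silvered mirror: the two branches carry
# equal halves of the detection probability
print(abs(t) ** 2, abs(r) ** 2)            # 0.5 0.5

# branch twice and recombine (equal path lengths): the two 90-degree
# rotations steer the whole photon to a single output
port_a = t * t + r * r                     # transmitted twice + reflected twice
port_b = t * r + r * t                     # one of each, either order
print(abs(port_a) ** 2, abs(port_b) ** 2)  # 0.0 1.0 -- it recombines whole
```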
The Prize Behind a Proven Experiment

You might have heard of a photon that is branched twice and then recombined into one. This experiment is well established by now, and you are in a position to understand this photonic phenomenon. You will never again think of a photon as being subject to gravitation and other such nonsense. Below is the illustration of a photon that is branched twice and made whole again. If you need additional details on this, go to our Feb '05 DSSP Topic.

Photon rotation = polarization

Illustration filename: photon-split.gif

DSSP Topics for July '09

The future gets fuzzy, but ..
How you can tell the future. Can you change the future?

Time is always a dependent variable (see the last two months' topics below). We cannot enforce time as a variable the way we can the variables of length or distance -- that is, we cannot set some particular time and subordinate other variables to it. If we want to see the future, we want to pick independent variables for the forecasting, the idea being that independent variables will hold on into the future. We cannot pick time to do forecasting, because time always subordinates to other variables. The scientist might disagree, because he or she has the equation for a planetary orbit and, by plugging in some future time, can tell now where the planet will be in the future. However, the equation the scientist uses is a particular solution that links specific movement to a repeatable path, and time is but an overlay that lines up with the tractable forces, distances, and masses. Similarly, we could use time to predict the start of the baseball season, but this is only because the repeatable system is in place (a solution exists already). In the middle of the winter we cannot declare the start of the baseball season and expect the weather to comply, for time is not enforceable. We could, however, direct a plane to particular coordinates and enforce such a decision.

Independent variables can also be called the leading or "strong" variables. Such variables will prevail in an encounter with a dependent variable. In physics, momentum will prevail in a collision and the path will subordinate: bodies in a collision will change their paths. Similarly, variables such as inflation have strong leadership value, and we might need even stronger variable(s) to keep inflation in check.

Okay, the future

To forecast what is going to happen in the future, you need to take all variables, the full infinity of them, determine their leading and following values, and let them evolve on ahead. Easier said than done: we will need infinite superposition (available in the quantum mechanical environment for physics applications). Consider that your mind -- more specifically, your right brain -- can deal with infinities. Also consider the pyramid as yet another computing structure. [Of course, consider buying the Quantum Pythagoreans book.] As the variables interact there is some delay, and this is the future time. Such delay is not easy to quantify, and so we may know what happens but not necessarily when. The difficulty is that the leading and following variables change their strength as they interact, and the future time is highly nonlinear. So now you know the future and you don't like it. And so you decide to improve it. The problem is that you do not know the timing, and it is then difficult to impart changes now while guaranteeing a better outcome. You are also dealing with infinities, and with the possibility that in the future time you won't be around on the physical plane.
DSSP Topics for June '09

Time can be manipulated
Time can be zero
Can recorded events be erased?
Can recorded events be corrupted?

Time travel is not possible (see May's topic below), but there are things we can do with time. Time is always a derivative -- that is, time always follows other variables. Moreover, even though some variables can reverse their independent-dependent modality, time cannot become an independent variable. An independent variable can also be called the leading variable. Economists understand this pretty well, but physicists likely never will. This is because physicists confine their math to the arithmetic of algebra, and the equal sign allows them to reverse the relation. And so the physicists are free to reverse reality into nonreality and don't think much of it, because they defer to their inadequate tools.

Time can readily become zero

In the quantum mechanical environment the instant action makes time zero, because time subordinates to other variables. If an event happens instantaneously, then time is zero and time has nothing to say about the event. Time, in and of itself, does not enforce anything. Superluminal space travel calls on quantum mechanics at the macro scale and works with variables that interact instantaneously.

Time is defined by orbits

Periodic movement happens through orbits. This is a movement having symmetry about a (geometric) point. [Point symmetry is masculine.] It is then more appropriate to say that time is about the period. We can keep adding periods to get time, but such time is specific to the particular orbit it issues from. Planetary orbits are generally stable, and the time derived from them is not possible to corrupt.

The memory

Ether can be energized, and the energy, being a function of frequency, can range from zero on to infinity. All knowledge is stored associatively as energized ether and can technically stay that way forever. All knowledge is inherently formed via associative linking with other knowledge and can also be read associatively. Knowledge cannot be erased, but because other information can be linked to it post factum, memory can appear to be corrupted. Time is but one of myriad variables linked within the interconnected web of energized ether. When knowledge is read, any variable can be followed, and this includes time. If time is not followed -- that is, when time is not in focus -- time becomes irrelevant, and different time periods can appear as overlays.

To get the truth

To corrupt knowledge, a vast amount of contradicting or misleading data is linked associatively with some knowledge. Reading such knowledge will then read all the data, true or not. Yet, just as for any good detective, other associative paths exist that lead to the truth. The corruptor would need to deal with an infinite quantity of associative paths to completely mask or corrupt the truth. The corrupting data is added at a later time, and so time is important when searching for the truth. Next month: using variables to look into the future.

DSSP Topics for May '09

Time travel is possible:
 (a) physically
 (b) mentally into the past
 (c) mentally into the future

Time travel is easy to get excited about, and this hasn't been lost on Hollywood. Why, even the so-called scientists are joining the act by producing equations, as if equal signs and multiplications were sufficient proofs for time travel.
The basic premise of time travel is that time is a bona fide dimension -- that is, the value of such a dimension could be increased or decreased at will, much like a dimension of spatial distance. The idea behind a dimension is that it is an independent, or leading, variable: by establishing a value for such a variable, the other things will simply fall in place in accord with our value. So, if you wish to enforce the variable of distance, you punch it into the computer and the sensors will tell you when you get there.

But time is not an independent variable. Time always issues from other variables and can be zero during the nonlocal events we find in quantum mechanics -- at the macro scale, no less. There is no way we can lock onto time and make other things subordinate to it.

So what about the past

The memory comes into play here. You would first need to accept that all things that ever happened are in storage. By accessing the storage you will be able to read or "see" the past.

So what about the future

Here you'll need to differentiate the leading and the following variables. The leadership of the leading or "strong" variable will continue into the future for some time interval. Yes, you can see the future, but only to the extent that you follow the leading variables, and even then you will be limited by the dominating reach of the variable -- for every variable fluctuates in its strength, and thus its influence diminishes into the future. Of course, variables interact and influence each other, and then the forecast is really an art. If you think time travel is physically possible and that, for example, we are being visited by beings from the future, you want to start with the fundamental assessment that time is not a dimension to begin with. Time isn't and never was the 4th dimension, and such a claim is:
 1) a fantasy
 2) wishful thinking
 3) an attempt at corruption
 4) any of the above

DSSP Topics for April '09

The ancient Egyptian texts are:
 (a) religious texts
 (b) magic texts
 (c) physics texts

The ancient Egyptian texts, aka The Book of the Dead, seem to cater to the afterlife of people, and especially to the Pharaohs. But they also spring to life talking about the sun and the waters. The complexities of the various gods include human and animal forms, supporting/protective and detracting/dangerous relationships, and plain aspects -- all put together with stories and adventures. And the props: the ankh and the scepter being the most prevalent. You would think the snakes and serpents are in there just to scare kids, but once you start to see these entities as waves, everything changes. You take the props as geometric forms, and then, could it be -- well, yes, the ancient Egyptian texts are about physics, and one could also say that the quantum mechanical world has magic in it, too. The interesting part is that the alchemical form of the stories makes for a nice read, and you really do not feel you should try to explain how you got to this or that conclusion.

Da Sphinx

What makes this a difficult decode is that the Sphinx is not specifically mentioned in the text. But it has lion-type characteristics, and the god Shu is at times depicted as (or with) the hind part of a lion. Perhaps not the best part, but there he is. Then there is his sister, the goddess Tefnut, who has the form of a lion head (with a wavy mane). So now you put them together, but you know you are not done yet, for Shu is male and dry while Tefnut is female and wet.
But you also know you are on the right track, because both Shu and Tefnut are called the twin lions having one soul. Is there such an entity in physics? Yes: a free electron. So now you have some fun, because in the text Shu and Tefnut take a trip and get lost. Ra sends an eye after them, and they are found, brought back, and get married -- and the human race can arise after that. But of course, an electron must be made in some fashion, and it could have happened inside the Sphinx, at least initially. Oh, the eye is the 0D geometric point in which the energy component of Tefnut is joined with the charge component of Shu. In another story Tefnut has a problem and runs away from Shu to Nubia. All hell breaks loose and disasters are happening. A good guess is that we have antimatter on our hands, because the two components of matter are separated. Thoth and Shu step in, and after finding Tefnut a reconciliation takes place -- that is, antimatter can be healed. (Music and baboons play a role here, and you may get to like the role of baboons after this.) Now you can leave the text and have even more fun with the odd (Shu) and even (Tefnut) symmetries. I am posting this April 1, so that those who don't get it will have a way out. I stand by it any day. Yes, we have a page on alchemy.

DSSP Topics for March '09

The Tarot card deck has four suits, so there

The Tarot has the swords, wands, pentacles (coins), and cups suits. Divination using the Tarot has been with us since the 14th century. Some believe it, some don't. Some think it works even if you don't believe it. So there is a mystery, and mystery is something everybody likes. Demystifying something is okay, but would you want to turn mystery into something mundane? Fortunately, when it comes to explaining the Tarot we run into infinities, and the mystery remains a mystery -- but you might get more out of it.

Ah, the geometry

So now: the sword has a point, the wand is a stick of a line, the pentacle (coin) is flat, and the cup holds a volume. Are we, really, talking about the zero-dimensional point when it comes to the sword, a one-dimensional line in the case of the wand, a two-dimensional area for the pentacle (coin), and a three-dimensional volume held by the cup? Could it be as simple as the 0D, 1D, 2D, and 3D -- the four dimensions of freedom? Even if that is so, does the geometry in these four dimensions dominate our environment to the extent that geometry rules? Better believe it. When we perceive the various dimensions, the brain makes unique contexts. Some things work in 0D, and that is when the spin comes in. In 1D the even symmetry and linear motion come up. In 2D the energy arises as squares, and the orbits of circular motion begin as well; and in 3D we start to reach out and diffuse and penetrate the environment.

The Pythagorean Tetractys is a triangular numeral ten. It has ten dots: one on top, two below, three below that, and four dots on the bottom. From the very beginning, around 500 BC, the Tetractys dots stood for a point (0D), a line (1D), an area (2D), and a volume (3D). So there. Take it from here.

Note {March 8, 2009}: And so it happened that I picked up the book Magic of the Celtic Otherworld by Steve Blamires, and there, under 'greater magical weapons,' are pictured the spear, sword, shield, and cauldron. Well, happy workings!
DSSP Topics for February '09

Handedness reversal is no big deal
The even symmetry of the mirror is a big deal

Most of the Internet answers on the left hand & right hand reversal in a mirror are okay, but they tend to get wrapped around the axle in the blogs. Last month we kicked off this topic with a suggestion that thickness has something to do with it, because for a purely 2D thing -- such as the up/down and left/right -- there are no problems and no reversals. So the reversal is in the depth, that is, the thickness, and that is the 3rd dimension. As you move your hand away from you (North), the hand's image in the mirror moves toward you (South). So far so good, and here is where the other Internet sites end.

[Image: left_right_hand_reversal.gif]

Variant and Invariant

Now you bend the fingers of your right hand, and the counterclockwise movement of your right hand's fingers becomes a clockwise movement in the image. If you stick your thumb up, you know the Up is invariant under mirror symmetry -- and now your right hand becomes the left hand in the image. You don't have to get bogged down with the "how do you know which hand is the right hand" kind of thing. In 3D the left and the right handedness are uniquely and absolutely differentiated, and then you declare which hand's thumb is pointing Up -- and now you are consistent throughout the universe because you know your x--y--z's.

A math guy can see the even symmetry as the even function when placing the axis of symmetry at the mirror -- shown as the dashed line in the illustration (usually vertical). But if you are really good you can put two and two together, for the even symmetry is about energy. The construct of symmetry has priority, and then you are working the feminine. Happy workings.

DSSP Topics for January '09

Your left hand becomes your right hand in the mirror and vice versa

I picked up a book titled, appropriately, 'Mirror.' A nonfiction piece. So I went through it to find their explanation for this month's topic but, while the book was about just about everything you could imagine, the left hand -- right hand reversal was not in there. Then I came across a related article, but that one did not explain it either. It concluded with '.. it depends how it is defined.' Well, I just don't need another theory trying to prove that something cannot be done. But why would this simple topic be so difficult?

Background s'more

You might think there is not much to explaining this hand-reversal thing if you are used to seeing images upside down in the "old" film cameras. In the new digital ones you get it right, right on the display, so why is a mirror doing this? It may be worth your while to figure this out by yourself -- it sure makes for a great mental exercise, and symmetries are big in atomic physics and free energy, too. If you got to this point you decided to read on, but it could get darker before it gets easier.

We understand that, in a camera, the 'up' and 'down' get reversed on the focal plane, and when you move your hand up, the hand in the image moves down. Similarly for the right and the left: through the camera lens your movement to the right becomes a movement to the left (because the image passes through the point of focus). However, with the mirror it gets stranger. When you move your hand to the right, the hand in the image also moves to the right. When you hop up in front of the mirror, the image also hops up.
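Before going on, here is a minimal coordinate sketch (my own, in Python; the vector names are just illustrative) of what the mirror actually does. Only the depth axis flips; up/down and left/right are untouched, yet the handedness of the coordinate triple reverses:

# Sketch: a mirror in the x-y plane maps (x, y, z) -> (x, y, -z).
def mirror(v):
    x, y, z = v
    return (x, y, -z)

def triple_product(a, b, c):
    # Sign tells handedness: +1 for a right-handed triple, -1 for left-handed.
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

right, up, toward_mirror = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(mirror(up))             # (0, 1, 0)  -- up stays up
print(mirror(right))          # (1, 0, 0)  -- right stays right
print(mirror(toward_mirror))  # (0, 0, -1) -- only the depth reverses
print(triple_product(right, up, toward_mirror))                          # 1
print(triple_product(*(mirror(v) for v in (right, up, toward_mirror))))  # -1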
So, if the up and down do not reverse, and if the left and right do not reverse either, how come the watch you are wearing on your left hand is being worn on the right hand of the person in the image? You always wear your watch on your left hand! How come the up and down are normal but your left and right hands reverse? I mean, taking the mirror back to the store is not going to solve this.

You might have heard that a mirror symmetry rotates about an axis. And so you might get an idea: if up and down do not reverse, you could lie down horizontally in front of the mirror, for if your head and your toes do not reverse, then your outstretched arms in the up and down direction should become normal again. Go ahead and try it. If that doesn't work, stop by again and we'll finish off this topic in February.

Okay, here is a hint: If you cut out an outline of your hand on a piece of paper, would another person be able to tell if you used your right hand or your left hand as the model? Another one: If you left pencil markings on the paper after cutting the outline of your hand, would the other person be able to tell which hand you traced?

Note {Jan 30, 2009}: The symmetry about a point is the odd symmetry, and metaphysically it is masculine. A camera lens has the odd symmetry, in 3D no less. The mirror symmetry is the symmetry about a virtual line (axis), and metaphysically it is feminine, in 3D no less.

DSSP Topics for December '08

Light is energy
Light is pure energy
Light is an even function

No disagreements that light is energy. Light can be converted to heat, electrical energy, and motion. The conversion of light, however, calls for the light's (photons') absorption, and then the photon is gone. You cannot get photonic energy without the photon's absorption, also called reduction or collapse. If we are in agreement at this point, then there is no problem in agreeing that light cannot impart pressure at reflection, for that would mean we get energy while the photon goes on its merry way without regard to the conservation of energy.

Even function

Last month we just said that any and all energy is an even function. What that means is that the energy is evenly distributed about the axis. Yeah, but can we prove it? But of course. Young's dual-slit experiment got many people excited about the wave nature of light. Presently there aren't too many problems with this experiment, particularly since light has no mass and photons can part and then self-superpose. Since then the phrase 'interference pattern' has slowly yielded to 'superposition pattern.'

[Image: Young_shield_dual_slit_normal.gif]

Asymmetric blocking is also a part of the experiment, and it shows some deep properties of light. Young inserted a partition and was astounded to discover that the shield blocked the superposition pattern from both sides. Now what.

[Image: Young_shield_dual_slit.gif]

There are no easy interpretations of this phenomenon, but we gave you some pointers in the title and in last month's topic. Light is an even function and it will always remain an even function, in spite of anybody's attempts at warping light through optical or other means. Light will always maintain its even symmetry (symmetry about an axis) no matter what. The shield can reduce the particular photon in its entirety, or else -- if the shield does not reduce the photon -- the photon instantly reworks its wavefunction while maintaining its even symmetry construct.
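As a numerical aside (my own sketch; the cosine-squared profile is just a stand-in for whatever the actual photon profile might be), an even function integrates to exactly equal halves on either side of its axis of symmetry -- which is the 50-50 split invoked next:

import math

# Sketch: numerically integrate an even profile f(x) = cos(x)**2 on [-pi, pi]
# and compare the two halves about the axis of symmetry at x = 0.
f = lambda x: math.cos(x) ** 2

def integrate(f, a, b, n=100_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h  # midpoint rule

left  = integrate(f, -math.pi, 0.0)
right = integrate(f, 0.0, math.pi)
print(left, right)  # both ~pi/2: the two halves carry equal "energy"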
Nice to know the even symmetry is a construct that is invariant during optical (and other) photonic interactions. The surprise in all this is that the axis of symmetry serves to halve the photon's energy when the photon is absorbed. Well, one half of the photon's energy is imparted one way while the other half goes the opposite way, and we get to conserve momentum when light's energy transforms into the moving energy. Didn't Newton say something about the equal and opposite..? Happy New Year.

DSSP Topics for November '08

Is there more to energy than 'more' or 'less' energy?
Can we generalize energy?

What is energy, really

Energy is something that moves things. It is then easy to see energy as "the more of it you have, the faster you can move." We know that energy costs money, and we can buy it in the form of oil, gas, or electricity. So far, it is deeply ingrained in our brains that to get energy we need something that is consumable. Not only that: we consume materials to create energy, but we always get heat in the process as well.

Energy is about movement, and it is the force that feeds the movement. So, where do you find the force, or how does the force come about? You can take something that is moving already and get energy from it by slowing it down: a running stream or a spinning planet, say. Or you could figure out how the planet gets to spin in the first place and engage that mechanism.

It turns out that the force appears when the organization increases. This seems ambiguous, but the idea is that increased organization contains energy in the form of the virtual energy. The good part is that heat does not result from the movement that increases organization. Energy can thus be harnessed from superior knowledge. Matter has the innate knowledge for increased organization, and this is based on the Pythagorean Tetractys. Alternatively, energy can be had by decreasing organization, where the upside is that not only oil or gas can be used as consumables. The downside is that heat will result whenever consumables are used -- that is, when matter with a higher organization changes to one with a lesser organization.

Organization synonymous with intelligence

Energy always exists as an even function. Energy is always symmetrical about an axis, and its reduction will create forces. A photon of light is the most obvious example of an even function that is energy. Since electrons can become virtual, they can also acquire even function characteristics and can also be used for energy extraction. The universe is full of virtual and energized electrons. So how do we manifest the energy? The axis of symmetry could be a stick, an antenna, a post. You will also need to understand rotation and orbitals, which is really about the symmetry about a point and the squaring of a circle. Happy workings.

DSSP Topics for October '08

An irrational number is not a real number
Even if that is so, what's the big idea?

An irrational number has its decimal fraction going to infinity without any of the numbers repeating as a group. The most common example of an irrational number is the square root of two. It is well established that a ratio of any two integers will not create an irrational number. That is, starting with two finite numbers and placing them in a ratio will not result in an irrational number. Put another way, if you have two finite numbers, their ratio will also be a finite number.
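Here is a quick numeric illustration (my own sketch) of the claim just made: the decimal expansion of any integer ratio eventually repeats as a group, which is exactly what the square root of two refuses to do:

# Sketch: decimal digits of the fractional part of p/q via long division,
# watching for a remainder to repeat. Once a remainder recurs, the digit
# group between its two occurrences cycles forever.
def repeating_block(p, q):
    seen, digits, r = {}, [], p % q
    while r and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(r // q)
        r %= q
    if not r:
        return "".join(map(str, digits)), ""          # expansion terminates
    i = seen[r]
    return "".join(map(str, digits[:i])), "".join(map(str, digits[i:]))

print(repeating_block(1, 7))    # ('', '142857') -- 1/7 = 0.(142857) repeating
print(repeating_block(1, 8))    # ('125', '')    -- terminates
print(repeating_block(22, 7))   # ('', '142857') -- any p/q settles into a cycle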
Along comes this guy Dedekind, and he goes about proving that one can cut any distance between two numbers in half, and that if you were to go about doing this forever, you would eventually reach any number, including an irrational number. His conclusion is that irrational numbers are really rational -- that is, real -- numbers. The sad part is that math textbooks today all classify irrational numbers as belonging to the domain of the real numbers.

What's wrong, or, who cares

Technically we could object to the incorrect classification and duke it out on the turf of academia. This site, however, is also about free energy, and it is then important to show that the benefits of irrationals have something to do with free energy. Academia, then, has nothing to do with it, and they may as well print any nonsense they want.

The Dedekind proof is invalid because it becomes true only at infinity. The proof is "right" but only at infinite time in the future, which is never. This is very much the same thing as when a computer calculates the square root of two, for example. We all agree that the computer can technically produce the square root of two, because the machine keeps spewing out digits that all belong to the square root of two. However, no matter when the machine stops, the result is not the square root of two, because the square root of two has an infinite number of digits and the computer can produce them all only at infinite time in the future.

Geometry produces the square root of two in a finite number of steps (in finite time) and does so exactly.

[Image: Square_root_of_two.gif]

In the illustration above, the distance between the two zero-dimensional points is exactly the square root of two. Geometry is once again superior to arithmetic, but the real bottom line is that you need geometry, with its inherent exactness, if you ever wish to capture the wave.

DSSP Topics for September '08

Brown's gas vs (almost) everything else

Background (Continued from August and July '08)

Brown's gas, so far, is a mix of hydrogen and oxygen made from water. On their own these two gasses burn or implode back to water. The Brown's gas torch is interesting because the flame is relatively cool and the fire shows as a plume going into the nozzle even though the gas is going out. Different materials react differently with the flame: for example, tungsten can be melted, while a piece of brick can acquire a hole -- sometimes.

What is this thing

Brown's gas can be energized even more while in its gaseous state. The way Stan Meyer worked this gas was by adding energy to it via particular light frequencies; he then mixed it with the atmospheric air before injecting it into the car's cylinders. The idea -- apparently a successful one -- is to get the Brown's gas to burn with the other gas by explosion rather than implosion. He even recycled some of the exhaust gas into the intake to keep the Brown's gas in the expanding mode, so to speak.

Is that all? No. As the Brown's gas is being formed, excess electrons appear. This is part of the free energy "equation," and it moves Brown's gas into the fantastic category. Here is also the point where you are on your own, at least in the foreseeable future. If you want to pursue Brown's gas to its full potential, you may have to become the Fool of the Tarot: a person who goes on, looking kinda strange but feeling there is a way there somewhere -- that there are indeed higher forces helping you along the way.
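Backing up to the October '08 item for a moment: the point about the machine never finishing can be made concrete (my own sketch below). However many digits you keep, the truncated decimal squared never equals 2 exactly:

from decimal import Decimal, getcontext

# Sketch: square the decimal approximation of sqrt(2) at several precisions.
# No finite truncation ever squares back to exactly 2.
for digits in (8, 16, 32, 64):
    getcontext().prec = digits
    approx = Decimal(2).sqrt()
    getcontext().prec = 2 * digits + 4   # enough room to square exactly
    print(digits, approx * approx == 2)  # always False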
DSSP Topics for August '08

Brown's gas vs petroleum
 One has hydrogen and oxygen
 The other just hydrogen and a lot of excess baggage

Background (Continued from July '08)

The chemical composition of Brown's gas continues to be debated, and there are not many who would go for H4O2. The majority sees Brown's gas as short-term-stable monatomic hydrogen H and oxygen O, rather than the molecular forms H2 and O2 found in the atmosphere. When Brown's gas burns as a flame it turns to water. Brown's gas is also called hydroxy or HHO.

What is this thing

Brown's gas is made from water and is added by many to the air intake of their car. With as little as one half liter per minute, the car runs smoother and emissions go way down. In our mind the fuel burns cleaner -- that is, more thoroughly -- and there is then, literally, more bang for the buck.

The linear approach

If a little bit of Brown's gas helps, let's put in more. Maybe even run entirely on Brown's gas, just like Stan Meyer. Now the engine starts to run rough -- but -- the ignition timing can be changed to smooth it out again. Sometimes you may look at the timing, only to find out that for the smooth-running engine the spark is firing way off the top of the cylinder's position, and that just does not make sense. Sometimes the Brown's gas explodes and at other times -- it implodes! Almost as if Brown's gas does what it can, and when it can, to help you run the car. Here is where the linear guys get amazed, because they really would not want to admit they are confused. For pure Brown's gas, the volume of the resulting water is much less than the volume of the Brown's gas, and so we can also have an implosion. To add to the mystery, the car also runs cool.

Can this thing be stabilized

Yes. There appears to be a range where Brown's gas -- produced via straightforward electrolysis -- gives consistent results, particularly if it is introduced into the engine in small quantities. Brown's gas intrigues the mind, and there will always be people tinkering with it. Brown's gas reacts dramatically with a spark or a flame but, because Brown's gas is made on demand under the hood, the setting off of Brown's gas (if it bypasses the safety "bubbler" to begin with) is confined to the area where the gas is produced. Even then, a safety plug blows off, and without damage to the container holding the water from which the Brown's gas is produced.

What do the scientists say

They are busy making hydrogen from -- you guessed it -- the oil, to be ready for the so-called 'hydrogen fuel cell' in the car. It takes energy to produce hydrogen from oil or gas. If the hydrogen does reach a fuel cell in the car via a pipeline infrastructure, the hydrogen fuel cell makes electricity to power the car but in the process makes a lot of heat. So much heat, in fact, that the fuel cell of "the car of the future" cannot power the car directly and is primarily used to charge the battery. The fuel cell of "the car of the future" also burns oxygen from the atmosphere. Brown's gas has both the hydrogen and the oxygen, and there is no net burning of oxygen from the atmosphere.

Back to Stan Meyer

This guy showed that Brown's gas can be produced at over unity. That is, the Brown's gas created through a particular on-board (on-demand) process has three or more times the energy that was put in to make it. This gets into the free energy arena very quickly, so don't expect scientists to embrace it.
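For plain DC electrolysis, the textbook Faraday bookkeeping (my own sketch; it says nothing about the energized states discussed in these entries) gives the hydrogen-plus-oxygen output per ampere, which is handy for sizing the half-liter-per-minute figure mentioned above:

# Sketch: ideal Faraday yield of the electrolysis 2H2O -> 2H2 + O2.
F = 96485.0          # Faraday constant, C per mol of electrons
V_MOLAR = 22.414     # liters per mol of ideal gas at 0 C, 1 atm

def hho_liters_per_minute(amps):
    mol_e_per_s = amps / F
    mol_h2 = mol_e_per_s / 2.0       # 2 electrons per H2 molecule
    mol_o2 = mol_e_per_s / 4.0       # 4 electrons per O2 molecule
    return (mol_h2 + mol_o2) * V_MOLAR * 60.0

print(round(hho_liters_per_minute(10), 3))   # ~0.105 L/min at 10 A
print(round(hho_liters_per_minute(48), 3))   # ~0.5 L/min needs roughly 48 A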
Yet the scientists can be, and likely will be, left behind once hydrogen-from-hydrocarbons becomes more expensive than the on-demand hydrogen from water. There are four keys: cost, safety, oxygen depletion, and heat. On all four (ac)counts, Brown's gas is the way to go.

Note1 {Aug 12, 2008}: A neat tradeoff in the near-term Brown's gas application is the catalytic converter. Even small amounts -- possibly as small as 1/4 liter per minute -- can take the pollutants to below the present standards and consequently dispense with the catalytic converter altogether. Technically, the heat energy produced in the catalytic converter is now moving your car. If you remember Smokey the Bear and his "You can prevent forest fires," you want to see the catalytic converter go.

Note2 {Sep 15, 2008}: Hydrogen can be imparted with additional energy, giving it additional and unusual properties. (For one, it changes the burn character of hydrogen.) Yes, there is a name for it, parahydrogen, but no ready explanation. My guess is that the electron's orbital does not enter the consideration, but that the core (proton) acquires additional vibrational frequencies and with them new fusion properties, which could conceivably aid in transmutation.

DSSP Topics for July '08

Brown's gas is:
 (a) Fractured water
 (b) Energized water
 (c) Just hydrogen and oxygen mixed together
 (d) None of the above

This guy Yull Brown is a Bulgarian-Australian-American (all Slavs are euphemistically called East Europeans) who generated and worked with this gas that -- through an electrical process -- bubbles up from water. Heavy set and with a heavy accent, Yull championed the gas for many applications, and it is then named after him. Brown's gas burns or explodes and in the end becomes water again. No problems so far. On its way to coming back to water, however, Brown's gas reacts with the surrounding atoms in a funny way. Brown's gas breaks down all kinds of hydrocarbons as it reacts (burns) with them.

Brown's gas is the first thing you run into when you want to improve the mileage on your car and lower the car's emissions. It's a win-win additive. That should have put heat on the automakers -- and it did -- but it did not set them on fire. It could have evolved into a nice little additive, but there is more to Brown's gas. What happened was that Stan Meyer from Ohio (heartland-American) took Brown's gas to the next level and built a car that runs entirely on Brown's gas. To make matters completely wacky, Stan's claim is that the energy used to produce Brown's gas will power the car and make more Brown's gas to power the car some more, while making more Brown's gas to .. the bottom line is 22 gallons of water coast to coast.

What is this thing

The first step is the spectroscopic analysis. Brown's gas is not hydrogen and oxygen mixed together, and neither is it fractured or broken-up water. It is an entity (molecule) of its own. It does not condense at room temperature. It bubbles up nicely through cold water but does not condense as steam would. It is heavier than air. Some say it returns to water spontaneously after a while, a molecule at a time. Having many unusual properties, Brown's gas is commonly used for specialized welding, but many of its properties are kept under wraps. All matter is computational in nature, and Brown's gas appears to be yet another gaseous state of water -- likely H4O2. (If it is H4O2, then its density should be between those of the carbon monoxide CO and carbon dioxide CO2 gases.)
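That parenthetical density claim is easy to sanity-check with the ideal gas law (my own sketch; at the same temperature and pressure, density scales with molar mass):

# Sketch: ideal-gas densities at 0 C, 1 atm scale as molar mass / 22.414 L.
V_MOLAR = 22.414  # L/mol at 0 C, 1 atm

for name, grams_per_mol in [("CO", 28.01), ("H4O2", 36.03), ("CO2", 44.01), ("air", 28.97)]:
    print(name, round(grams_per_mol / V_MOLAR, 3), "g/L")
# CO 1.25, H4O2 1.608, CO2 1.963, air 1.292 -- a 36 g/mol gas would indeed
# sit between CO and CO2, and would be heavier than air as the post says.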
Everyday water that is H2O gets to the Brown's gas mathematical solution by the addition of energy, and such energy is then recovered upon its return to water. But that still would not explain the over-unity claim of Stan Meyer. So here is where the zero-point energy comes in: as the Brown's gas is being created it absorbs extra energy from the surroundings to reach its new state, and consequently returns over-unity upon its return to water.

Not every process that generates Brown's gas is over-unity. The basic process uses your car battery's DC current and produces Brown's gas at some energy cost -- but -- this cost is less than the cost of gas. 20 to 50 percent mileage improvements have been reported and, in addition to a dramatic decrease in exhaust emissions, 20-50% is your net energy savings -- which also means that 20-50% of the energy does not end up as heat.

What else

Can we link Brown's gas to other things? If we accept Schauberger's results when he squeezes ("implosion" is his term) energy out of water or air, it appears that Brown's gas is present in small quantities all around us. It is also likely that Brown's gas plays a role in our bodies' metabolic processes when we convert food to energy. It is said we don't really drink beer -- we just borrow it. Well, we can say the same thing about water in general and Brown's gas in particular. Water is really a carrier for energy. Water, perhaps, can be seen as a form of a liquid crystal.

DSSP Topics for June '08

Some claim magnetic motors violate the conservation of energy and leave it at that
Some claim magnets can be made into self-running motors and don't leave it at that
Some make self-running magnetic motors and don't care one way or another

Much discussion has been had on the conservation of energy and why self-running magnet-based electric motors would not or could not work. Then YouTube appears and everything changes. You may not even want to watch the classical tube anymore (you know, the one where they serve you what they want to serve you). At first the discussion is about trick photography. Then some showcased products suddenly disappear. Instead of thinking that the trickster was exposed, the fundamental reality flip makes you think that somebody is getting rich on free energy.

Then a bunch of scientists make a video on zero-point (free) energy, and in their classical bombastic fashion they claim that one cubic foot of space has a lot of energy -- hey, they actually calculated it! The scientists do not speak about the inventors as pyramidiots anymore. (The term pyramidiot took hold after the pronouncements of a British aristocrat.) We begin to feel that if the earth were to flip its axis, the scientists could agree with it, but only after the fact. We file the scientists into the 'have degree -- will insult' category.

Magnets attract or push away. Forces being equal (really, equivalent), one magnet can be made to pivot and spin by a static magnet, but when the rotating piece comes around, it gets repelled and stops. So now you have to move the static magnet that did the pushing just a little bit away, so that the spinning magnet gets over that repelling moment -- and then quickly move the magnet back so it can do more pushing and speed up the spinning magnet a bit more. You may imagine some simple oscillating arm that would move in and out -- and we have a self-running motor. The logic here is that once the motion arises we have real moving energy, and if the oscillating arm arrangement takes less energy, then we have over-unity.
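The claim in that last paragraph can at least be written down as bookkeeping (my own sketch; the numbers are made up, and nothing here says a real magnet arrangement behaves this way -- it only restates the post's logic):

# Sketch: per-revolution energy ledger behind the over-unity argument.
def net_energy_per_cycle(push_energy, switch_cost):
    # push_energy: energy the rotor gains from the magnet's push each cycle
    # switch_cost: energy spent moving the static magnet out and back
    return push_energy - switch_cost

print(net_energy_per_cycle(1.0, 0.2))   # +0.8 per cycle -> the over-unity claim
print(net_energy_per_cycle(1.0, 1.0))   # 0.0 -> the conventional expectation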
The fundamental logic is that the movement gives us energy, while the implementation of the switching of a polarity could conceivably take no energy.

The Model

If the magnetic motors self-run (and many do), we do not need a model -- that is, we don't need a theory. Yet sometimes we need a model so that we can improve on what we've got. Saying that space is loaded with energy does not help us with the work, because it contains no mechanism and it is, therefore, no model and no theory. Saying that space has a lot of energy is the same thing as saying the ether exists -- it means nothing. So we want to look at how the magnetic phenomenon happens in the first place. We guess it has something to do with atomic orbitals and the flow of the charge that becomes symmetrically organized. But I would not leave the atomic core out of it. It is quite possible that the core formation and its gravitational interaction allow the unique orbitals to form, and that this is happening on an ongoing basis. If so, the permanent magnet owes its existence to gravitationally influenced, cosmic-scale behavior, and the "over-unity" energy is then an extraction of energy that exists in the larger context rather than being local to the magnet. You may want to cancel some of your science classes. The world is a-changing.

DSSP Topics for May '08

If a synaptic gap is a connection, what's the big deal?
What if the synaptic gap is not a connection?

The synaptic gap is, without exception, considered an on-off switch. Just like your electrical switch, the current or the signal flows or it does not. All books on neuron or brain workings [I've seen or heard of] take the synapse and use it as the switch to build more and more and more complex systems until we get the brain. The brain is just like the computer we all know and love. After Cajal's work on the clinical aspects of the synapse, Sherrington coined the name for it around 1900. So now the experts have a lot of fun counting the brain cells or synapses and comparing them to the gates on a computer chip. The phrase Artificial Intelligence comes up often, although to some of us it is more artificial than intelligence. We still do not know why the computer cannot figure things out by itself, even though the signals in a PC are a million times faster than the signals running down the axon. So the science writer puts it in the category of musings and, of course, they need more money for research to close the gap, so to speak.

Ah, bring in the quantum mechanics

The 21st century is upon us, and the brain cells could now have something to do with the quantum. The synaptic gap is measured, scanned, and analyzed. There are chemicals that can influence the gap in general and that can help people with mental problems (or healthy people to acquire mental problems), and it all fits the equation. Not only the equation about power, control, and money -- it also fits the scientific equation about the synaptic gap: quantum mechanics has no role to play, because the gap is too wide for quantum entities such as electrons to reach out and tunnel through the gap -- and thus make the closure of the switch. This is all very rational and proper, and this is the result of a broad scientific consensus from UTrue to UCon. The model of the synaptic gap-equals-switch is really about our own cultural bias, albeit from the 21st century.
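The "too wide to tunnel" statement the consensus rests on is a quick back-of-envelope exercise (my own sketch; the 1 eV barrier and 20 nm cleft width are merely illustrative round numbers):

import math

# Sketch: WKB-style transmission T ~ exp(-2*kappa*d) for an electron
# facing a rectangular barrier of height V across a gap of width d.
HBAR = 1.054571817e-34   # J*s
M_E  = 9.1093837015e-31  # kg
EV   = 1.602176634e-19   # J

def transmission(barrier_eV, gap_m):
    kappa = math.sqrt(2.0 * M_E * barrier_eV * EV) / HBAR
    return math.exp(-2.0 * kappa * gap_m)

print(transmission(1.0, 2e-8))   # ~1e-89 for a 20 nm synaptic cleft
print(transmission(1.0, 1e-9))   # ~4e-5 for a 1 nm junction, by contrast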
The gap is not an on-off switch, simply because every synaptic gap has a very unique and different profile and, moreover, the gaps have two different modalities of reaction: the chemical and the electrical. Now what. What if each gap intercepts but a particular -- that is, specific -- piece of information? Uhh, the info would then come in from the outside rather than through the brain's internal wiring. Telepathy? How unscientific can you get? Yep, this is no science for the scientists, but for the rest of us it makes extra sense.

DSSP Topics for April '08

The matter of gravitation is so simple it is complicated

There is a lot that has been said about matter. This thing can put a dent in your car, moves about in sort-of circles of the heavenly orbs, consumes gas on acceleration, and wears out your brakes on deceleration. It weighs a ton here, not even close on the moon. Matter attracts other matter in the act of gravitation, and it does so persistently; it readily warps any theory trying to explain the strange attraction: out of nowhere, across vast distances, instantly, silently. Even though masses attract each other, matter does not cluster in one spot. This, this thing attracts but does not cause jams. As much as we think about it, the collision is not the norm for the matter out there. Possibly the best thing we know about gravitation is that it is a core behavior, rather than the electron orbital, that deals with gravitation. Well, this does not fit in the background category.

So now

As far as we can tell, electrons do not get bothered by gravitation. Gravitation does not produce photons that are in the regime of the electron, and it does not ionize matter. Gravitation does not reduce electrons into one spot, because it does not interfere with the dual slit experiment. So, let's just take the electron out of the gravitational mechanism.

So now

What do protons have that electrons don't? Charge polarity, for one. For another, a significantly larger mass. But that is not the whole story, because neutrons are a component of the gravitational pull and neutrons do not have a net charge. If the gravitational force is formed by waves -- really, wavefunctions -- then the wavefunction must reduce to realize the momentum. But what would be the periodically reducing mechanism?

So now

A wavefunction is a virtual entity, and one of its characteristics is that it can span rational and irrational distances. The movement or the spreading of the wavefunction is infinitely smooth and (that is nice, but) there does not seem to be anything in the way that would reduce the wavefunction when bodies move away from or closer to each other. The smoothness, however, works well to recalculate the wavefunction as a function of distance via the geometric mean, for example, because we need the square root to compute the force.

So now

It seems that one way of figuring this thing out is that the core must be spinning. A spinning thing must deal with the angular vs. linear considerations, and then a quantization comes in that periodically reduces the wavefunction. This would also fit the creation mechanics for both the linear and the angular momentum. Yes, gravitation is not only about attraction.

DSSP Topics for March '08

Archimedes got to Pi, but did he get to the derivative?
Was the Newton-Leibniz duo really the first?

In the early 1700s, Newton and Leibniz fought it out over the priority of the invention of calculus. But was Archimedes there 2000 years before them? Besides, Archimedes is alleged to have written a book, The Method, which was lost.
We know that Archimedes was doing well applying the limit via his method of exhaustion, and he got beyond the Pi of the circle by calculating the parameters for the sphere, cylinder, and cone. A proposition could then be made that Archimedes discovered calculus -- way back in ancient times. Kepler also took a stab at calculus by estimating the volume of a beer barrel.

Then again

Besides taking limits to infinity, the critical juncture in arriving at the derivative (aka the tangent) of calculus is -- the even older and really ancient Pythagorean practice of rationing. Yes, the only way to get to the derivative is to put two variables in a ratio and then apply a limit to that ratio. It seems Archimedes did not put variables in a ratio. Kepler, for his part, did put variables in a ratio and did apply a limit to it, but he was working with the Fibonacci series (and got to the golden ratio); he did not do the same with a mathematical function in general. Both Archimedes and Kepler worked the curving distance in ever-smaller increments -- Archimedes on a circle, Kepler a bit more generally on a beer barrel -- but the idea of a tangent eluded them.

Who was really the first?

Leibniz published first, but Newton thought Leibniz took his ideas and jumped the gun. Newton circulated some of his papers among his trusted associates, but the real purpose was to get a broader consensus and maybe a few comments prior to publishing. Leibniz, on the other hand, had his process down pat and was comfortable publishing without a peer review. So they had a row, but with some distance it is apparent Leibniz got to the integral portion of calculus first. Newton was ahead on the derivative side, but only technically, for Leibniz published first. A derivative is sufficient when working the gravitational acceleration, but for the volume of a beer barrel you really need the integral. My vote is for the beer -- with a toast to Pythagoras.

Once in a while something is published, only for it to be announced that somebody figured it out earlier. One example is (the Czech guy) Mendel, who was able to get full credit posthumously, 30-some years after his publishing. Others may not be that "lucky," and so perhaps having a peer review is not such a bad idea.

Note {April 5, 2008}: Leibniz discovered that during a collision the direction as well as the energy is conserved. Leibniz did not get much credit for that, and today it is just called the vector law rather than, say, the Leibniz vector law or Leibniz force linearity. Yet Newton's gravitational law relies on vector linearity, because the gravitational forces are vector-added. I did not do any research on this, but if Leibniz formulated the vector law first, then Newton's claim to the universal gravitational law weakens considerably and narrows to the "square of the distance" component.

DSSP Topics for February '08

Can you split the photon's energy?

Last month's topic alluded to a particular mechanism carrying the energy of a photon. Okay. We also know that a photon can eject an electron from an atom. To make things more complicated -- but more interesting -- the ejected electron's energy is always less than that of the photon doing the ejecting. What's happening? To make things even more complicated -- but far more interesting -- we also made a case in our February '07 DSSP topic that a photon cannot be physically split. Can all these things be reconciled? But of course! To get something to move we need to conserve momentum.
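The recoil bookkeeping invoked next is the standard textbook kind, and it is easy to tabulate (my own sketch, with made-up rifle numbers):

# Sketch: momentum conservation in a rifle-bullet recoil.
# Total momentum is zero before firing, so it must be zero after.
m_bullet, v_bullet = 0.01, 800.0    # kg, m/s (illustrative values)
m_rifle = 4.0                       # kg

v_rifle = -(m_bullet * v_bullet) / m_rifle   # recoil velocity
p_bullet = m_bullet * v_bullet
p_rifle  = m_rifle * v_rifle
print(v_rifle)               # -2.0 m/s, opposite the bullet
print(p_bullet + p_rifle)    # 0.0 -- equal and opposite momenta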
To conserve momentum, two things must be moving (or rotating) in opposite directions with equal energy. This is easy enough when we consider the recoil of a rifle, or when moving from a small boat onto a dock without getting wet. Right off, a photon that ejects an electron must also impart the same and opposite energy to the core. The electron, then, cannot have more than 50% of the photon's energy. We are down to the last step. If a photon cannot be split, how could its energy be split -- as it must be if it is to move something (in the framework of momentum conservation)?

Absorbing vs Non-absorbing photonic interaction

The absorbing photonic interaction reduces the entire photon, and the photon's energy transforms to other forms of energy. In the non-absorbing photonic interaction, which could also be called the optical interaction, no energy exchange takes place. Once the photon is absorbed it is gone, and only its energy lives on in other forms. In the optical (non-absorbing) interaction the photon can be stretched and its shape changed, but its energy always stays the same.

So, how could a photon impart 50% of its energy to one thing (the core) and the other 50% to the other (the electron)? The photon is always an even wavefunction, and its energy is always symmetrical, in the 50-50 fashion, about its axis of symmetry. A photon's energy can be split 50-50, but only at absorption.

(Self-test:-) If you figured out that a photon cannot impart its energy when interacting with but one object, you are doing really well. Why, you might even be a Pythagorean.

DSSP Topics for January '08

The photon's energy is its frequency. If the photonic frequency were the same as the photonic shape, then we could change the photon's energy by reshaping it. What? Wheeere is the photon's energy?

So you've been told -- or perhaps you paid money to be told -- that the photon's frequency is its color and its energy. So far so good. Then they showed you a picture of a photonic shape, and everybody assumed the shape is the photon's frequency. But then the photon is sent through double slits and acquires a different shape. Then the photon is sent through three slits, or through a crystal, and acquires other and different shapes. The number of the photonic shape undulations changes in each case; yet the color -- that is, the photon's energy -- stays the same. What is going on here? You certainly want your money back if the teacher does not have an explanation. Chances are they don't.

[Image: photon_frequency_energy_shape.gif]

Where is the position?

The photon's undulations are not about the photon's energy but about the photon's position. When we reshape the photon through various geometries, we change the probabilistic distribution of the photon, for the photon is a virtual entity and exists as a nonlocal whole. Yes, this is not difficult to understand, even though the teachers hate to say the word virtual. And so it is the probability of seeing the photon in a particular spot that changes. Bouncing off a mirror changes the position of the landing photon as well, but with slits and other geometries each and every photon can also be stretched.

Where is the energy?

The photon can be stretched and undulated in an infinite variety of ways. But if the photonic undulations do not reveal anything about the energy, where is the energy? You may want to pose this question to a teacher before you pay for your next course. If you do not get a satisfactory answer, look up some of the math of Dirichlet and consult the illustration above.
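Since the January '08 topic leans on frequency-equals-energy, here is the standard Planck arithmetic (my own sketch) showing that only the frequency -- not any slit-induced reshaping -- enters the photon's energy:

# Sketch: photon energy E = h*f, with f = c / wavelength.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
EV = 1.602176634e-19 # J per electron-volt

def photon_energy_eV(wavelength_nm):
    f = C / (wavelength_nm * 1e-9)
    return H * f / EV

print(round(photon_energy_eV(650), 2))  # ~1.91 eV, red
print(round(photon_energy_eV(450), 2))  # ~2.76 eV, blue
# No term here for the number of undulations behind a slit: same color in,
# same energy out, which is the point made above.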
To understand it, you want to ask the question: "If the energy stays the same even though the shape can be shaped and re-shaped, what is the mechanism that would keep the energy the same?"

DSSP Topics for December '07

If a line is one-dimensional, an area two-dimensional, and a volume three-dimensional, is a zero-dimensional point the fourth dimension?
Is 0D just a point of no interest?

Somehow the zero-dimensional point has receded into the background. Some writers like to call time the fourth dimension, as if the zero-dimensional point is too small to be mentioned. Then again, we could not construct a line or a circle without a point or two. So, why is a point relegated to obscurity in Western science? Well, the reason the zero-dimensional point is ignored is that the 0D point is not easy to understand -- and that the reductionists are running amok in Western science. In Eastern science the point is not only some geometric abstraction; it is right in your body. The Dantien (Chinese) or Hara (Japanese) has its place just below the navel, and it is a geometric point that has its own application setting.

Rotation. You need a point to rotate about. If 99% of all moving (real) energy in the universe is in the form of rotation, a point is used all over. It can be said that a point must be used for orbits. If a linear dimension (is straight and) allows the freedom of movement in a particular direction, then a point is a pivot that allows movement about it to create rotation and orbits. And if a point allows the circular motion to arise, isn't the point -- that is, the 0D -- the fourth dimension? You bet.

Freedom of movement is just that. We can move there if we can, should, or ought to. That is what 'independent' means when we say the independent dimension. Time is not an independent variable, nor is it a dimension. Time always depends on other things, and that makes time dependent. Yep, time can never be an independent variable.

The Pythagorean Tetractys is about the 0D, 1D, 2D, and 3D contexts of geometry. Together they are the tetra or the quad -- the four dimensions of space. Happy New Year.

DSSP Topics for November '07

Cellular automaton follows rules, but so what

Some scientists use cellular automaton rules to make pretty pictures. They show the formation of leaves, stems, and even things you have not seen before. All cellular automatons follow 'If-Then' rules, just as computers do. Usually, a particular and simple arrangement of black and white squares (cells) on a grid starts the whole thing off, and the neighboring empty squares are filled in using simple rules. If, for example, the east and the north squares are black, then the center square will be white, but otherwise it will be black. The squares fill in, and we might get some interesting two-dimensional shapes if somebody manages to stop the machine in time. Sometimes the machine ends with a particular pattern, and sometimes the machine cycles through a group of patterns.

The extension

Because some cellular automatons make shapes that resemble leaves, many scientists jump with joy and do not mind trying to convince you they discovered how God makes leaves on trees. Then they make the extension that since they can make things similar to those found in nature, who needs God? Wow, they can even make more varieties than God -- if God were to exist, that is.

The nonsense

The thing is that the cellular automaton does not do any more than any rule-based system. Somebody supplies the rules and the computer just follows them.
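The rule described above fits in a few lines (my own sketch; the grid size, seed, and edge handling are arbitrary choices the post does not specify):

# Sketch: the 'east and north both black -> center white, else black' rule.
SIZE, STEPS = 8, 4
grid = [[0] * SIZE for _ in range(SIZE)]   # 0 = white, 1 = black
grid[SIZE // 2][SIZE // 2] = 1             # one black seed cell

for _ in range(STEPS):
    nxt = [[0] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            north = grid[r - 1][c] if r > 0 else 0
            east  = grid[r][c + 1] if c < SIZE - 1 else 0
            nxt[r][c] = 0 if (north == 1 and east == 1) else 1
    grid = nxt

for row in grid:
    print("".join(".#"[cell] for cell in row))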
The cellular automaton does not and cannot finish any differently the next time around, and so it contains no intelligence that would improve the speed of the pattern formation. There is nothing in the cellular automaton that would improve anything, and the scientist is then left with changing this or that to see where it goes. The programmer then tweaks the rules just as any programmer could. There is nothing innate in the cellular automaton that would change anything, short of random processes, which the scientist likes to substitute for the lack of intelligence. The cellular automaton is as smart as a five-year-old, and it cannot get any smarter. And so it happened that the only person who likes a cellular automaton is another scientist. They love connect-a-dot. There are no applications for cellular automatons, and now the scientist has but one choice: argue that the cellular automaton is somewhere, somehow useful. What utility there could be they do not say. The mind job they do is good for doing it on each other.

DSSP Topics for October '07

What is geometric computing?

So you construct the square root of two by constructing the square. The diagonal is the SQRT(2). Nice. You are done in no time. But so what? Well, your PC cannot do that. You can get SQRT(2) to many, many decimal places, but sooner or later it's time for dinner and so you declare the result close enough. You can come up with many, many good reasons why close enough is good enough. You really do not have much choice but to come up with some excuses, because otherwise you would have to deal with the real question: Why is that so?

The irrational

Geometry deals with irrationals as just another number. Infinite mantissa included, all the way. In finite time. Tractable.

Energy it is

If you construct a geometric structure and have, say, the SQRT(5) as one of the distances, the distance having the value of 5 is there and waiting. What if something that represents energy is now inside the unit 5? What is going to be across the distance of the SQRT(5)? All wavelengths that form the infinity that is the SQRT(5), that's what. Maybe add a few things to make the golden ratio out of it. (If you are not excited now, you might be later.)

DSSP Topics for September '07

No string can be made into a perfect circle
So what's the big deal about the circle?

As you pursue the geometry of a circle you may discover that a circle's periphery is a transcendental number and, consequently, if you take pieces of rope or wire, you will find out that the length of any string is a finite -- that is, a rational -- number. Therefore,

1. You cannot take an everyday string and make a perfect circle out of it. And also,
2. You cannot make the perfect circle from any real thing, such as wood, clay, metal, plastic, stone, glass, or rubber bands.

So what is it about the circle that fascinates people, and why do people make stars inside a circle? Why have the ancient Greeks pondered over dividing a circle geometrically and discovered numbers that can and cannot divide a circle exactly?

More background

What entity can exist in the circumference of a circle? Because a circle consists of an infinite number of points along its periphery, we are looking for something that can exist in every conceivable point in space. A real thing will not do, for a real thing occupies but distances that have rational lengths and, therefore, real things have incremental lengths that skip some spatial points.

Energy it is

Energy exists as waves, and waves have periods that can span any spatial distance.
An energy wave, then, can exist between any two points. You are catching on fast if you realize that a standing wave can exist in both the straight and the curving geometry and that, therefore, an energy wave can close in a circle exactly. You should have no problem seeing that wave energies, such as those of light or those of virtual electrons, are not real energies but are the virtual energies. (Actually, even the scientists figured that out -- albeit in a limited way -- and call them wavefunctions.)

Making stars

The number of points of a circled star tells you how many wavelengths can make up a circular standing wave. You will then need to know something about geometry to see whether such a star truly represents a standing wave, for some numbers do not fit in a circle exactly. It is easy to dismiss the ancient Greeks as ancient. It is also easy to belittle your neighborhood witch, but my guess would be that your scientific mind has been successfully reduced and is missing some points.

DSSP Topics for August '07

The myths of the Western science: The good, the bad, and the pretend
When does a myth work against you

As a way of a definition, a myth is something that has no ready explanation. A myth could also be a belief that requires no explanation. A myth has no verifiable foundation. The myth rests on a presumption that is impossible to verify, and the only reason for the myth's continuing existence is that people will hold on to it as the truth, and that's that.

Science has myths

If there are people who think they are myth-free, they are the scientists. They surely love to say that the first benefit of science is the absence of wishy-washy myths of one kind or another. But it is not all that clear cut. If you cannot, or if you refuse to, acknowledge the existence of something, that certainly fits the belief in a myth. There is a myth of disbelief, too. Outright disbelief in the face of evidence is excusable if the scientists have impaired brain functioning, and so we can leave it at that.

The first myth of science is that there exists but one form of energy -- the real energy from coal or oil or uranium. The second form of energy, the virtual energy, comes from light and ether. Scientists have a myth that a beam of (say, laser) light puts pressure on a mirror as it bounces from it, but this is only because scientists must work with something real, and so light "must have" a real punch of momentum. Yet the beam of light does not put pressure on a mirror as it bounces. So now the myth is growing -- not only do scientists believe the myth that the beam of light puts pressure on a mirror, but now they mustn't do the experiment that would resolve it one way or the other. Would you say that a myth and a taboo are closely related? [Don't look at the man behind the curtain!]

For the rest of us

It makes no difference if scientists refuse to work with ether, or with light, as forms of the virtual energy. Somebody else will do it. We do not insist veterinarians get an extra license to treat people in addition to animals. What has happened, though, is that people calling themselves scientists are really not competent to speak or work on global warming. Scientists cannot even begin to address the shortages of energy, for they see energy as something that just keeps on depleting. Meanwhile, we will think free energy -- and if you happen to have a garage, tinker.

DSSP Topics for July '07

How small or how big is a zero-dimensional point?
This gets Zen-y, and possibly zany, so take a relaxed attitude and use your right brain.

Last month we settled on Euclid's definition of a point. A geometric point is something that cannot be divided. Fine. A zero in the numerator will then remain a zero even if we do try to divide it.

Moving On

A geometric point, being a zero, then also has no length. This means we cannot use a point to measure distance with. Is it true that no matter how many points we stack up we end up with zero length? Yes. Even though a circle or an irrational distance each exists across some nonzero distance, such a distance is not composed of some length that would be a multiple of some rational -- that is, finite -- number. An incommensurable (irrational or transcendental) distance cannot be composed of some minimum-length magnitudes; and even though such a distance can be filled with points, there would be an infinity of such points spanning an incommensurable distance. A geometric point is then a true zero.

But we also know that zero can get tricky

If we delete a point from the end of a line, then that should mean we did not shorten it, because a point has no length. But by deleting the point we also do not have a tangent going through such a point and, conceivably, the last possible tangent value our line (or a curve) should have is now "missing."

So what's the point of all this

If a tangent connects the two closest points on a curve and one point is missing, then the tangent cannot be drawn. Does the length still have the original value? We answer this as yes, and we label the missing point the virtual point. This is similar to the division of the area of a circle. A circle can be divided exactly in half -- or by any circumpositional number into exact multiples of a circle -- but the circle's center point cannot be divided. The center of a circle then becomes a virtual or "empty" point, but the area value for the circle still holds. This is important if energy is proportional to an area (and it is), because the energy is conserved exactly even though the center point is not included in either half of the now divided circle.

Probability math supports these assertions. If we plot an energy distribution across some spatial distance and ask "what is the energy at this point?" the answer will be 'zero,' because energy is proportional (commensurate) with area, and a point times any height yields no area. (For example, the number of people who are exactly six feet tall is zero, for 'six feet' is but a single point on the population distribution curve.)

Any energy that is a virtual energy (a photon, a virtual electron, a gravitational wavefunction) can be split exactly in half. This is important for a photonic reduction, collisions, and gravitation. You will note that (on this site, anyway) the photon cannot be split. You will also note, though, that we allow the photon's energy to split exactly 50-50, but only at reduction (absorption), and then the photon has transformed into another form of energy -- in its entirety, too.

DSSP Topics for June '07

The difference between a point and a line might be easy to see, but ..
How could you actually and objectively tell the difference between a point and a line?

A nice definition of a point comes from Euclid. A point is that which has no parts. A point, then, cannot be divided. This is somewhat abstract, but so far it is sufficient. Euclid goes on to define a line as that which has no width. Perhaps we can say that a line is something that has a width that cannot be divided.
Similarly, a plane is that which has no thickness, and so the thickness cannot be divided.

Is there more to this?

If a point is so small it is zero, then if we try to divide such a zero we also get zero -- that is, we get the same result, and that's the end. Okay, so a point is zero-dimensional and "zero" refers to dimensionality. But for a line we need two points. Right away we have a question: What is the minimum distance the two zero-dimensional points have to be separated by to get a line? Here, the answer is also not very difficult. We can draw any number of lines through a single point, but we can draw but one line through two points. So, if the result of our computation or analysis results in but one line, then we have two points. How do we get but one line? A tangent!

[Image: tangents_merge_but_points_do_not.gif]

When two tangents approach each other along a curve, each of them is a line touching the curve. In our example there are two tangents, t1 and t2, each touching the curve at A and B. If A and B were just single points, we could not claim we have tangents, for a tangent needs two points that give us direction. We cannot say that A and B are 0D points in the strict geometric definition, so we define A- as a point that is a part of A and, similarly, we define B+ as a point that is a part of B. What is new is that when the two tangents merge -- that is, when they obtain the same value for the slope -- the two points A- and B+ do not merge. The two points A- and B+, moreover, reach the smallest possible separation (distance), which defines the shortest possible line, geometrically speaking. So we can actually obtain (derive) the absolute shortest possible length of a line using geometry. All along the curve, the point A- is the leftmost point touched by t1, while B+ is the rightmost point touched by t2.

In a correction to the science and math books, a tangent is a line (a curve in general) that always touches a curve in two neighboring points. In a possibly better definition, a tangent is the direction of the next geometric point a continuous curve is allowed to have. (Yes, Bunky, allowed by geometry.)

Where do we go from here?

Any two points give us direction, while a single point does not. This could be another, though technical, differentiator between a point and a line. As for the real world -- and because geometry rules -- two atoms would need to be separated by at least the absolute shortest distance if they were to have, and be able to apply, 1D geometry. Such a molecule would then have the geometric and the computational property of a line -- and not that of a point. My guess is that this would be the separation of the two hydrogen atoms of a hydrogen gas molecule at absolute zero. Yes, there could be other applications. Think of situations where you need a distance for something..

The discovery of the infinitesimal by Newton and Leibniz isn't all that strange after all. The tangent dy/dx is a direction we can obtain from the smallest possible separation between two zero-dimensional points. And it is absolute, too, although each curvature has specific values. If you want to get into the meaning of 'continuous,' think of constructing or creating something that is real.

Note: Also recall that zero divided by zero is indeterminate. This makes sense if you think of an entity subject to a construct of 'zero-divided-by-zero' and thus being or spreading anywhere [and I mean anywhere, while remaining an entity].

Note2 {Aug. 3, 2007}: A 3D surface can also have a tangent that is a 2D plane.
Can we say that such a tangent plane must be formed by at least three geometric 0D points? The answer is yes, because such a tangent plane is unique: there exists but one tangent plane.

DSSP Topics for May '07

Is there more to the square besides square dancing?
What's the big deal about squares and square numbers?

The energy of a moving body is proportional to v², where v is its velocity. We could show the energy of a moving body as an energy square (having the velocity for its side) that is attached to the body. When the energy increases or decreases as the body is speeding up or slowing down, the square follows that. What becomes apparent is that:

(1) Energy variations are continuous, because both the rational and irrational sides of the square are accommodated; and

(2) Because the moving energy of ½mv² is conserved, the momentum is conserved as well, because it is mv (m is mass). Momentum is thus obtained in a single operation via the geometric mean, even though the velocity could at times be an irrational number. [This is about stopping moving bodies at a distance but, as always, you are welcome to disbelieve that.]

(3) When the path begins to curve and the distance (as well as the velocity) becomes a transcendental number, something else will have to give if the geometric mean does not hold for transcendental numbers [and my guess is that it does not]. Think virtual domain and possibly G.

DSSP Topics for April '07

The geometric mean comes from a formula that relates the vertical distance h to two horizontal distances on a semicircle
Constructing a square root from a line of any length is not obvious, but it is easy with the geometric mean
Think geometry and get closer to the golden proportion

The best way to visualize the mathematical property of the 'mean' is to think of centering or balancing. The arithmetic mean comes up when a party of people wants to even out -- that is, to center -- a bill in a restaurant. You add up all the charges and divide by the number of people taking part. The arithmetic mean is then the average.

The geometric mean comes up when we have several lengths or distances and we want to get their mean. The geometric mean, however, is not the average of the distances. Geometrically, the mean value of two lengths is such that the square of the mean gives us the same value as the product (area) composed of the two lengths (or distances).

The geometric mean hails to us from the ancient Greeks. It's a nifty relation, because we can get the geometric mean through geometry using a semicircle. (It is said Pythagoras started his studies with a semicircle.) As it turns out, we will not want to use arithmetic to get the geometric mean, because geometry works for irrationals whereas arithmetic does not. We are going to start with the formula and prove it via the Pythagorean theorem.

The geometric mean is the height h shown in the illustration; it reaches its max when the height becomes the radius of the circle.

[Image: Geometric_Mean_Semi-circle.gif]

Prove that for any height h on a semicircle the relation h² = x1•x2 holds. The geometric mean h is then SQRT(x1•x2).

From the Pythagorean theorem we can write:

z1² + z2² = (x1 + x2)² = x1² + 2•x1•x2 + x2²

We can also express z1 and z2 individually as:

z1² = h² + x1²   and   z2² = h² + x2²

Now add the above two equations together:

z1² + z2² = 2•h² + x1² + x2²

The equation above and the very first equation have the same left sides.
What does it mean? The geometric mean shows that we can take the area of any rectangle (with sides x1 and x2) and equate it with the area of a square. Having a square we also have its square root, which is the side of the square, and in our case it is h. Also, if x1 is a unit distance and x2 is some length (or distance), then the geometric mean gives us SQRT(x2). Yes, in geometry we need unit length. (If you are good you can prove that the smallest length x is the infinitesimal of x, or dx. If you are really good you can prove that the infinitesimal of x is the distance between two hydrogen atoms of the H2 molecule at absolute zero.)

Okay, so the question is: What to do with it? Or: How to apply it? Put your brain in gear. Could the sides of the rectangle be irrational numbers and, if so, does the geometric mean equation hold? If it holds, that means we are multiplying two numbers with infinite mantissa and getting the result in finite time! Not a bad start, particularly since your PC or the government's mainframes cannot do that. Now, put your brain into overdrive [no drugs needed] and equate z1 and x1 with the golden numbers a and b respectively. The golden number a is 1 + SQRT(5) and b is 2. Yeah, h is the height of the Great Pyramid, a is the length (distance?) of its side and b is one half the length of its base. In the Quantum Pythagoreans book we have an analysis of the Great Pyramid where the geometric mean is prominent. See a sample.

Note {Jan 2008}: If you think the brain is geometry-based (and it is), you could keep x1 as unit 1 and, by imagining the semicircle, instantly obtain a square root of just about any number.

DSSP Topics for March '07

Cutting can be physical or logical. If you double something while maintaining a symmetry about the vertical axis, you duplicate it. Would you call it a division?

Scientists call it cell division but what they really mean is that when a biological cell 'divides,' one cell becomes two cells. Cell division is really cell duplication. We are not going to quibble about inadequate wording, for scientists oftentimes misname things because they do not understand them anyway. When scientists say they divide something, they really only understand that after the division you have two halves.

Last month we made the case that a photon cannot be cut or divided into parts. That is certainly the case, but this month the topic is about another possible division; not the one that cuts but the one that doubles. The "division" that doubles reflects some object about the vertical axis. While it is easy to see that the reflected object is very close to the original, it becomes difficult to actually double an object that is real. In a real object, every piece and every atom needs to be reflected and duplicated. In the virtual domain the duplication is easy because you are reflecting and duplicating only the information that is on one side of the axis or a (mirror) plane. The creation of new or existing things happens in the virtual domain first. You will also need geometry to establish the axis or the plane of symmetry. So, you might think this is no big deal. If you are heavily reality-oriented -- that is, left-brain dominant -- you will have reached the conclusion that the world is a zero-sum game.
You might also think that in order to get something you would have to take it from somebody else. Sure enough, you will have learned more about destruction than about creation, for creation and growth happen only through the virtual domain of the infinite. Would you be brave enough to say that:

• The tangible -- that is, real -- world was created from an idea?
• The tangible things are secondary, in that they can be created at will?
• The tangible things we know of are not necessarily the only, or the best, tangible things that exist?
• The tangible things can be destroyed or uncreated and dissolved back into the virtual domain?

DSSP Topics for February '07

So you split the photon in a half-silvered mirror. Do you get two halves or what? You cannot cut a photon, baby!

This guy Compton whacked an electron with a photon and, because he measured a lower energy photon, he thought that the photon got split and a part of its energy changed the electron's path. For that he got a prize from the Nobel committee. This might be good Swedish meatball politics but that is not why you are here. Compton never used free electrons in his experiments and, as he whacked the atom's core with a photon, many things could and did happen. There is a bit more on this if you put your money on the Compton effect and got yourself cornered. Once you understand his effect is a defect, you are ready to look at a photon by itself.

After a photon encounters a half-silvered mirror, it is reflected and (or?) transmitted.

Picture filename: branch_or_split_photon.gif

At this point, however, we do not know if the photon is:

(a) Physically split -- that is, partitioned or cut. This would mean that each half goes its own way;
(b) Branched -- that is, one branch goes one way (reflected) and the other branch goes another way (transmitted) while both branches are interconnected; or
(c) Reflected as a whole, while the next photon could be transmitted as a whole. This would mean that some 0/1 (or heads/tails) randomizing action does the steering.

Experimental Results

If two photonic detectors are placed in each of the possible paths (in both branches), neither detects a half-energy photon. Possibility (a) is quickly eliminated. To decide between (b) and (c), you will need to get a bit into the instrument called the interferometer. It measures distances along each path (in each branch) and, if possibility (c) is accepted, the interferometer could not work the way it does. (More on this is in the Quantum Pythagoreans book, including multi-path, instant reduction, and of course gravitation.)

A photon of light cannot be split (cut) into two individual sub-photons. A photon, however, can be branched. The Compton effect is wishful thinking by the latter day scientists.
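To see in numbers why the detector results kill possibility (a), here is a toy Python sketch of ours (not from the book). Under the physical-split model every event would register as two half-energy detections; whole-photon arrivals -- which is what both (b) and (c) look like to bare detectors -- always deliver the full energy to one branch or the other:

import random

E_PHOTON = 1.0   # photon energy, arbitrary units
N = 6            # photons sent at the half-silvered mirror

def split_model():
    # Possibility (a): the photon is cut in two; both detectors fire at half energy.
    return [(E_PHOTON / 2, E_PHOTON / 2) for _ in range(N)]

def whole_photon_model():
    # What (b) and (c) both look like to bare detectors: one detector fires
    # with the full energy, the other stays silent.
    return [(E_PHOTON, 0.0) if random.random() < 0.5 else (0.0, E_PHOTON)
            for _ in range(N)]

print("model (a):   ", split_model())
print("whole photon:", whole_photon_model())
# No half-energy detections are ever observed, so (a) is out; telling (b)
# from (c) takes the interferometer, as described above.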
DSSP Topics for January '07

Global warming got you down? Thank the scientists! Of course, you could put the scientist in charge of cleaning it up and finish off this planet in a hurry.

Science stands ready to explain everything and anything. BGW (before global warming) were the good old days when we could wake up to a good cup of coffee only to be told that coffee is not good for you. Once you got over that, the next month's coffee would be responsible for yet another ailment. After a while, coffee would be scientifically proven to be okay after all. Coffee got filed in the same category as alcohol: It's good and it's bad, it depends really, use in moderation, no truth to coffee grinds foretelling your future but roughage is okay, pay more for decaf, don't forget water. The bottom line: consult your doctor.

Now that we know that all this bugaboo is about keeping doctors in business, global warming arrives. You might even figure out you don't need a doctor. Load up all scientists on a ship and take them to the moon. Al Gore will lead them. The mission, of course, is to research GW from high up. We know ahead of time what will happen. Scientists will report that because the moon has no atmosphere, there is no GW problem on the moon. And then the lead scientist says to the rest of them: "Hey, let's go back and apply our findings to the earth!" Problem solved. Scientists on the moon will be supplied by platforms being pushed by lasers -- but if that does not work, well, scientists all said it would. Once they find themselves without supplies they could all come back (can you hear me?) by pushing the G button on that space elevator -- it was their brilliant idea, too. Happy New Year. Oh, think free energy.

Note {Nov 1, 2007}: You do not need to be a scientist to see that the greenhouse gas model for global warming is intellectually weak. Every weatherman will tell you that on clear nights the atmosphere cools off in less than a couple of hours and no amount of greenhouse gas such as CO2 will slow this radiative cooling. On cloudy days the heat stays in and, again, CO2 has nothing to do with it. Besides, you may have noticed there is no glass in the atmosphere, and without glass the whole greenhouse model falls apart. (Glass reflects and re-radiates some of the heat back and does not allow cooling through air movement, that is, convection. Glass is transparent to visible wavelengths but absorptive of heat wavelengths.) If you want to get involved, work on free energy; it's a bit more difficult than protesting in the street or voting for a clueless political party but it is far more rewarding.

A book by Mike Ivsin

To Publisher...

Quantum Pythagoreans book describes and applies the real and the virtual aspects of our environment. While these aspects are separated, the book takes on the task of bridging the two, which leads to organization and self-organization. More ..

DSSP Topics for December 2006

Wave-particle duality really isn't about the particle. Get on board with momentum and think big.

Science writers like to tell us that the reason atoms are different is because they are small. Atomic components are small all right, but that is not the whole story. Mainstream scientists say that inside the atom things get weird because the small size of the atomic components enables them to unbecome classical by acquiring a wavelength -- that is, electrons and protons cannot be treated as everyday objects the likes of apples, billiard balls, and planets. Professors readily pull out the equation and tell us the wavelength of an electron, and then quickly tell us that a comparable wavelength of an orange is infinitesimally small. The orange, they say, cannot pass through two slits and appear on the other side unsliced and unjuiced. What they ignore is that the very de Broglie equation they use has nothing to do with the particle's size but has everything to do with the particle's momentum.

More Background

De Broglie's equation is not wrong. People calling themselves scientists, however, do not get it. The equation relates momentum p to the wavelength lambda through an inverse relation that also includes the Planck constant h, that is: p = h / lambda. Momentum p, moreover, is a product of mass m and velocity v.
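Here is a quick numerical illustration of ours (the speeds are arbitrary picks; h and the electron mass are the standard constants):

# de Broglie wavelength: lambda = h / (m * v).  The wavelength tracks
# momentum, not size.
H = 6.626e-34           # Planck constant, J*s
M_ELECTRON = 9.109e-31  # kg

def wavelength(m, v):
    return H / (m * v)

# An electron at a modest lab speed:
lam = wavelength(M_ELECTRON, 1.0e6)
print(lam)                      # ~7.3e-10 m, an atomic-scale wavelength

# A 1 kg "toaster": what velocity gives it the very same wavelength?
v_toaster = H / (1.0 * lam)
print(v_toaster)                # ~9.1e-25 m/s -- absurdly slow, but finite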
So, the equation m·v = h / lambda deals not only with mass (or size) but also includes the particle's velocity. The point is that you and I can give any size particle any wavelength we want. The point is that an everyday object, even a toaster, can acquire a wavelength comparable to the electron inside the atom. So there. The wave-particle duality the modern scientist has talked about for close to a hundred years is really wave-momentum duality. It is easy to speculate about the scientists and whether they are really stupid or just pretend to be. My guess is that their herding instinct puts them all onto a plausible and simplistic platform they each can defend. For your part, however, you may want to get educated quickly, for the scientist is not going to help you with real things such as global warming or free energy. Scientists are useless in explaining either the perils or the opportunities -- it's all swamp gas to them.

DSSP Topics for November '06

Supernova goes bang and it is all about the conversion of matter to energy. What, pray say, happened to the black hole?

The so-called scientists think that a sun starts to shrink and then it bounces back in an explosion called a supernova. But these guys don't bother to explain that the energy does not add up, because the sun expands to a much greater diameter than the one from which it started. They would like you to think it is just like a ball bouncing, but they do not know why the ball bounces to a greater height than the starting one. In all, they do not have a clue and feed you nonsense.

There is space between electrons and protons because these components need spatial distance to remain in a tractable relationship -- that is, to maintain mathematically computable states. If spatial distances are reduced through pressure or collisions, the atom ceases to remain computable and its components disengage. But that, in and of itself, does not a nova make. What happens next? Each component of matter such as the electron and the proton can be broken up into other components, which results in corrupted matter -- that is, antimatter. (Details of the mechanism, which is conditionally reversible, can be found in the Quantum Pythagoreans book.) The antimatter annihilation, however, irreversibly transforms matter into energy. How much matter is destroyed at any one time then makes the bang into a nova or a supernova. What this also means is that matter undergoes conversion to energy and cannot accumulate without bound into such a thing as a black hole. Not all is lost for the scientist: If you really believe in black holes, have a picture taken with Hawking and -- smile.

DSSP Topics for October '06

If you release a body from a tower (of Pisa, say), would it begin to spin?

Newton defined inertia as the force that resists a change in the velocity of an object. Conceptually it works. The ball, the bullet, the train, the ship -- all mass objects put up resistance to change from either their present speed or their present direction of movement. (Speed and direction could also be combined into a vector parameter called velocity.) If you were to think that the property of mass inertia has some computational intelligence, you would model it by saying that there must be a "gyro" in there somewhere, because that is one way of figuring out something is trying to change the body's direction. You would also arrive at the conclusion that there must be an absolute reference, because a change in speed gives rise to a force (of inertia) that has nothing to do with some arbitrary reference.
That is, some people can be fooled, but a dumb piece of rock cannot be fooled by an external reference that pretends the speed is now different and there is no need to put up resistance to the change in speed. All this should be pretty clear to many, and the unfortunate part is that we are not going to try to persuade the rest, for we are building on that and are ready to move ahead.

Does inertia have more intelligence? Can Newton's model be extended? If (the force of) inertia resists the (external) force being applied unto a body, what are other available mechanisms that would do just such resistance? If inertia is smart, it could come up with other things to do and resist the external force even more. If, for example, the mass body could begin to spin, the energy from the applied force is converted to rotational energy in addition to the linear energy. If inertia were intelligent, it would have the body commence spinning and resist the change in velocity even more. And that is indeed what happens. Two bodies being gravitationally attracted to each other speed up toward each other and, in addition, commence spinning to further slow down their mutual acceleration. Of course, there are no limits on intelligence. Can you think of another thing that could be done to increase the resistance (increase inertia) of a body? Think of a "body" as a possible assembly of many bodies and definitely think geometry. [That pancake tastes good this morning -- wonder why.]

DSSP Topics for September '06

If our body would not absorb hard radiation, we could be in good shape. There is a big difference between reflection and absorption. To reduce the photon we need a particular context.

Photons of light have energy called the virtual energy. At reflection, photons impart no momentum and, if you were to do the experiment, you would find zero pressure at the mirror surface. When a photon reduces, however, a transformation takes place and many different forms of real energy can arise. Which molecule or atom reduces X-rays and Gamma rays in your body is the place to start. It turns out it is the atom of hydrogen inside the molecule of water that gets whacked, and that changes water into a free radical. This is fairly recent and there is more on this in Nick Lane's Oxygen (or in our review of the book). Classically there is not much we can do because the photon just slams into the molecule, and some shield, it seems, would be the only way to stop the ray. Fortunately, the truth is more complicated but far more interesting.

The mechanism

The picture you see below is from the Quantum Pythagoreans book. It shows the mechanism by which the molecule of water is broken up by a photon. The key is in two bodies (components, sub-atomic particles) receiving momentum in the context of the conservation of momentum. The photon is absorbed by transforming its virtual energy to real energy (and momentum is conserved). You are also seeing the creation of the free radical, an electron-deficient (odd count) but electrically neutral almost-water.
Picture filename: water_breakup_free_radical.gif

Now, how would you or could you stop the photon from reducing inside your body:

• Wear a shiny hat or a suit to reflect X-ray photons
• Get in the bunker and stay there
• Use your strong will
• Modify photons in some way so that they do not reduce -- their energy would stay the same but would not reduce inside the hydrogen and would go right on through your body
• Use another form of a shield that absorbs X-rays (reduces them and creates heat) that would not rely on heavy metals or piles of dirt
• Transition to the virtual domain. Admittedly high tech, but if your body has no real parts the photon could not reduce in the framework of momentum conservation. (The high tech part is in the ability to transition between domains repeatedly)
• Modify the charge relationship between the proton and electron and disrupt the nominal + and - field inside the atom
• Provide an abundance of electrons so that even if some water molecules are broken up they cannot do further damage through the free radical electron-snatching mechanism. That is, repair the water molecule or the consequent damage quickly

Note1: The ejected components (electron and proton) cannot each have more than 1/2 of the photon's energy because (you guessed it) momentum is conserved. Does this mean that photoelectric cells cannot have greater than 50% efficiency?

Note 2: {June 3, 2008} The photonic mechanism shown here is not the same one that produces the HHO (Brown's gas).

DSSP Topics for August '06

Send a photon bouncing between parallel mirrors and see what happens.

You learned at school that photons of light have itsy bitsy momentum and push a mirror just a teensy weensy bit when they bounce from it. You have this piece of information on great scientific authority, including NASA. Then you grew up and learned the same thing in college, except then you were fed the equations for the actual photonic (radiation) pressure and you also paid good money for every credit hour you were investing in yourself.

Pal, you got the mind implants in all right. Or something similar Arnold Schwarzenegger would say, borrowing the idea from Total Recall. Sorry, pal, but you do not want to defer to the authority. You cannot just sit there without asking questions. How many of you asked the teacher, the professor: "But if the photon bounces between parallel mirrors, we got ourselves a perpetual motion machine! Could you explain where the logic fails?"

Well, your logic is just fine. But there could be several answers coming from the suddenly beleaguered teacher:

• The bounces in time stop because the photon is eventually absorbed and, you see, there is no perpetual machine. (This is the actual response from a scientist -- poor mirror saves bad science!) (Note: {Jan, 2007} If you go through the math -- and since the light's wavelength (energy) does not change after a mirror bounce -- you will see there is energy over-unity right after the first bounce under the presumption that the mirror received energy and moves.)
• Well, the photon loses its effective mass as it bounces and slows down, you know -- just like you and I and the soccer ball
• Theoretically, there is a light color shift. Photons do not slow down, but because their frequency changes they have less and less of a punch
• You got me. Light does not impart pressure at reflection. (This is the truth but it has yet to be spoken [but what will they teach then?])

Yet, light has energy and light is known to move things, cut things, and vaporize things.
So where is the catch? Light can become a mover and shaker, but it first must be absorbed -- that is, transformed. At a bounce there is no absorption and there will be no pressure at reflection -- ever. At absorption the photon is transformed to real energy and the photon is gone. Next month we will look at ionizing radiation and how light breaks up a molecule of water -- the mechanism that messes up our bodies during hard radiation. Once you know the mechanism, there could then also be a way to counteract it.

DSSP Topics for July '06

Give an object kinetic energy but now you have to conserve it. The question is how.

You give energy to a stationary object or add energy to a moving object -- and you can do this every day with the same result: the energy is conserved. Fine. But now you have to figure out how it can be done and how it is done by nature. First, you make a (good) guess that the energy you gave the object needs to be attached to the object as a form of vibration that, however, is not visible. You are not miffed by its invisibility. Light, for example, is not visible unless it enters your eye and reduces at the retina. And so you call the energy the virtual energy. Second, you will need to arrive at some energy computational framework because you will need to understand and explain what happens at the collision of two objects. The wave is natural here, for waves can add and subtract nicely, just as energy can speed up or slow the object down. Third, you will figure out the granularity at which these energy waves operate. You will quickly settle at the atom, giving preference to protons and neutrons. Finally, you get all excited because the wave could conceivably be imparted at a distance, and psychokinetics, if amplified, could be applied to stop moving bodies at a distance. By the way, an increase in momentum changes the energy waves within the atom and -- if the core is unstable (radioactive) -- its half-life will be impacted. You just came up with the mechanism that explains the increase in half-life as the object's speed increases -- and absolutely at that.

DSSP Topics for June '06

New Australian Star. What it means and where it comes from.

Australians are looking for a star to call their own. It is important to them, of course, and the discourse is par for the course. The Australians may be looking deep into their soul and they will want to ask the aborigines for advice as well. The eight pointed star may be a favorite of some, but there are eight pointed structures that are harmonious and there are eight pointed stars that are disharmonious. There is also the question of the meanings of the horizontal, the vertical, and the diagonal. So, here is our contribution: The New Australian Star issues from the constellation of the Southern Cross. Each star of the cross now enlarges into arms that form two sets of diagonals. One set has the right triangle in the 3:1 proportion, which produces the diagonals of the square root of ten. The other, closer-together stars on the horizontal produce triangles in the 2:1 ratio, which produces the diagonals of the square root of five. The point to be made is that the diagonals should not be obscured by solid structures because the diagonals are about vibrations. Spirals or waves can be pretty much anywhere, for these get along with vibrations just fine. The Australians will need to figure out why there are two separated lines for both the horizontal and vertical direction. The (big) hint comes from your bones, which have "empty" space.
(The Great Pyramid also has empty spaces, really virtual structures, that some people call "shafts" but they are axes for vibrational formation.) Enjoy.

DSSP Topics for May '06

Are irrational numbers not rational -- that is, not reasonable? If irrationals are not reasonable, does it make them bad? Is "wow" bad? If you truncate an irrational number, did you solve a problem or did you create one?

The best example of an irrational number is the square root of two, but sticking to arithmetic to discuss irrational numbers quickly runs out of steam. This is because irrationals issue from geometry and not from the everyday experience of counting. How many times did you need a square root in a restaurant? If you need to deal with space, chances are quite good you will deal with the square and the square root of two. We all know that the diagonal of a unit square is SQRT(2). The unit square's side is 1, but if your square happens to have a side of length b, then the diagonal is b·SQRT(2). Either way, the diagonal is a number that has an infinite mantissa. This is "no problem" for scientists because they just chop it off -- just at the point where it would not fit in the computer -- and declare it, say, close enough. There is no need to lament over the chopped off number, but what has happened is that the irrational number was converted into a rational number by being chopped. The chopped number now becomes a real number because it is finite, and the act of chopping transforms the irrational number irreversibly into a real number.

The subject of this month's topic is number chopping -- or rounding, or truncation. How many decimal places do you need? Is there a point where the chopping of the irrational number is good enough? In other words, is there "close enough" that would make it "good enough?" Take two squares as shown here. Which of the two do you like better?

• If you were to explain why you like one square or the other, would you use different words for each?
• If you were to change your mind about the square you like, would it reflect different ways you are perceiving the liking of the square and the explaining of liking it?
• What do irrational numbers have to do with any of this?

In Summary

The square on the left is a solid square and reflects the truncation of the irrational number. The diagonal distance of the solid square is exact and finite. The diagonal of the non-solid square is open and there is no limit on the expression of the irrational number of its diagonal. Both squares will be described differently [think left and right brain]. It is for this reason that the truncation of the irrational number does not have a measure of adequacy. The irrational number is in a category of its own -- a category that requires the mantissa of the irrational number to be infinite. As a Pythagorean you know that a number can become, and the act of number (mantissa) chopping does not actualize the irrational number. As a Pythagorean you know that if the Good Lord carved a square as a groove into the stone it would not and could not glow. Throughout this topic you could substitute 'incommensurable' for 'irrational.' In that case the transcendentals are included along with the irrationals in this month's topic.
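A short Python sketch of the chopping itself (our example): square the truncated decimal and it always misses 2, no matter how many places are kept.

from decimal import Decimal, getcontext

getcontext().prec = 60
sqrt2 = Decimal(2).sqrt()    # itself a 60-digit truncation, of course

for places in (2, 6, 12, 24):
    chopped = round(sqrt2, places)   # the irrational, chopped to a rational
    print(places, chopped, chopped * chopped)
# Every chopped value squares to something other than 2 exactly -- the act
# of chopping has turned the diagonal into a different kind of number.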
DSSP Topics for April '06

You were told the photon knocks the electron off its path, but what happens to the photon? Okay, if photons could move other things without being absorbed, where do the tired photons retire? The absorbed photon may create heat, but what happened to the photon? What happens to the photon that we see and register as light? How is a photon (re)created?

Scientists tell us all about the interesting things photons do, but they don't tell us what happens to the photon after it does its thing. Chances are good scientists do not know. Even the so-called 'Compton effect' erroneously presumes that a photon pings an electron and loses energy, but the scientists' brain is so limited they cannot conceive what would happen after the second, third, and the next "collision," because then the answers do not come out as readily. If we take scientists up on Compton -- who is fortified with the Nobel something or another -- then the world would be full of beaten up and tired out photons zombily clunking their way around. The fact is, scientists cannot think beyond the second bounce. Inside the scientists' mind it is not unlike the proverbial Johnny Carson fruitcake -- the next fruitcake you see is the same one you sent to somebody else.

Photons cannot and do not bump and displace electrons, because photons and free electrons are both waves. Photons are created with a fixed amount of energy that stays with the photon for the life of the photon. That is also why a reflecting (or refracting) photon cannot and will not create pressure at reflection. When a photon is absorbed, its energy -- which is also a virtual energy -- transforms into real energy, but only if the photon finds two things to bump into. Two things are needed because two things can then move in opposite directions while conserving momentum. So there, Compton, you can "bombard" free electrons with photons all day long but none will change its path. Scientists can bombard us with Compton's Nobel Prize, but that will not impress the mind of those who ask simple and obvious questions.

The photon needs a systemic description -- that is, the photon exists in a cycle from its creation to its absorption to its recreation. Once a photon is absorbed, it is transformed from the virtual into the real energy and the original photon is gone. The photon's energy lives on in another form.

Photon life cycle

A photon is created when a charge is accelerated. For one reason or another the charge does not keep its energy the same way a real particle such as a piece of rock does. As long as the charge is not accelerating, no energy -- in the form of the photonic virtual energy -- is being produced, but as soon as the charge becomes accelerated, the charge begins to radiate virtual energy that is the photon. For your homework, you want to figure out what geometry of the atomic electron does not accelerate the electron's charge. If you like homework, recall that thoughts in your brain are charges (potassium ions?). Does this mean you could create photons in your head? [What?]

Note {4/13/06}: If a scientist suggests that light can also increase in frequency as it collides with bigger things such as molecules, the scientist is again not thinking past the second bounce. If a photon were to acquire higher energy as a result of hitting something bigger, photons around us would be in the X-ray region very soon. Again, the photon's energy does not change during non-reducing interactions such as reflection.
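A minimal sketch of the two-body bookkeeping, assuming equal masses and this month's premise that the absorbed photon delivers energy but no net momentum (the numbers are arbitrary units of ours):

import math

# If the absorbed photon brings in energy E but no momentum, whatever starts
# moving afterward must have zero TOTAL momentum.  One recoiling body cannot
# satisfy that; two bodies moving oppositely can.
E = 1.0        # absorbed energy, arbitrary units
m1 = m2 = 1.0  # two equal-mass fragments (equal masses chosen for simplicity)

# Give each fragment half the energy, with velocities in opposite directions:
v1 = math.sqrt(2 * (E / 2) / m1)
v2 = -math.sqrt(2 * (E / 2) / m2)

print(m1 * v1 + m2 * v2)                     # 0.0 -- momentum conserved
print(0.5 * m1 * v1**2 + 0.5 * m2 * v2**2)   # 1.0 -- all of E, half per fragment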
DSSP Topics for March '06

Flip over hydrogen and get helium -- introducing the Inclusion Principle. Who supplies the mirror?

Pauli noticed that the two of helium's electrons always have opposite spin and he quickly called it the "exclusion principle." In the early and QM-heady years of the last century, everybody was getting in on the act by naming things, and since nobody observed the two of helium's electrons to have the same spin, Pauli just said -- without rhyme or reason -- that that's the "principle" and no explanation is necessary. Bohr did something similar when he could not explain the discrete quantized atomic orbitals that matched Balmer's equation: he called the space between orbitals the "forbidden orbitals." Geez. It is not unlikely Bohr happened one day to walk by a sign saying 'Swimming Forbidden' and that was good enough for him -- forbidden it is. Why, Pauli has it even better. If a new quantum number is discovered, as when the magnetic field split the hydrogen orbitals some more, it all continues to fit the "principle," because each electron gets a new extra quantum number. Pauli's exclusion principle does not explain anything and does not predict anything -- it is self-defining and it fits after the fact. You can make your own principle, such as the "human water exclusion principle." When there are no humans living under water just like the fish, your principle is good. If you scratch your head on this, the best way of thinking about this high science is that each atomic particle gets its own set of rails and no particle can then collide with another particle. This is the best invention since, but of course, the railroad!

New Proposal

Then again, the real question deals with explaining why the two electrons of helium have opposite spin. The mirror symmetry makes it possible. If there exists a two-dimensional symmetry (symmetry about a plane), the reflected clockwise spin of one electron becomes the counterclockwise spin of the other electron. The way of creating helium out of hydrogen is to include the second electron, and also the second proton, with the plane symmetry. So, you see, it is really the inclusion principle that is at work here, for the idea is to grow helium from hydrogen by allowing new components through rules such as symmetry that make sense computationally or geometrically. And the only way of including a new component across the mirror plane of symmetry is to reverse the spin. (That is how your left hand becomes the right hand on the other side of the mirror, too.) Now, we can make a prediction using the new principle: the second proton inside the helium core will have its spin reversed as well.

Thinking Big

The mirror of symmetry exists because it simplifies the computational aspects within the atom. The next question deals with where the "doubling of helium" manifests -- and this is perhaps worth some study. The doubling could be on the molecular level -- for gases, anyway. It is apparent that the doubling cannot go on forever. All computability concepts reach limits, such as the octave for the outer orbitals, which may need additional computing geometries in addition to the two-dimensional symmetry. Whether the plane symmetry of helium is linked in some way to the angular momentum and the flat geometry of the solar system is a big jump, but awfully tempting nonetheless.

Note {May 5, 2006}: Balmer is the guy who tied the quantum behavior of the electron to math, constants and all. Can you tie Balmer to the King's Chamber, atomically speaking?
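One way to put the mirror argument in symbols -- a sketch of ours, treating spin classically as an angular-momentum pseudovector rather than as the quantum spin itself:

import numpy as np

# For a pseudovector L (spin, angular momentum) and a mirror with unit
# normal n, the mirror image is  L' = 2 (L . n) n - L :
# the component along the mirror normal survives, the in-plane components
# reverse.  A clockwise spin about an axis lying in the mirror plane comes
# back counterclockwise -- the left hand becoming the right hand.
def mirror_spin(L, n):
    n = n / np.linalg.norm(n)
    return 2 * np.dot(L, n) * n - L

n = np.array([0.0, 0.0, 1.0])    # mirror plane z = 0
L = np.array([1.0, 0.0, 0.0])    # spin axis lying in the mirror plane

print(mirror_spin(L, n))         # [-1.  0.  0.] -- the included spin is reversed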
DSSP Topics for February '06

The Golden Ratio is not exclusive to Fibonacci numbers. The key is in the infinite addition (read superposition) mechanism.

The Fibonacci series starts with numbers 1, 1, and each consecutive term is formed by adding the previous two terms. As the Fibonacci sequence unfolds, the ratios of two consecutive terms of the Fibonacci series converge closer and closer toward the value of (1+SQRT(5))/2, which is the Golden Ratio, expressed as a ratio of two numbers a/b. We do not want to reduce a/b any further, and so a = 1+SQRT(5) and b = 2. The Fibonacci series acquired some mystique because of its Golden Ratio convergence property, a property discovered by Kepler. It turns out that any numerical series having the next term generated by summing the last two of its terms will also converge toward the Golden Ratio in the ratio of its two consecutive terms. The mathematical way of saying it is that the convergence is not a function of the initial conditions.

The general proof is straightforward. We start with two numbers representing some positive lengths, say j and k. The next term is, by definition, the sum of the two:

S0 = j + k

All partial sums S(n) contain numbers j and k. The next terms are:

S1 = S0 + k
S2 = S1 + S0
..
S(n-1) = S(n-2) + S(n-3)
S(n) = S(n-1) + S(n-2)

Now, the S(n)/S(n-1) ratio should converge toward the Golden Ratio a/b if our hypothesis is correct. Then, as n increases toward infinity:

a/b = Limit of S(n)/S(n-1)

Substituting for S(n):

a/b = Limit of (S(n-1) + S(n-2))/S(n-1) = Limit of (1 + S(n-2)/S(n-1))

a/b = 1 + Limit of S(n-2)/S(n-1)

The expression S(n-2)/S(n-1) is a ratio of two consecutive but inverted terms and, as n increases toward infinity, the ratio converges toward the reciprocal of a/b, which is b/a. Then,

a/b = 1 + b/a

This equation is not a function of any partial sum S(n) and, therefore, a or b is not a function of j and/or k. Any value of j or k can be used in the construction of the series, and the result is the equation a/b = 1 + b/a.

Finally, does a solution exist for a/b?

a/b = 1 + b/a

Multiply both sides by a/b:

(a/b)^2 = a/b + 1

(a/b)^2 - a/b - 1 = 0

For the positive solution of spatial distance:

a/b = (1 + SQRT(5))/2

a = 1 + SQRT(5) and b = 2

(1) So what's the point in generalizing a particular sequence while "demoting" Fibonacci and his cool sequence of numbers? A lot, if you think the Golden Ratio is in some ways related to free energy. One could -- but does not have to -- work with structures that subscribe to Fibonacci. (2) As a Pythagorean you always know the operational representation in nature, and addition or subtraction is about the superposition of the wavefunction. (3) {Feb 10, 2006} This month's topic has a great implication for the number zero. Some people include zero in the Fibonacci sequence and that works fine for addition (superposition of the wavefunction). As soon as we get into ratios, however, zero needs to be excluded. We start the Fibonacci sequence with 1, 1 to avoid the exception but -- if we were to generalize -- zero can be included as a real number as long as it is subject only to addition. Only virtual numbers can be subject to infinite acceleration, which is none other than the instant light rebound.
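A quick numerical check of this month's claim, in Python (the three starting pairs are arbitrary picks of ours):

# Start the add-the-last-two-terms series from ANY two positive lengths j, k
# and the ratio of consecutive terms still converges to the Golden Ratio.
PHI = (1 + 5 ** 0.5) / 2

for j, k in [(1, 1), (7, 2), (0.3, 100.0)]:   # arbitrary starting pairs
    a, b = j, k
    for _ in range(40):                        # unfold 40 terms of the series
        a, b = b, a + b
    print(j, k, b / a, PHI)
# All three ratios agree with PHI to many decimal places: the convergence
# is not a function of the initial conditions, exactly as proved above.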
DSSP Topics for January '06

Particle-wave duality started with de Broglie a hundred years ago, and it is still making waves. Particle-wave duality may seem weird, but only if the interpreter does not see the picture. Particle-wave states are mutually exclusive and, for this particular transformation, reversible. Finally, the correct way of saying it is 'momentum-wave' duality.

It started as a purely mathematical relationship in de Broglie's head. Momentum, which is the moving energy of a real thing, could be equated with a wavelength, for the wavelength also relates to energy. On the one side of the equation is mass and velocity -- that is, momentum p -- and on the other side is the wavelength lambda and the Planck constant h. Altogether it is p = h/lambda. What holds this relation together is the conservation of energy. The conservation of energy states that energy can be transformed from one form to another, but the overall energy cannot increase or decrease.

Really 'momentum-wave duality'

If it transforms then it must be exclusive

In the framework of the conservation of energy, for every bit (quantum) of energy on the left side, an identical amount will transform to the right side. As a result, the electron cannot be both a particle and a wave, because the electron either transforms or does not transform. A moving particle's energy (that is, momentum) and a wave (its wavelength) cannot exist simultaneously. If you wish to do an experiment, you will be able to measure momentum or a wavelength, but not both at the same instance. So, if you bind an electron as a real particle, it will not do the wave tricks going through the dual slit, for example.

The most potent question since day one concerns the parameter of mass of the moving electron. If we could detect -- that is, measure -- electron waves, we do not need (to know) the electron's mass, for the energy is conserved and the conservation of energy is upheld by the wavelength measurement alone. Moving free electrons do indeed have a wavelength, and the wavelength was measured and validated by Davisson and Germer in 1927. This outcome is at the center of the "quantum weirdness" because mainstream scientists hold on to the electron's mass at all times. Even though mass is not needed because the conservation of energy holds without it, the present mainstream science guys -- really all of them: Feynman, Hawking, Wheeler, Penrose .. -- hang on to the electron's mass as the last straw the classical mindset wants to take along to the domain of quantum mechanics. You cannot, and in a way you do not want to, take mass with you to the virtual domain of quantum mechanics. [If you do hang onto mass, you will never get there. Of course, if you cannot go there you may as well hang on to mass.] In the picture above, then, the mass of the electron on the right hand side is really not there: the mass of the electron dissolves and "disappears." The good news is that the transformation is reversible and the electron reappears upon measurement. The measurement is at times called the reduction or the collapse of the wavefunction, and in actuality it is the transformation going from right to left. The particle must be moving to have momentum and thus enable the transformation into a wave. The transformation of the particle into a wave and vice versa is a 'momentum-wave reversible transformation.'
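The Davisson-Germer numbers are easy to reproduce. A sketch of ours with the standard constants -- their electrons were accelerated through about 54 volts:

import math

H = 6.626e-34        # Planck constant, J*s
M_E = 9.109e-31      # electron mass, kg
E_EV = 54.0          # accelerating energy in electron-volts
E_J = E_EV * 1.602e-19

p = math.sqrt(2 * M_E * E_J)   # momentum from kinetic energy E = p^2 / (2m)
lam = H / p
print(lam)                      # ~1.67e-10 m, matching the 1927 nickel-diffraction result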
If you were to get deeper into this, consider the following: (1) The movement of the particle is about momentum and about imparting energy onto a particle -- the particle does not need to carry charge, etc. (2) The 'wave of a particle' deals with the conservation of energy and manifests through the wave's property of nonlocality. (3) The conservation of energy is absolute. Energy is local to the particle and -- on the other side of the transformation -- nonlocal with the wave [happy tunneling in the New Year].

Are there some transformations that are not reversible? Yes, the "celebrated" E = mc^2 is not reversible. The transformation of momentum is reversible but the transformation of mass (matter) is not. (You will need intelligence in addition to energy to make matter.) There is no need to chastise mainstream scientists even though they have not gotten it for one hundred years. Just let them know that whatever ideas they have, they are not coming back. Finally, if you like this topic, give the QM primer a read.

DSSP Topics for December '05

Say hello to Balmer, just another cool math guy. The King's Chamber in the Great Pyramid beckons to Balmer.

In the last quarter of the 19th century, Johann Balmer (who taught math in a girls' school) came across the experimental findings that hydrogen radiates light of only certain frequencies. The quantum aspect of the radiated wavelengths intrigued all. Many math guys set out to come up with a formula -- we can say in the tradition of Kepler -- to match some equation, any equation, with the observations. Balmer succeeded by making the wavelength lambda = b·m^2/(m^2 - n^2). Number b is a constant (it could be called Balmer's constant) and contains the Planck constant h, which was not proposed for another fifteen years. Numbers m and n are integers, while m is always greater than n. The observed wavelengths matched the formula, and Balmer also successfully used the formula to predict other hydrogen wavelengths that were unobserved at the time. Balmer thus tied natural numbers (positive integers) to the atom, and Pythagoras suddenly became relevant on the atomic scale. Moreover, the Pythagorean theorem should by now jump right out at you, just as soon as you see the m^2 - n^2 term. This term is about two lengths of the right angle triangle. (Bohr's formulation includes the Planck constant but the geometric relations are not apparent.) Balmer's equation is in the simplest and visually powerful format.

North indicated by arrow. Why are vertical distances dashed?

It is now becoming apparent what you need to do when you want to work the atomic orbitals. One ingredient is missing, but the King's Chamber is but one component of the Great Pyramid. Among the many questions: Is the 'squaring of the circle' about building orbitals from linear, or straight, components? If so, what's the big idea? When we mutilated the Great Pyramid, did it show that the creation could never be as intelligent as the Maker? Is the operational pyramid existential? Can we, could we, would we, shape the stone without hammer or heat?
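You can run Balmer's formula yourself. A sketch of ours, with n = 2 for the visible series and b taken as the usual empirical value of about 364.6 nanometers:

# Balmer's formula: lambda = b * m^2 / (m^2 - n^2)
B = 364.6   # Balmer's constant, in nanometers (~364.6 nm empirically)
n = 2

for m in (3, 4, 5, 6):
    lam = B * m**2 / (m**2 - n**2)
    print(m, round(lam, 1), "nm")
# 3 -> 656.3 nm (red), 4 -> 486.1 nm, 5 -> 434.0 nm, 6 -> 410.2 nm:
# the observed hydrogen lines, tied to nothing but the whole numbers m and n.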
DSSP Topics for November '05

Lightspeed is absolute. A photon of light leaves its source and propagates at 'c' in the direction the source is pointing. The velocity of the light source cannot slow or speed up light. The velocity of the light source cannot give light a sideways speed component.

The Michelson-Morley experiment has shown conclusively that a round trip of a photon is of the same duration regardless of the direction the photon is launched and regardless of the velocity (speed and direction) of the launching platform. Presently, however, the prevailing -- that is, mainstream -- understanding is that the sideways velocity component of the launching platform is added to lightspeed. It is asserted that light's forward velocity is c and constant, but if the launching platform (the source of light) has an additional, say downward, speed then such sideways speed is added to that of lightspeed. Not so.

Agreed: The launching platform (light source) does not and cannot add its own forward or backward speed to that of lightspeed. In other words, a light source such as a flashlight or a laser moving forward or backward does not affect lightspeed, and light coming out of the laser is always, and for all observers, propagating at c. However, and incorrectly, the light source "is allowed to" add its own speed in a direction other than the light's direction. The flashlight, then, does not add its forward or backward velocity component but "is allowed to" add its sideways component to that of light. There is no good reason why the velocity of the light source should not be additive to light in the parallel direction, yet be additive in another direction.

Furthermore, the Michelson-Morley experiment directly proves that the earth's velocity -- introduced as either the forward or the sideways component -- is not and cannot be added to lightspeed. The Michelson-Morley apparatus as a whole was rotated to release light parallel with, or perpendicular to, the earth's velocity, but no change in light's arrival was detected. Regardless of the rotation of the apparatus, light's round trip delay was always the same. The Michelson-Morley apparatus was purposely designed with enough accuracy to measure the change in light's arrival if the earth's velocity were added in any which way to lightspeed -- and the expectation of the experimenters was that they would be able to measure the delay in arrival. The earth's velocity, however, was not and cannot be added to lightspeed in either the parallel or the perpendicular direction (or anywhere in between).

Lightspeed depends on medium (not its launching speed)

In the illustration above, two candidate paths of light are traced while the light source is moving down -- that is, perpendicular to light's path (and along with the earth's velocity). Trace AB is the actual trace because the Michelson-Morley apparatus did not measure any delay in light's arrival. The physics theories of Einstein ignore the Michelson-Morley experimental results and claim that light takes the AC path. Not so. Light always takes the AB path, which is the path in the direction the light source is pointing at the instant of light's release. Lightspeed is absolute and independent of the speed or direction of the light source that releases the light. Space, and whatever 'space' is, cannot and does not treat light differently as a function of direction.
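For scale, here is the arrival-time signature the apparatus was built to resolve if the earth's velocity did add to lightspeed -- a sketch of ours using the standard textbook numbers for the 1887 instrument:

# Expected (and never observed) signature under the velocity-addition assumption.
C = 3.0e8      # lightspeed, m/s
V = 3.0e4      # earth's orbital speed, m/s
L = 11.0       # effective arm length of the 1887 apparatus, m
LAM = 5.5e-7   # wavelength of the light used, m

beta2 = (V / C) ** 2
dt = (L / C) * beta2                 # first-order round-trip time difference
fringe_shift = 2 * L * beta2 / LAM   # shift expected on rotating the apparatus

print(dt)              # ~3.7e-16 s
print(fringe_shift)    # ~0.4 fringe -- well within the instrument's resolution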
Note that, as the sideways light source velocity keeps on increasing, the detector will at some point miss the returning light (photon) -- and this is yet another way of confirming that the velocity of light and its source is not additive in magnitude or direction.

Finally, Einstein's general theory of relativity may be a hopeless artificial theory, but the idea of the absolute lightspeed lends itself to building the absolute clock. That is, the clock as a whole may be traveling fast or slow, but because its timing component is built on lightspeed (rather than on unstable and decaying matter), the clock's ticks will be absolute. We have the design for the absolute clock. The absolute clock then also resolves "the issue" of simultaneity, for observers can use the absolute clock to synch their watches -- absolutely.

NASA projects whose primary mission can be directly invalidated through the Michelson-Morley experiment are gravitation waves, gravitational lensing, and black holes. Other NASA projects would also change their focus because the present theories would no longer be in line with, and supportive of, the project, i.e., dark matter, dark energy. Solar sailing efforts are based on the nonexistent light pressure, which is addressed in the topics about light. The "Space Elevator" spending may be relatively low, but what is high is NASA's travesty of High School physics that is sending a corrupting message to our kids. See below.

DSSP Topics for October '05

"Space Elevator" is NASA's example of not understanding basic physics and math.

NASA has no leadership -- both managerially and scientifically. This is most likely due to NASA being a US monopoly. In the US, NASA leads in mattress materials and that's about it. On the world scene, however, NASA deficiencies begin to show, for no longer can NASA dictate the space spending of other countries including Japan, Russia, the European Union, China, India, and possibly Canada. Solar sailing is an example of unworkable and PR-driven technology, and any country that has validated the fact that light (photons) cannot put pressure on a mirror can laugh out loud at NASA stupidity. This month we take on NASA's "Space Elevator" once again, for this structure will never work as advertised and will always require active rocket propellant with each and every payload. This fact is easy to establish with High School or introductory college math. It is about the angular momentum that is conserved, and no payload can be hauled into an orbit without giving it additional angular momentum.

Space elevator is neither

As the payload is being raised from r1 to r2, it begins to trail because its orbital velocity is decreasing with increasing elevation. This puts a sideways pull on the rope, which then pulls the geostationary "anchor weight" from its position. The end effect is that the rope will break or the "anchor weight" will be pulled down to earth -- there is no in-between. This effect is similar to the breaking of the payload tether in the NASA space shuttle experiment a few years back, because the angular momentum of the shuttle was not changing but the angular momentum of the payload was. Fundamentally, linear and angular momentum can be exchanged only through transformation. Classical mechanics deals with it by separating the linear and the angular momentum -- that is, the law of the conservation of momentum applies individually and separately to the linear motion and to the angular (orbit, orbital, spin) motion.
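A back-of-envelope check in Python (standard values for the earth's rotation rate, surface radius, and the geostationary radius):

import math

# Specific angular momentum (per kilogram) of a mass co-rotating with the
# earth: L = omega * r^2.  Hoisting a payload from the surface to the
# geostationary radius multiplies its angular momentum roughly 44-fold, and
# that difference has to be supplied from somewhere.
OMEGA = 2 * math.pi / 86164.0   # earth's sidereal rotation rate, rad/s
R_SURFACE = 6.378e6             # m
R_GEO = 4.216e7                 # m

L1 = OMEGA * R_SURFACE ** 2
L2 = OMEGA * R_GEO ** 2
print(L1, L2, L2 / L1)          # ~2.97e9, ~1.30e11, ratio ~44
# Per kilogram hoisted, ~1.27e11 m^2/s of angular momentum must be supplied;
# absent a sideways-firing rocket, it can only come through the tether --
# that is, out of the "anchor weight."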
As for recommendations, there are none. NASA is so ignorant of its own lack of merit that there will be no changes forthcoming as a result of our posts. NASA has a closed shop mentality and their contracting practices are not only unfair but outright insulting. In time, however, it will be easier to split up or eliminate NASA because of our posts.

Note {Oct. 6 and 22, 05}: When an object is moving in a straight line and comes into proximity of another (larger) body, it begins to orbit it. Linear momentum has changed to angular momentum. So, how come a payload being hoisted straight up needs rocketry to get angular momentum? Without the sideways-firing rocket, the payload being hoisted up the "Space Elevator" acquires its angular momentum by taking it away from the Elevator's angular momentum -- and that's the only way of conserving momentum. The payload being hoisted up can be visualized as follows: by moving (walking) from the inside to the outside of a merry-go-round, the angular velocity (rotation) of the merry-go-round will decrease. This is okay for the merry-go-round, but the "anchor weight" will tumble down or, if the tether breaks, the "anchor weight" will move into a lower orbit that is no longer geostationary.

Note#2 {Jan 6, 06}: The latest NASA spin on the space elevator is to put the anchor weight beyond the geostationary orbit and thus keep the tether tight. This appears to muddle the discussion, for tightness is supposed to invoke the image of 'rigid.' A tight tether is not rigid to torque, however, and the first load that is being hoisted up will take the angular momentum from the "anchor" weight just as before. The not so amazing thing about this NASA in-your-face fraud-in-progress is that they do not offer a computer simulation. This is simply a 'toe the line' political project that will go on for as long as the funding holds and regardless of the technological corruption this endeavor carries in its wake. It is also likely that the "damn the physics" approach will negatively permeate other NASA projects, for technical merits are taking second seat to PR or the power to lie. Politically, NASA does not want this project to go to completion because it will not work anyway, and we should expect "technical problems" or "budget priorities" several years from now as the explanation for the cancellation. Makes you wonder if the shuttle was not flight worthy and NASA knew it.

Note#3 {Jan 1, 07}: NASA is mouthing away trying to keep this project in the budgets. The new year spin of 2007 is that NASA agrees that the angular momentum needs to be supplied to the load but that "the earth will supply it." This goes way beyond 'duh' and even beyond the tower of Babel. The tether is not a rigid rod and the torque will not be delivered to the load. When going up the Eiffel Tower (or the Tower of Babel) the tower will supply the angular momentum to the climber, but only because the tower is a rigid structure. Names of NASA officials and NASA scientists should be taken down for likely criminal prosecution -- that is our latest and best recommendation.

DSSP Topics for September '05

Time is always a dependent variable. Pythagorean says: Time is a derivative. Buddhist says: Time is obtained by convention. Hollywood says: Time machine, yo baby!

Time keeping and record keeping have been with us so long we take time as something that exists, as if time were a variable having its own existence. Planetary position can be forecast by time alone, but the value of stocks or the parameters of weather cannot.
Scientists stay away from relationships that cannot use time for prediction and, for better or for worse, developed only those elements for which time-dependent equations exist. The scientist cannot use time going backward and calculate the value of a stock one month ago, but scientists do not mind telling those who listen that they can go back billions of years and arrive at "the beginning of time." The scientist cannot go back in time and validate, among millions of things, where a person was a week ago, how much change he or she had in their pocket yesterday, and what weather parameters we had in Boston the first of August. Without the actual minute-by-minute record, the only way the scientist can compute where a runner was during the race is to confine the poor runner to a particular track running at a particular constant speed. The runner cannot speed up or slow down because the scientist would not be able to reconstruct the position of the runner on the track.

Your clock is as good as the system that derives it

Each and every clock has a system behind it that derives the time. The calendar time comes from planetary motion. The "atomic" clock comes from unstable matter that decays at a certain rate, but such a rate varies as a function of temperature, pressure, and -- perhaps not surprisingly -- velocity. A light or laser clock can derive its timing from the constant velocity of light, and its time is then not dependent on velocity. Regardless, there is no time as a standalone variable, because different systems produce their own time. When a system from which time issues is changed, the time follows. It is not possible to manipulate time by itself and, for example, change the orbit of a planet. That is what it means that time is always a derived variable, where the always part means that the dependency is not reversible and time can never become an independent variable. Actually, you can try an experiment on yourself: slow down your watch and put it in your pocket -- perhaps you will live longer. Time, then, will accept the strengths and limitations of the system from which it is derived, including being the absolute time if derived from light's constant velocity.

Variables are usually reversible as to dependency. Time is not

Scientists, of course, are free to fool each other. Scientists can take decaying matter and put their money on that, but everybody else understands that the derivation of a particular variable may not be reversible. An object casts a shadow which may indicate the position of the object, but it is not possible to move the shadow by itself and thus cause the object to move. There exists a one-way causal relationship between the object and its shadow, in the same way there is a causal relationship between a system and the time that is issuing from such a system. This particular relationship between the system and its time (or the object and its shadow) is not reversible.

Variable's reversibility is fundamentally tied to transformations

The next part is about the reversibility of mass and energy. Scientists would like to think mass and energy are reversible, but they can only talk about it with their hands in their pockets because nobody has succeeded in taking some photons and making mass out of them. The scientist, not unlike the five year old, can break matter into matter and antimatter, but they have no clue, and presently do not care, how to make real matter, and perhaps new matter, and possibly even different matter.
DSSP Topics for August '05

When light shifts toward red or blue, it confirms the change in speed in absolute terms
'Refraction Shift' is more descriptive than 'Light Shift'
Photon's energy remains the same
All of the above

An object emitting a certain color may be at some spot in the universe, but if the detector moves toward the object, the color of the object will shift toward blue. There is symmetry in this phenomenon: if the detector is moving away from the object, the object's light will shift toward red. The object's color, however, does not and did not change. Fundamentally, your eye will register the identical color regardless of the speed at which you are moving toward (or away from) the object. Physically, the 'light shift' refers to the increased refraction of light as light enters the prism of the detector. The color -- that is, the inherent energy of light -- does not change, but light enters the incoming prism faster and refracts along a path that is usually a path of a different color. With your eyes as your detector, the colors will remain the same but will become slightly out of focus.

[Figure: Refraction changes -- but not the wavelength]

When the distance between the light source and the detector remains the same, no refraction shift (light shift) could ever be detected. Therefore, the absolute calibration can be achieved in the laboratory, and a prism-based detector can map any and all energies (colors) of light in absolute terms. Refraction shift is a great mechanism that is worth exploring and exploiting. The next step is to pick the best colors available from the cosmos. Balmer's hydrogen lines have much potential. For example, if the refraction shift changes with the seasons but always returns to the same value, then a solid conclusion can be reached that the interplanetary hydrogen is stationary, and absolutely so. Now Earth's velocity can be calculated in absolute terms and, therefore, any other object's speed could be known in absolute terms as well.

Refraction shift is a fairly complex phenomenon that calls for differentiation between energy and light's property of refraction. Yet the information presented this month is adequate to take on any and all warp brains that could be out there. For example, you could make a successful case that the energy of the photon is absolute because, regardless of the speed of the source or the detector, the photon's energy remains the same.

{Aug 2, 2005} Another way of appreciating the differentiation between the refractive attribute of light and the energy of light is by taking prisms with different indexes of refraction. Each prism will refract the same light differently, but the energy of the light going through any of the prisms and leaving the prism will remain unchanged. The prism separates photons based on the intrinsic and absolute energy a photon has, even though the separation distances between colors may be different.

DSSP Topics for July '05

Relativistic presumption is presumptuous.
Energy of a body stays with the body, but the change in reference "takes it" away and "gives it" to another body: you cannot take and give energy through reference alone, and all energy references are and must be absolute

The relativistic presumption has been with us for 100 years. It was built from the everyday observation that a watermelon being struck by a car suffers just as much damage as when the car is parked and the watermelon is thrown at it. People calling themselves scientists extended this simple observation to the entire universe.
This happened even though Newton could not appreciate imparting energy to an object and then removing that energy at the whim of shifting the frame of reference -- shifting the reference disregards energy conservation, because energy was indeed imparted onto a particular body and not onto another body. Scientists were able to go mainstream with the relativistic mindset by saying "it's the same thing," and made the relativistic presumption into the relativistic postulate. It may be time to look at the relativistic presumption again, because the dual slit experiment proves that the frame of reference placement is not arbitrary. In other words, the energy of every individual body must be respected in absolute terms.

[Figure: Dual slit experiment not commutative]

Every moving particle has a wavelength that is commensurate with the moving energy (momentum) it was given. Greater momentum gives the particle more moving energy and a shorter wavelength. Indeed, the ability to mathematically calculate the superposition (interference) pattern is one of the fundamental victories of quantum mechanics, because it is the wavelength that is used in the calculations. A moving electron has momentum (velocity), and this translates, through the Planck constant, into wavelength. See Louis de Broglie for details.

In the dual slit experiment the particle is released from its source at zero velocity. The particle is then sped up with a particular amount of energy and acquires velocity v_e -- and that is why the frame of reference must be at a spot that recognizes the velocity v_e the particle acquired. If, for example, the frame of reference is shifted to the electron, v_e becomes zero and the particle then has no associated wavelength (the de Broglie relation has nothing to work with at zero velocity). The particle, after the frame of reference change, has no wavelength because it has zero velocity, and the mathematical result does not match what is being observed. Moving the frame of reference corrupts reality.

• From the dual slit experiment it is apparent that each particle conserves the energy it was given. Energy translates (transforms) into a wave that can be measured when the particle goes through the dual slit. Therefore [all answers are loaded]:
• Newton says: "Yeah, I can find out if somebody threw that watermelon. There is a causal aspect that comes with the absolute frame of reference."
• Einstein lives on as the shining example of pathological science
• If energy and a particle form a continuum (a moving energy continuum that is momentum), then no reference can take the energy away from the continuum
• If the electron is not accelerated but the dual slit is, then the superposition (interference) pattern cannot form
• If the energy stays with the particle, then the de Broglie wave stays with the particle.
• If the wave stays with the particle, then ether must negotiate the conservation of energy
• Okay, what if I can add to the wave but without pushing the particle
• Okay, what if I can subtract from the wave..
• Okay, what if..

DSSP Topics for June '05

The first test of mathematical soundness: Is the solution tractable?

The math guys make two things quite clear. The first is that any math is good math. The second is that the math operators we use presently, such as addition and multiplication, are complete and adequate. The first one is so-so; the second is not so. What complicates things a bit is that some operators, such as the square root, are tractable in geometry but intractable in arithmetic.
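A quick way to feel the difference just stated: arithmetically, the square root never finishes, while geometrically the same length is one stroke -- the diagonal of a unit square. A minimal sketch (using Python's standard decimal module):

```python
from decimal import Decimal, getcontext

# Arithmetic never finishes: every added digit of sqrt(2) calls for another.
# Geometrically, the same length is executed in one stroke -- the diagonal
# of a unit square.
for digits in (10, 20, 40):
    getcontext().prec = digits
    print(Decimal(2).sqrt())   # each answer is a truncation, never an end
```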
The best example of mathematical intractability pretending to be a solution arises with the dual slit experiment performed with electrons. The path each electron takes is modeled by some as a "sum over all possible paths," or a "sum over all possible histories." Mathematically, however, this model leads straight to an intractability that is analogous to the traveling salesman problem. Unfortunately, intractable solutions continue to be proposed even though no computer can execute such solutions in real time.

[Figure: Electron becomes nonlocal]

One 'left brain' model that describes the dual slit experiment with electrons is based on the presumption that the electron cannot be split. (If the electron cannot be split with a hammer, it cannot be split -- that's the rationale.) This, in turn, ushers in the intractable model of the traveling salesman problem, because the experimental results call for the electron to also go through both slits, and then the electron's path needs to be tracked in every possible way.

• The dual slit experiment with electrons could be mathematically modeled as tractable or intractable because (pick what you like):
• It gives us a sense of power that we can see nature as being confused -- tractability has nothing to do with it
• It gives professors importance in being able to show that something is intractable and, even though they cannot apply it, they continue to make theories with it
• Each and every intractable solution is not a solution, because all experiments reflect nature that is demonstrably, and therefore inherently, tractable
• It may not work with a hammer but did you try honey, dear?
• If the electron becomes a virtual electron in the atom, it could become virtual outside the atom. Look at the electron as a wave or go ask Schrödinger, because the wave solution is tractable

Note: Any and all intractable solutions are not real or practical solutions. Intractability is not an approach nature takes, and an alternate model/solution should be sought. Feynman's explanation of the dual slit experiment is not worth the work. Neither is Dedekind's "proof" that irrational numbers are real numbers, because he relies on intractable methods taking an infinite amount of time. In fact, Dedekind's proof never ends (and it is still going on). Adopting a tractable-only baseline also means that you cannot use the parameter of distance when going from one solar system to another -- the spatial distance parameter becomes intractable because it takes an unbounded amount of time to traverse cosmic distances, and "you cannot get there from here" by using the parameter of distance. [For the projection operator, think of the dependent parameter idistance after transformation.]

DSSP Topics for May '05

You cannot cut a point in two, and only geometry lets you figure it out
Electron takes advantage of the truth
What math is and is not

You can cut the area of a circle exactly in many different ways. You could cut the circle into halves, thirds, quarters, fifths, sixths, or any multiples thereof, such as twelfths. No problem. Yet every time you do so, you cannot include the center point in the cut. You could invoke Euclid and say that Euclid said, "A point is that which has no parts," but that's the easy way out, because you argue on the basis of a definition rather than on the basis of merit. [If you are a bureaucrat then you are right and you need not go any further.] You could say that an area is an area and a point is not an area because a point is zero-dimensional. That's much better.
The Pythagoreans, through the Tetractys, differentiate between zero-, one-, two-, and three-dimensional entities.

Knowing what physical parameter the area is helps

[Figure]

The best way of looking at cutting the circle is that if the center point were included in the division, then the point would belong to each and every part of the cut. This would allow the point to be separated, and then the point would no longer be zero-dimensional, which goes against the definition of a point. If you succeed in cutting a point, then you cut something else (and maybe created polarity), but you did not cut a point. A point, then, truly cannot be cut.

Some may think we are going back and forth "splitting hairs," wallowing in the theoretical this-or-that mumbo jumbo that any self-respecting samurai would scorn. ("Off with their heads!" shouts the Queen.) When an electron becomes constrained by measurement or by physical structures, it instantly reduces -- into a point. As a point, the electron cannot be subject to division by any force. Nobody has succeeded in cutting the electron (or proton) into pieces. The best part of this month's topic is that only geometric reasoning will arrive at the indivisibility of a point. Only geometry can make the link between a reducing (collapsing) electron and its indivisible foundation. Arithmetic can be applied to prove some things, but only after the geometric percept establishes the indivisibility of a point. Shake hands with a Pythagorean.

(1) The underlying foundation of reality may be mathematical, but that does not make mathematics a panacea. Mathematics is but an umbrella term for many diverse pursuits, and math can be corrupted just as any thinking process can be. There is a way of working numbers rationally, irrationally, geometrically, under different degrees of freedom, with and without symmetry. Such workings are separate and interactive, transformative (reversible or irreversible), logical, physical, symbolic, and mythical (numerological and magical). Each category has unique advantages, and that is why each category exists. You can possibly lose your head in more ways than one. Enjoy.

(2) {Feb, 2006} Metaphysically, the exclusion of the center point can be worked with Isis and her putting Osiris together: she could not find his centerpiece and so she fashioned one.

DSSP Topics for April '05

Memory makes you smarter
Empower the observer and leave relativity in the dust for black hole enthusiasts to kick around
You can see the promise and get there, too

The illustration below is used by science writers to "prove" that the observer C cannot determine the precedence of events. If an event happens simultaneously at A and B, and if the observer C is on a moving train, then, they say, C cannot recognize the event as a simultaneous event. The argument is that, while lightspeed is constant (that is, absolute), C is moving toward the message sent by B and receives it from B before he receives it from A -- and there is "no way" of figuring out when the events really happened.

[Figure: Freebie illustration for those who can]

The problem of simultaneity is solvable in several different ways. Conductor C could know or figure out how fast the train is going and use a calculator. Even without measuring the train speed, observers A and B can meet conductor C anywhere on the train, synchronize their timepieces, and move to their stations at the ends of the train. When the event happens, conductor C receives messages containing the time stamp of the event's happening.
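For those who like the conductor's side of the argument in code, here is a minimal sketch of the time-stamp method just described (classical kinematics worked in a single frame; all the numbers are illustrative assumptions):

```python
# One frame, classical kinematics; all numbers are illustrative assumptions.
c = 1.0          # lightspeed, in track-lengths per tick
v = 0.5          # train speed, same units
L = 1.0          # distance from each end of the train to conductor C

t_event = 0.0    # the event fires at A (rear) and B (front) at the same time

# C moves toward the signal from B and away from the signal from A,
# so the two closing speeds differ:
arrival_from_B = t_event + L / (c + v)
arrival_from_A = t_event + L / (c - v)
print(arrival_from_B, arrival_from_A)   # unequal arrivals, as the writers say

# But each message carries the pre-synchronized time stamp of the event:
stamp_from_A = t_event
stamp_from_B = t_event
print(stamp_from_A == stamp_from_B)     # True -- C reads simultaneity right off
```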
The very existence of absolute lightspeed allows the construction of an absolute clock that keeps an absolute count during motion by both A and B. After all, the (parallel or anti-parallel) round trip of light is always the same regardless of the train speed. The formal system is saved once again by a regular guy who is the chief conductor and the captain of his ship.

{August 2005} This "problem" can also be stated as the 'problem of synchronizing watches.' As the light-based clock moves from A to C to B, the light-based clock synchronizes all observers to the same and absolute time, thus allowing the time stamp method to resolve this "issue."

To claim relativity theory as proven, the scientist deliberately takes resources away from the observers until they just do not know. So, by taking the boat away, the scientist can prove it is not possible to get across the ocean. Why, you could even discover that intelligence is needed for this problem. You could also figure out that the knowledge of simultaneity and the answer regarding event precedence are inherent in the problem, and it takes a bit of smarts to bring them out. It should not be unusual for the universe to develop concepts such as memory and get to the solution that is contained within the problem. In addition to memory, a solution can be obtained through measurement, computation, and movement. Memory, measurement, computation, and motion could well be the first four tools to begin understanding the universe.

• The differentiation of precedence and the recognition of simultaneity may be easy for some, but what else is there that can be derived from this (cross out the silly ones)
• No time travel -- that is, time travel is not forbidden, it just does not work, because time is always a derived variable (or, as Buddhists would say, time is a variable obtained by convention). A math guy could figure out that a variable that is a derivative cannot be hidden or, better yet, cannot lead (cannot have priority over) other variables. Time, being always a derivative, can also be called a time overlay. [The impossibility of time travel does not invalidate the possibility of superluminal travel -- but that's another topic.]
• Go time travel -- that is, time is recorded in memory and it is then possible to associate past time with past events, just as when reminiscing from old photographs, newspapers, paintings
• Nature manifests addition as superposition
• Nature manifests multiplication as force (always), energy, or transformation (sometimes)
• Nature manifests memory as
  • conservation of energy/momentum
  • closure in
    • angular geometry such as a planetary orbit and/or an atomic orbital
    • linear geometry of standing waves (specific vibrations or vibration mix between nodes)
  • unique structures that reduce unique photons (content addressability) [grand prize of computing, possibly for Pythagoreans]
• Nature manifests movement as
  • inertia/acceleration
  • Doppler shift
• Nature manifests computing as
  • spatial geometric forms (formations) of atoms, solar systems, galaxies
  • multidimensional computing [for Pythagoreans]

Note: Absolute photonic undulation of hydrogen radiation can be measured. The absolute speed can then be measured through the Doppler shift if the interplanetary hydrogen is stationary.

DSSP Topics for March '05

Points exist in a plane but the constructed distance between some of them cannot be exact
Euclid's first amendment: "You can draw but one line between two points but don't ask me how long that line is."
Pythagorean says: "Incommensurable numbers are loaded."
Schauberger says: "Virginia, if you looked in my bag you could figure out I not only look like Santa Claus, I am Santa Claus."
The 'number line' is for underachievers

The incommensurate (irrational) number is usually discussed in the context that "this number cannot be obtained by rationing." A real mathematician may well be complaining about the computer that does division but still cannot portray the world around us. A mathematician, of course, would not be complaining about a number. Irrational numbers are constructed with good ol' integers, but you may as well leave the computer at home. Irrational numbers are constructed only geometrically, in 2D or 3D space. Irrational numbers exist in 1D, but they cannot be created by staying in 1D -- or measured in 1D after they've been constructed with the Pythagorean Theorem.

Pythagoreans like to construct a pentagram, and they like the pentagram because it has a lot of incommensurable golden proportions. Pythagoreans like the golden proportions because .. there is a lot of concurrency and dimension mixing happening inside certain pyramids. Mario Livio in his Golden Ratio categorically states there is no linkage between the Great Pyramid and the golden ratio. He dismisses the connection because there is a 0.1% golden ratio measurement discrepancy with the pyramid, which is by now rough at the edges. The author does not know where to take it if there is a connection, and it is then easier to deny it. Once you become familiar with scientist-speak, you will understand it as, "I have no clue, but if somebody does, I got that 0.1% keeping me warm."

Let's take a look at the (square) root of ten

[Figure: Inexact has benefits]

You construct the exact length of three (horizontal), the exact right angle, and the exact length of one (vertical) -- and get two points in space. However, the diagonal (straight, 1D) line between these two points is not exact. Actually, the length of a straight line spanning these two points can never be exact. We can have a line that is exact and shorter -- or exact and longer -- than the root of ten, but there exists no solid line that is exactly the length of the root of ten. There now exists a gap that cannot be closed if you wish to construct the distance exactly. The right angle construction is exact, but the straight line (diagonal) construction is not exact. You (sorted from best to worst):

• Write a very unkind letter to Dedekind
• Take advantage of the truth
• Write a letter to Noether and tell her that space symmetry and space homogeneity have real problems, because there are 'holes in space' and 'distance gaps,' and some spatial directions, angles, and distances are inherently not deterministic. After all, the conservation of energy is exact, not just "close enough for government work"
• Connect the two points with a solid line and
  • muse about it
  • forget about it
• Connect the two points with a dashed line and
  • reflect on it
  • build a gallery over it
• Keep away from all irrational numbers and construct only the 'clean and rational' triangles the likes of 3, 4, 5
• Keep squeezing the gap until you get to zero at infinity and declare irrational numbers with infinite mantissa a subset of real numbers [caution: those who took this path are still squeezing the gap]
• Differentiate between constructible (two points exist) and executable (length and angle implementation is exact)
• As every Pythagorean knows, what works in 0D may or may not work in 1D or 2D.
Because you cannot divide the area of a circle into three exact parts unless you exclude the center point, the reverse is also true: what works for 2D does not necessarily work for 0D. (Euclid said a long time ago that a point is not divisible, but he said it as a definition.)

(1) It is not possible to place the root of ten (or any irrational number) on a number line. The line you think is root-of-ten long is not exact and can never be exact. You cannot mark a line at a spot that has an infinite number of decimal places, just as you cannot store an infinite-mantissa number on your computer. A real number is unbounded but finite, and once you truncate the irrational number you'll get a real number -- but this number is no longer an irrational number. Irrational numbers are irreducible because the conversion of an irrational number into a real number is not reversible. [You would not catch a Pythagorean reducing incommensurable numbers into real numbers.] The idea, then, is to forget about the number line and think of some nifty applications for irrational numbers. If you are a hard-core reductionist and must keep on reducing, that's fine -- the black hole beckons..

(2) There is a way of thinking about the irrationals through the number Pi. Pi is transcendental and is composed of an infinite series of components. Pivoting around a fixed point and pointing in all possible directions cannot always result in the exact angle. If it did, Pi could be an exact number as well. This is more fundamental than you might think.

(3) The root of two is the most mysterious of all, because the diagonal construction angle of 45 degrees is executable while the length of the diagonal is constructible but not executable. {Dec 1, 2009}: Even though the diagonal is not executable, the diagonals of any and all squares can be placed on the number line -- but only as two points of a distance in space. This gets interesting in the Great Pyramid, since the SQRT(2) diagonals can overlay the centerline in the Grand Gallery. (A transcendental such as Pi cannot be put on a number line even as two points.)

Notes {3/31/05}

(1) Euclid's axiom stating that there exists but one line that can be drawn between two points is not incorrect. Such an axiom, however, is also incomplete, because it makes no statement regarding the length of the segment between the two points -- some segment lengths are rational and have a finite mantissa, while others are irrational and have an infinite mantissa. The irrational segments, then, cannot be drawn without truncating the mantissa. Some mathematicians define irrational numbers as real numbers, but there is no rhyme or reason for doing so. You can define anything any way you want, but the fact remains that irrational numbers are always infinite in mantissa while real numbers are always finite. The idea is that there are certain advantages to keeping the infinite mantissa with irrational numbers, and if you think of irrational numbers as real numbers you will never figure it out.

(2) There are some funny looking symbols that nevertheless make sense in their own way. One of these is the 'Tibetan Master' symbol, which is shown in one book on Reiki healing and is relevant to this month's topic.

[Figure: It's coming to you]

[This particular irrational number application is likely a significant East-West divide, which will be bridged with one addition to Euclid's axioms. This is the second enhancement. The first is the understanding and inclusion of Dantien (Hara) from the September 2004 Exercise.]
This particular property is discussed in the Quantum Pythagoreans book.

DSSP Topics for February '05

Some mirrors give some people a splitting headache
Religious Photon: "Once I was split but now I am found"
James Bond Photon: "Parted, not split"

The illustration below shows that in the A1&B1 path all photons are always detected, while in the A2&B2 (dashed) path the photons are never detected. Photons are best visualized as paper-thin wave crests -- that is, as two-dimensional waves that, nonetheless, can rotate about the axis of propagation while maintaining their flatness. You may have heard that the reflecting photon rotates 180 degrees and becomes out-of-phase. But if you apply that in the illustration, something is not right, because both paths (A1&B1 and A2&B2) would have photons propagating in-phase (together) and, therefore, photons should be detected in both paths. Actually, the reflected photon rotates 90 degrees CCW during splitter reflection but rotates a full 180 degrees during fully coated mirror (regular mirror) reflections.

The incoming photon P approaches the half-silvered mirror (splitter) with, say, Up polarity. You define which way is Up. When the photon rotates 90 degrees CCW from Up, the result is Left polarity, while a 180 degree rotation results in Down polarity. During transmission there is no rotation and no change in polarity.

[Figure: Photon rotation = polarization]

The reflected photon rotates 90 degrees CCW during splitter reflection but rotates 180 degrees during fully coated mirror reflections. The rotation difference leads to this month's exercise.

• Photon "splits" at the half-silvered mirror. Its reflected component (branch) rotates 90° CCW because (check all that make sense):
  • The reflecting photon thinks it has the perfect impedance match. Between 0° for transmission and 180° for perfect reflection is 90°!
  • It's the same thing as perfect optical coating, and the angle of incidence (45°) does not matter. The photon rotates 90° regardless of the angle of incidence
  • A polarizing filter can make the rotation any number of degrees. The splitter is designed for 90°
  • Who cares. Just tell me how many degrees. Besides, why do I have to use my left hand if the world was created for the right-handers
• Photon (or photon branch) rotates by 180° when it is reflected by a regular mirror. This is because:
  • The photon is an even function and multiplying by -1 (or twice by i) results in the reflection about the axis: f(x) = f(-x) [Tricky, leave for last. Think what axis we are talking about.]
  • The reflecting photon sees infinite impedance. It must rebound 180° out of phase (Up becomes Down, Left becomes Right, and vice versa)
  • When I look in the mirror, right becomes left but top stays on top. Something's fishy. [Not so. Think how the direction of propagation does or does not affect the Up/Down and Left/Right definition.]
  • This is all about snakes and snakes are creepy. And snacks make me fat, and ..

Even though the photon can be detected in but one path, the photon is really not recombined. For every photon, 25% + 25% of its wavefunction continues on the A2&B2 path, but since the polarities on this path are always out of phase, the photon will never be detected there. The photon is still parted in both branches (1 and 2) and, unless the photon is reduced, it will stay parted that way, forever. There is still one question you may want to answer: What is the wavefunction amplitude in both branches? [Advanced. Think about the difference between superposition and self-superposition.
Photon polarity by itself does not affect the probability of detection/reduction.]

DSSP Topics for January '05

Say hello to the extra moon

A quick check on Mars' moons shows these two (Phobos & Deimos) are two octaves apart: their measured orbital ratio is 0.253, which is pretty close to the ideal 0.250 (or ¼) for a two octave difference. A quick check on Jupiter's moons shows one and two octave separations: Jupiter's Io and Ganymede periods are measured at 0.247, which is close to the perfect two octave separation of 0.250. Io and Europa have a single octave separation, which is measured at a 0.498 orbital period ratio (close to 0.500, or ½, for a single octave).

Last month I was lowering Newton's contribution in proving Kepler's equations. I've been reminded that the easy part applies only in cases of circular, or idealized, orbits (Zebrowski, A History of the Circle). For Newton to prove that the solution holds for the elliptical orbit, he needed and developed the concepts of calculus (fluxions, infinitesimals), which took a good part of a decade and a priority tangle with Leibniz.

New Orbit for the Extra Moon

The best orbit ratio appears to be ¼ (two octaves). This ratio is found with the moons of Mars as well as with Jupiter's moons. A moon in the inner orbit (closer to Earth) will be the best, as it could be used for staging. The new moon should be about 1/10th the mass of the present moon if the new moon were to have but a moderate influence on tides. (This may continue to be an issue.)

Applying Newton's enhanced Kepler equation:

P1²/P2² = (M+m2)/(M+m1) · R1³/R2³

P1, P2 .. periods of moon1 and moon2
m1, m2 .. masses of moon1 (current) and moon2 (new). M is Earth's mass; R1 is moon1's average distance of 384,467 km. The mass factor (M+m2)/(M+m1) is very close to one, since the present moon's mass is only about 1.2 percent of the Earth's mass.

P1 = 4·P2 (2 octaves). The new moon's period is 4 times shorter than the present moon's (the new moon will have about a 7 day orbit period). Substituting numerical values:

16 = R1³/R2³
R2³ = R1³/16
R2 = R1/(2·2^(1/3)) = 384,467/2.52 = 152,566 kilometers

The average distance to the new moon would be 153 thousand kilometers, or about 40% of the existing moon's distance. The new moon will exert a gravitational force on Earth that is about 64% of the existing moon's force (use Newton's force equation), and tides would change significantly. Another way of mitigating the rise in the moons' tides is to put the new moon in an outer orbit using the same, 2 octave separation. In that case the tide increase due to the second moon would be negligible, but the second moon would not be of much use.

Note: The influence on earthquakes may be difficult to assess. Assuming the moon does affect earthquakes to begin with, the inner orbit moon can have a calming influence because it averages the existing moon's forces. On the other hand, at times the forces would be additive and earthquakes could increase. {2/28/05} Earthquake analysis needs to consider both force and duration. The increase in force due to the combined influence of both moons is of some duration. The decrease in force due to the subtracting influence is comparatively much longer in duration. Overall, it is most likely that earthquakes would decrease with the presence of the second, and particularly the inner, moon. Constructing the outer moon to mitigate tides would make it more distant, but only under the presumption of applying the current (propulsion, inertial) transport technologies.
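The arithmetic above is short enough to check by machine. A minimal sketch (the mass factor is taken as one, as in the text; the small difference from the quoted 152,566 km comes from the text's rounding of the divisor to 2.52):

```python
# Checking the extra-moon numbers; the mass factor (M+m2)/(M+m1) is taken
# as 1, as in the text, since each moon is a tiny fraction of Earth's mass.
R1 = 384467.0                 # present moon's average distance, km
period_ratio = 4.0            # P1 = 4*P2, i.e. two octaves

# Kepler: P^2 ~ R^3  ->  R2 = R1 / (P1/P2)^(2/3)
R2 = R1 / period_ratio ** (2.0 / 3.0)
print(round(R2))              # ~152,578 km (the text's 2.52 divisor gives 152,566)

# Newton's inverse-square force for a new moon of 1/10th the present mass:
force_ratio = 0.1 / (R2 / R1) ** 2
print(round(force_ratio, 2))  # ~0.63, the "about 64%" quoted above
```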
DSSP Topics for December '04

In search of planetary musical notes,
Kepler is cool,
Pythagoras makes it true and pleasant, and
Most planets (all terrestrial) stay within two octaves

Pythagoras claimed planets make sounds. During the Renaissance this became the music of the heavenly spheres, and each planet got a note. This month it is about the orbits that are indeed musical.

Kepler is the coolest guy from a simple perspective. After 2000 years of geometry, Kepler comes up with equations that allow the actual mathematical prediction of a planet's position -- you do not need lookup tables, you want a PC. Kepler coined the word satellite (and focus as well). Kepler's equation is used to get orbit times for any satellite or planet. Newton later derived Kepler's equation from a more fundamental relationship. But as any math guy will attest, once you know you are dealing with the right answer, the individual components are easy to put together. In other words, it is much easier to disassemble and reassemble a puzzle than to put it together from individual and possibly incorrect pieces.

Kepler's result is that any planetary orbit period P has a relationship to the orbit radius R according to

P² = k · R³

where k is a constant. [If you think of the 'doubling the cube' problem, there is Pi in the k.]

Yet there is still something missing, because neither Kepler nor Newton says anything about planetary separation. Technically, we can go right ahead and validate Kepler's equation for all planets, but planet-to-planet interactions are important. It is easy to dismiss interplanetary interactions as "way too weak," but the periodic nature of orbits makes such interactions repetitive. Disturbances would add up, and that is why there is not only a large separation between planets, but also there are particular orbital ratios between planetary orbit times. For example, the Venus-Earth orbit ratio is 8:5. You did not learn about this ratio in a public or private school, in part because your teacher would not know what to make of it, and also because the school committee cannot deal with too many implications. When 8:5 is reduced to 1.6 it becomes just a number, and school committees can deal with that. Unreduced, however, 8:5 is (a) composed of integers; (b) a ratio corresponding to particular notes of the musical octave; and (c) the unreduced ratio remains true to itself while the numerator and the denominator are translating and/or rotating (and geometry is what you need -- a reduced ratio is one number and it's just a point).

A particular planet does not have a particular musical note (as was thought during the Renaissance), but pairs of planets share musical notes, because the orbital ratios match the vibrating ratios of the notes of the octave. We need the musical ratios of Pythagoras because rational numbers (numbers having integers in the numerator and denominator), and notes of the octave in particular, have little or no harmonics. We don't want too many overtones, and we surely don't want irrational numbers, because we want the new orbit to be stable for years to come. So, we can play the orbits just fine, but we need to use only particular tones -- we need to use the notes of the octave for the ratio of Earth's and another planet's orbits.

The Construction

The following table gives the orbital periods calculated with ratios from the musical octave and the actual planetary orbit times (periods). All ratios are with respect to Earth's period:

[Table: columns are Planet (Earth and ..) | Musical ratio | Calculated (Reduced) | Actual (Measured); of the row entries, only the 1/4 (2 octaves) ratio survives in the source.]

You may want to look up the actual notes (do, re, mi.. or C, D, E..) for the above ratios. Two other planets, Neptune and Pluto, orbit in a musical ratio of 3:2 as well. You can also calculate the orbital distance R using the musical ratios and, applying Kepler's equation, compare the musical and the actual results. You want to stay with ratios because you will not need to know the value of the constant k.

Next month we will look at what it takes to build a new, second moon for Earth. We don't need to know k for the planets of Oss (our solar system), but we will need to know the guts of the constant k to do the extra moon -- and so we'll bring Newton along just for that. So far, the 5:3 (la, A) musical ratio for the outer moon seems fine and makes for a unique note, because this ratio is not used by the planets (but it is close to Mars' B).

Happy and musical Pythagorean New Year!

[Image: Pythagoras on a Greek coin (date of coin unstated). Credit: University of St. Andrews, Scotland]

On the coin, Pythagoras:
• Helps out with modeling of a planet or another moon
• Levitates a stone ball
• Keeps warm by creating a ball lightning
• Divinates 2,600 years into the future
• Has a good time posing for the coin maker
• Strikes a note of the heavenly sphere
• Empowered by Apollo, Pythagoras directs the divine spark to get the solar furnace going
• All of the above

{September, 2005} Orbit and musical ratios are also addressed in the Venus, Mars, and Pluto orbits through the construction of multi-pointed stars.

DSSP Topics for November '04

Does the photon always have the same energy regardless of the observer's motion -- that is, is photonic energy absolute?
Planck constant remains a constant while becoming virtual
Photon's wavefunction has positive and negative probability
What if superposition is instantaneous?

If all observers measure the same value, then such a measurement is said to be absolute. As you move toward a photon of light, its frequency (of the undulation of the wavefunction) increases because the wave crests are coming in faster. You also know that frequency is proportional to the energy of light, and so you think that the photon's energy increases as you approach the photon. Not so, amigo. It is true that the frequency increases, but the photon's overall length also becomes shorter. What gives?

Exercise Questions

• An observer moving toward a photon may register an increase in the photon's frequency. If the observer recedes from the photon's source, then the frequency may decrease. So, the photon's frequency is not absolute -- but is the photon's energy absolute?
  • Frequency increases and so the energy of the photon increases. Done deal. Finished
  • When a photon slows down inside glass, for example, its length shortens as in the left picture above, but the photon has not lost or gained any of its energy
  • It's relativistic. Put your faith in the equation and don't you worry about it (or, let me worry about it)
  • When the photon exits the glass it speeds up again, its wavelength lengthens as in the right picture above, but the photon still has the same energy. Use your reason. (Watch for the change in speed, however)
  • If the photon has 10 units of total energy and the observer has 1,000 units of total energy, why would photonic absorption result in more (or less) than 1,010 units of total energy? Why would the sum be different if the photon catches up with the observer who is either coming or going?
The conservation of energy holds and the photonic energy is absolute
  • Make up an answer pretending to be a NASA spokesperson, such as: "All photons in the universe are the same and they come to us as the uniform background radiation. All other photons were taken out through the diligent work of like-minded scientists. If you cannot see any of the scientists here, that's because you cannot tell them apart from the woodwork."
  • A photon, once created, has the same and fixed amount of energy in absolute terms for the life of the photon. The photon can be passed through slits, resulting in self-superposition (self-interference), and its frequencies (shape) may change significantly, but its energy stays the same irrespective of the speed of the source or the observer
• Does the photon as shown above move in the horizontal or the vertical direction?
  • Of course it is horizontal. That's how all textbooks show it
  • If the photon is an even function then it must be symmetrical and, therefore, propagate in the vertical direction while always presenting its even symmetry to the environment
  • Diana's bow has an arrow that shows the axis of symmetry. It is apparent that the photon moves vertically [Heads up to all guys -- Diana's on the hunt.]

Note: If you are ready to apply the Planck relation E = h·f to this month's DSSP topic, consider that Planck did not (have to) use this relation in the moving source-observer context or the light-moving-through-matter context. Planck reached his number constancy (h) conclusion because the experimental results matched his equation. In fact, the match is so good his equation drove the change of the atomic model from 'part-time oscillator' to 'electron radiates when orbitals change.' If the frequency f in the Planck relation is in fact tied to the orbital period (1/f), and it sure looks like it is, then the Planck constant remains a constant. The orbital is a real thing (an eigenstate, if only for a moment) and, since the photon's energy and momentum are both virtual (light cannot push a mirror), the Planck constant is to become a virtual number, or be multiplied by i. Planck also coined the word quantum.

As far as the moving source-observer context is concerned, think of the energy density as being a function of an area consisting of positive and negative segments of a curve, in 2D. Dirichlet is right on when it comes to photons coming from materia(ls). The most interesting thing about the Dirichlet kernel is that it has positive and negative values inherent inside the function. Mathematically it is trigonometry, but the interesting connection to physics is that the photon's wavefunction takes on negative values. (The photon is pure energy and it is nothing but a wavefunction.) This means that the positive probability of a wavefunction alternates and combines with negative probability. Because the positive probability starts off larger at the center, the total probability is never negative -- even if there is a running sum that happens during reduction (absorption). The positive area is the energy that comes from the higher orbital, while the negative area comes from the lower orbital -- that is, the lower orbital's lower energy is subtracted on the fly from the higher orbital's energy. Multiplying by the photon length (or integrating all the way) then yields the total and absolute energy. The sum, of course, will be the same whether you integrate slow or fast. Also, the quantity of photonic undulations (frequency) can change, but the positive and negative areas stay in the same ratio, and then the energy stays the same.
[Light is pretty smart after all.]

Note {11/30/04} The probability of the photonic wavefunction alternates between positive and negative values. The energy at reduction is never negative but is proportional to the square of the net area, which is positive and constant. The photon always reduces as a whole. [Euclid defined a point as that which has no parts. The photon has no parts.] If you work the moving observer context and find there is energy left over, you may have strayed (the photon carries all of its harmonics with itself, and the photon's geometry is [for now] linear..). Dirichlet also worked the polarity in general, which is relevant in the explanation of the existence of the electric charge.

Nature can integrate during instantaneous photon reduction, and this also means that the creation of superposition is instantaneous. Of significance is that the operation of superposition is not the limiting factor in light propagation.

More the wing you spread
Weave the waves ahead
A friendly squeeze and in a breeze
You're there, instead

DSSP Topics for October '04

Gateways between the real and virtual domain are at points where both domains touch
Another way of looking at an independent (or leading) variable: An independent variable is enforceable

You do not need to get involved with Gödel's problems to see how someone could get totally mixed up in a formal system. Taking the output of an inverter and routing it to its input is an allowed combination that has an indeterminate and confusing outcome. The inverter, as the name suggests, inverts the true logic level, say 1, into the false level that is 0, and vice versa. When connected as shown below, the inverter's output cannot make up its mind.

[Figure: Allowed and can be reconciled]

Reconciliation of a conflict arising from formal procedures is through an enhancement in context, which, in turn, facilitates completeness. [Completeness, however, cannot be achieved at the expense of tractability. Should intractability arise you will need to tackle it by engaging the dragon -- but not in this article.] In the case of the inverter connected as shown above, once the frequency response of the inverter is included as a new parameter of the real system, the problem is resolved. Frequency (or vibration) is also to be found in the virtual domain, and some people may see such coincidences as gateways or windows or 'points of contact' between the real and the virtual domain.

A similar but less conflicted gateway happens with Euclid's 5th postulate of his Elements. This postulate says there exists but one line that passes through a point while parallel to a given line. If, however, spatial distance is allowed to bend, another line or lines can snake around the given line in a spiral fashion while being equidistant from ("parallel" to) the given line. The resolution here is somewhat more complex. (To merge/differentiate parallel and equidistant you need a flat/curved plane.) In the real domain, spatial distance is the absolute construct, and distance bending can be disallowed simply because any amount of spatial distance bending can be measured, and only one line is then a straight line. Euclid's 5th postulate is thus correct, because when Euclid calls for a straight line he can have it in absolute terms. In the virtual domain, however, the spatial distance parameter is a dependent parameter. A wave can be a straight standing wave as well as a wave of the distance-curving orbital, and computable solutions exist for both or one or many or none, depending on vibration and geometry.
Nonetheless, Euclid's Elements is valid and complete in the real domain, because a line is defined and enforced as infinitely straight. The gateway in this case is the parameter of spatial distance, which in the quantum mechanical environment (in the virtual domain) becomes dependent on other variables. The nonlocality of the wave in the virtual domain does not allow the parameter of spatial distance to be the independent variable and, therefore, spatial distance is not enforceable in the virtual domain.

The finger. Science writers oftentimes bring up the example of a triangle drawn on a spherical surface, where the internal angles of such a triangle exceed 180 degrees. This is, rather, an example of feeble science writing, because the curvature of the sphere is easily measured. Euclidean geometry is applicable to the real system that is also a formal system that is also an unambiguous system. The sum of internal angles being 180 degrees holds for a flat plane triangle, and the flatness is enforceable. To say that a triangle's angles on a curved sphere exceed 180 degrees has as much intellectual weight as the claim that an overloaded truck's tires may not hold out. The weight of the truck is a real parameter in the real domain and can be enforced in the real domain. If, however, you find the virtual domain to be a significant component of the universe and want to pick organization out of chaos, then welcome to the ride -- it starts at the point of contact. A point is zero-dimensional, and that's another way of applying zero. In fact, the real and virtual domains can contact only at a point.

Note {10/2/04}: Riemann used "his own" version of non-Euclidean geometry. His triangles always exceed 180 degrees because they are always on a sphere. His lines were not only not infinite or unbounded, but finite. It turns out Riemann was working the atomic geometry, where modulo math always makes lines into finite segments and incomposite (prime) numbers help out with the orbitals.

Note {1/31/05}: Gödel could not prove he is sane or insane, and so he ended up .. nuts.

Note {2/28/05}: Euclid understood irrational numbers as numbers that cannot be obtained by rationing. Had he known that irrational numbers have an infinite mantissa, he could have appreciated that it is not possible to exactly determine the length of a line formed by an irrational number. For 2,600 years irrational numbers have been truncated as "close enough," but the 'infinite mantissa problem' provides for yet another gateway from within Euclidean geometry.

DSSP Topics for September '04

Zero is emptiness to some, infinity to others
Pythagoras keeps it simple with magnitude and multitude

Real numbers represent real things. Real things cannot be zero, because real things must be something tangible to be real. An argument can be made that zero is not a real number, because in the absence of real things zero becomes nothing, absence, void, or vacancy -- and none of these are real things. The total number of real things in the universe is finite but unbounded.

Virtual numbers represent virtual variables. Virtual numbers can be positive or negative and pass through zero. Virtual numbers are about pairs of opposites that include zero. The total number of virtual variables in the universe is infinite. Since there is an infinity of virtual variables, there is an infinity of zeroes.

Exercise Question

Zero is not a real number, but there can be an infinity of virtual variables that can have a value of zero.
You conclude:
• All real numbers are single-ended and exclude zero
• All virtual numbers are double-ended and include zero
• All virtual numbers can be centered about zero. Centering results in the balancing of all virtual variables, and centering is a subjective operation
• In the Pythagorean tradition, the pursuit of real numbers is about magnitude while the pursuit of virtual numbers is about multitude

[Figure: Real numbers don't need zero]

Bonus question: How would you map the human spine as the zero axis? [Start with Leonardo and think three vs. four plus dantien]

Notes {8/30/04}

Dantien (Hara in Japanese) is a point in a human body that presently has no English equivalent. In time, this point will be explained as the 'second point of balance' and generically called the Couplex. Yes, there is the 'first point of balance,' which is also a Couplex but in a different part of the body.

Real variables are always single-ended. At times, pressure and temperature may be said to be negative, but both the pressure and the temperature are real variables issuing from real parameters that are vibrating real things. Temperature and pressure are single-ended because they have a limit at absolute zero (but they do not reach absolute zero). Real variables are not zero, but they come close to zero. What is this close-to-zero parameter and how is it quantified? [think Planck]

DSSP Topics for August '04

Golden ratio is everywhere but it is mostly shown as linear proportion
Modern Pythagorean definition of irrational numbers
Introducing the Golden Triangle that is the Cosmic Triangle

The Golden ratio is also called the Divine ratio. It is found in nature, and the ratio of two consecutive Fibonacci numbers converges toward the Golden ratio -- a property discovered by Kepler. The Golden ratio is constructible, which immediately sets it apart from Pi and e. The Golden ratio is irrational (incommensurable) but, unlike Pi, the Golden ratio is not transcendental.

Because the Golden ratio is a ratio, sometimes called a proportion, the Golden ratio is best expressed as, well, a ratio a/b. Keeping along with our May '04 exercise, we are not going to reduce a/b into a number, because incommensurate numbers are "in potentia" as irrational numbers. In the modern Pythagorean classification, irrational numbers are not reducible. A reduced irrational number undergoes transformation into a real number that has a finite mantissa. Irrational numbers comprise a unique class of numbers, and an irrational number is not a real number. If we give the Golden ratio one number such as 1.6180, it becomes a real number and unbecomes the Golden ratio.

For a/b to result in the Golden ratio, we take distances a and b and require that they maintain a proportional relationship such that the greater distance a is related to the smaller distance b in the same proportion even if the length changes. Then a/b = (a+b)/a. This relation results in (a/b)² = (a/b) + 1. Solving for a/b we get two roots, (1 ± √5)/2, and we are not going to reduce these roots any further.

The pentagram is possibly the richest source of Golden ratios. However, the pentagram's Golden ratios are mostly linear -- that is, a/b may be in the Golden ratio, but geometrically a and b are in-line. The Golden ratio happens all over the pentagram, and some writers get excited about it. Some make a rectangle having a and b on each side and get excited about it. Fine, but let's look at the general equation: squaring the Golden ratio is the same as adding 1 to it. This means no fancy computer is needed to do squaring, because we just add unity.
So, by adding 1 we can do some quick and easy squaring! [Self-test: You are a Pythagorean if you are excited now. You know squaring is about force (acting on a string, etc.).] If you are "totally cosmic" you know which entities can only add and which can only multiply. (Question: What is the general form of addition?)

Note {8/31/04}: Construction of an irrational (incommensurable) number facilitates its transformation into a (finite mantissa) real number.

Note {10/30/04}: Pythagoreans called irrational numbers 'unspeakables.' Other than the apparent literal meaning of secrecy, the unspeakable aspect is that the irrational number is not to become real (not reducible into a real number, in today's terminology). The Golden ratio in particular allows superposition of certain vibrations the Pythagoreans thought harmonizing and, therefore, healthful. The irrational number is to remain irrational while being useful without becoming real, as in the case of exactly, rather than just precisely, doubling the area of the square. While the incommensurate (irrational) numbers could not be spoken, they could conceivably be sung or chanted. Incommensurate numbers, no doubt, had divine meaning for Pythagoreans. Visualization may be yet another way of keeping the incommensurate numbers unspoken and useful. The unspeakable property, then, does not mean that irrational numbers should not be spoken or spoken about but, instead, that irrational numbers cannot be (fully) described by words. You can talk about irrational numbers all day long without necessarily breaking Pythagorean secrecy rules, but the point of the Pythagoreans is that you will never be able to describe them.

It is difficult to say all Pythagoreans shared in all of their knowledge. It is known their knowledge was separated into at least three levels or grades of study and 'initiation.' With the Pythagoreans' emphasis on friendship it is not likely their knowledge was compartmentalized, but no knowledge was available to outsiders. With so much misunderstanding concerning irrational numbers, the Pythagorean secrecy certainly works, and those the likes of Cantor never figured out the meaning of the unspeakables.# Incommensurable numbers carry in their core the infinite superposition, and calling them irrational, for example, corresponds to the lowest level of understanding of these numbers. Having said that, we shouldn't have a problem calling Tarot #0 The Fool, for we've all been there.

# Cantor never understood that real numbers have finite precision and, therefore, finite resolution. He could certainly define a new class of infinite numbers and do his work there, but he thought his work was applicable to real numbers, and that is where he failed. Our left brain operates in the emulation of the real world, and the left brain needs to stay true to that world (which is unbounded but finite). Cantor went insane and died insane. All state committees approving math teaching material may need to realize Cantor is attacking the intractable problem with sequential methods. Then again, math committees may not have access to Pythagorean teaching.

Exercise Question

a and b can be any number and, as long as a/b is the Golden ratio, the relation (a/b)² = (a/b) + 1 holds. The number 1, or unity, compared to any number that a or b can represent, means that:
• There exists the absolute one, the monad, God, unity
• There is the absolute unit of measure that is the length of one
• Get with it. Everything is normalized to the shortest distance.
a and b relate to that shortest distance, which becomes the length of one. Mayans did not use fractions because everything can get normalized to the shortest distance. [But unfortunately the Mayans did not get into modulo math, which is about harmonics, and harmonics are about fractions.]
• Whatever the units of measure for a and b, there exists the one of that unit of measure. [But only if you do not divide anything. It's okay to divide by two or more, even for Pythagoreans.]
• There is no such thing as the shortest distance and the Golden ratio equation is a mystery

Right and Golden Triangle

Take the linear Golden ratio and rotate segment a clockwise until there is a right angle triangle. You get:

[Figure: Golden triangle]

Right and Golden Triangle is the Cosmic Triangle

In addition to (a/b)² = (a/b) + 1 there is now also h² = a² - b². Working these equations you find h² = a·b. Much better than, say, a triangle having sides 3, 4, and 5. This is Cosmic [very cool] even though you have to think about it.

Ancient Egyptians got involved here, and it appears they knew about the Golden ratio and put it in the pyramid, but they were mum about it and did not identify or disclose the Golden ratio. The Rhind papyrus has fractions approximating Pi but nothing on Phi, the Golden ratio. If we presume that the height of the Great Pyramid is the same as the radius of a circle that has a circumference equal to the perimeter of the base (8b), then 2πh would equal 8b. (For more details see the Mar '04 Exercise.) Carrying out the calculations will give us Pi from √(32/(1+√5)), which is within 0.1% of the actual Pi. A discussion can now be had whether the ancient Egyptians were going after Pi or after Phi in the construction of the pyramid. Since construction tolerances and aging/settling are greater than 0.1%, the discussion can go on and on and either position can be justified. However, since many other dimensions within the pyramid were targeted at Phi, the true answer is not that difficult to figure out.

Note {April, 2006}: We now also have a page on the Golden Proportion.

DSSP Topics for July '04

Even if time is not inherently absolute, the construction of the absolute clock is possible
If time is always a derivative then time cannot dictate what will happen

Time is derived from periodic orbits, initially the moon and the sun (earth). If these orbits were to change, then our time reference would change as well. Atomic clocks may appear to be a more stable time reference than planetary orbits, but temperature or pressure will change such clocks as well. Some even claim mental influence can change atomic clocks. Can we then say that, since time always depends on other things, absolute time cannot happen? But if time is derived from a source that is known to be absolute, can you construct an absolute clock that counts absolute time? [Yes] Absolute time stays the same for all stationary or moving observers -- that is, time is not a function of the clock's velocity. (See last month's Note on speed approaching lightspeed.)

Exercises (Dealing with derived properties -- and what it does and does not mean)

• If time is derived from a source that is known to be absolute, an absolute clock is constructible. Additionally, can you construct a Newtonian framework of absolute spatial distance and time? If so,
  • Will the absolute clock get rid of chaos? 1. Guaranteed 2. Only helps 3. Neither guarantees nor helps
  • Is the relativity postulate correct? [Yes, but only in the most trivial context.]
  • Are instantaneous events consistent with absolute time?
Is an instantaneous event the same for all observers in the framework of absolute spatial distance and time? (An instantaneous event is a nonlocal event during quantum mechanical reduction.)
• Having established absolute space (spatial distance) and time, can you make a case for absolute gravitational force? If so, can you use conventional (Turing machine) computational means to..
• Describe the universe?
• Construct the universe? That is, construct formal systems while avoiding chaos?
• Grow the universe? That is, add new formal systems on top of existing systems while avoiding chaos?
• Repair the universe? That is, reorganize a subset of a formal system that has gone chaotic?
• If time is a derivative and absolute time can be constructed, can you
• Reverse the relationship where other things subordinate to time? For example, can you physically travel in time in absolute terms?
• Send out absolute time pulses and expect the universe to organize?
• If time is a derivative then the "arrow of time"
• Follows the nonlinear increase in entropy if time is derived from a closed system
• Follows the nonlinear increase in entropy if time is derived from Eddington's proposition: 'Universe is a closed system -- all you get is increasing entropy.' (Since the universe has no physical or thermal barriers that would make it a closed system, the 'entropy in the universe is increasing' proposition is intellectually so weak that Eddington's competence and motives are issues here.)
• Keeps repeating periodically if time is derived from a periodic system such as orbits. The mathematical solution exists and is periodic. (Pretty much what is observed when you look up.)
• Keeps repeating asynchronously when considering the electron's time-based evolution and asynchronous reduction. The time arrow moves forward but gets reset from time to time as a result of the electron's interaction with photons or physical structures. (The electron's propensity to evolve fits nicely with Aristotle's potentia.)
• Moves forward in the computer under program control, but human interaction can have the arrow of time jump back and execute the same or another program again. (Fits nicely with Aristotle's causality.)
• Points in the backward direction if time is derived from nonreversible transformations, but only after the fact (only after the supernova in fact happens). Matter is destroyed and there is a setback because matter cannot be rebuilt readily
• Points in the forward direction if time is derived from the expansion of the universe, which, in turn, is correlated with the increase of organization in the universe [fits nicely with yours truly]
• If time is derived from reversible transformations, time can be defined to go forward or backward, be periodic, point randomly, be event-driven, or be human defined. Yet nothing can be inferred from such time behavior and the arrow of time can be anything for all reversible transformations
There is an uncanny lack of understanding regarding derivatives -- as if scientists were somehow baffled by and petrified of derivatives. Science writers can spell the word but they have little knowledge of what can and cannot be done with derivatives. Business writers do a bit better. Once we establish that time is always a derivative, this means that time is not intrinsically independent and we cannot make conclusions based on time alone. If time is derived from a variable that is independent then time can be treated as an independent variable for as long as the variable from which time is derived remains independent.
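To make the derived-time point concrete, here is a minimal sketch in plain Python (the orbital source and all numbers are made up for illustration; none of the names come from the DSSP pages): the clock only reads a periodic source, so changing the source changes the derived time, while nothing done to the clock's readout can reach back and move the source.

    # Time as a derived property: a toy model, illustrative only.
    class Orbit:
        def __init__(self, period_days):
            self.period_days = period_days   # the physical source of periodicity
            self.cycles = 0.0
        def advance(self, days):
            self.cycles += days / self.period_days

    class DerivedClock:
        def __init__(self, source):
            self.source = source             # derivation pointer: Orbit -> time
        def read(self):
            return self.source.cycles        # the clock can only read, never write

    moon = Orbit(period_days=27.3)
    clock = DerivedClock(moon)
    moon.advance(days=100)
    print(clock.read())                      # ~3.66 cycles of derived "time"
    moon.period_days = 30.0                  # the source changes, so derived time changes
    # There is no clock method that could move the moon -- the dependency is one-way.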
Time cannot be manipulated with the expectation that something else will change in the present or in the future. Time can be used as a reference to go into the past and recall memories only (things that already happened).
A derived property is a subordinated property of an entity. As an example, a shadow is a derived property of the object's position -- the object's movement moves the shadow, but the movement of the shadow cannot move the object. A mathematical differential is also called a derivative. However, a mathematical operation that is a differential (derivative) is different, because mathematical differentials deal with changes -- one needs to distinguish between a mathematical differential and a derived property. If scientists became more educated about derived properties they would use fewer equal signs (=) and more derivation pointers (->). Algebra is (becoming) inadequate.
Another example of a derivative is the frame of reference. The frame of reference is a derivative of a spatial position or a movement of a real thing, because the frame of reference has no mass and its position does not become nonlocal as it should be in the quantum mechanical environment. The frame of reference, just like time, is an inferior/subordinated parameter of a real-only object.
Note {July 30, 2004}: In summary, time can be found in various relationships with the observed physical phenomena, but such a time relationship is specific to the phenomena and time cannot be generalized by removing (disconnecting) time from the phenomena
DSSP Topics for June '04
Keep mass and inertia together because inertia is a characteristic of mass that is derived from the behavior of mass
If there is no light here then this must be hell
Inertia gets an enhanced description (below)
Newton took the static definition of mass based on volume and density -- mass weight -- and gave mass a dynamic characteristic -- mass inertia. Inertia is the dynamic property of mass because every mass object will resist the change in the object's velocity, and the more inertia (resistance) there is, the more mass there is in the object. Inertia's unit of measure is force. Newton did not endow light with mass or inertia and he characterized light as being corpuscular. Newton knew light has no material property and picked a unique name to describe light. It turns out light is quantized into packets of energy now called photons. At the time of the Michelson light speed experiments in the 1880s it was well established that light's repeated bouncing between mirrors does not slow light down. Light, then, has no mass. Light is a wave and a wave is by definition nonlocal, for light's energy is distributed over the presence of the entire wave. Light is a wave that can be branched at lightspeed in an infinity of ways, and it is for this reason as well that light has no mass. The massless nature of light, however, did not stop some people from asserting that light has inertia.
Exercises (something to consider -- or not)
• Light slows down instantly when it enters glass and light also speeds up instantly when it leaves glass. How could somebody assume light has inertia if light speeds up on its own?
• If light has no inertia, how can light cause electrons to be ejected from atoms/molecules? [Think conservation of energy when the photon is absorbed and ceases to exist as a photon of light.]
• Why has nobody performed the experiment (theXperiment) that would measure the presumed pressure laser light imparts on a mirror as it bounces from it?
That is, why has nobody measured the presumed inertia light is supposed to have?
• Although inertia is defined through mass (it is a property or a derivative of mass), one scientist
• Split inertia from mass and claimed that reflecting photons have inertia and push objects without slowing down and without having any mass
• Confused himself so well he called light schizophrenic at times, and as having spooky action at other times
• While holding on to the photon's inertia he backfilled light with "effective mass" and proclaimed light subject to gravitation
• Single-handedly launched a branch of pseudoscience that worships black holes
• Light is energy. Light has no mass and no inertia, and light's momentum is virtual. Light is not subject or party to gravity. Light's unique property is that it can take on infinitely many shapes. Light can push things only when light becomes real energy at absorption, at which point the photon is gone but its energy lives on in another form (electrical, heat, motion, pressure). It is the infinite multitude of photonic shapes that enables the creation of many different types of energies when the photon is absorbed.
Having fun with light includes asking your teachers questions they will not have good answers to. Try our Stump Your Teacher.
The New Definition of Inertia is based on the conservation of energy: Inertia is the ability (property) of a real mass to accept work, store it, and return it in equal measure. The capacity of mass' inertia to store work is unbounded. Inertia is the agent (arbitrator, mechanism) of the conservation of real, in this case moving, energy. Moving energy is linear and/or angular. (Energy and work are the same thing -- they have the same units of measure of Newton (force) x meter (distance).) An often ignored aspect is that any distance must include direction, because the force must be expended in some direction. Work is force x distance. A mass body changes its speed or direction if mass inertia is to store the energy as work. The measure of work includes direction, and inertia's ability to return work in equal measure includes magnitude and direction (it's a vector). Inertia mediates and facilitates the conservation of energy -- a mass object receiving energy in one part of the universe is able to conserve and return such energy in equal measure at another part of the universe. Because the direction is a component of the energy that is being conserved, the directional aspect of the universe is absolute (agreeable to all observers). Movement of the frame of reference, linear or not, does not and cannot engage mass' inertia -- something Newton is quick to point out (Principia).
At near the speed of light, matter offers greater and greater resistance to the applied force. Although real matter's speed cannot exceed the speed of light, matter's inertia has the ability to store an unbounded amount of work. Rest mass or mass' gravitational force does not increase at such high speed, but inertia continues to accept and save work without bound [obvious but not taught at regular schools]. [For Pythagoreans the operation of multiplication or squaring (matrix multiplication in general) facilitates transformation that gives rise to forces.]
Note {June 30, 2004}: Inertia negotiates the conservation of energy, and at speeds approaching lightspeed the accumulated energy improves core/particle stability. The core/particle's half-life constant is no longer constant but increases without bound as lightspeed approaches.
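The accept-store-return bookkeeping above is easy to check with numbers. A minimal 1-D sketch (all quantities are made up for illustration; this is ordinary work-energy arithmetic, nothing beyond it):

    # Work = force x distance (direction included); a mass accepts work,
    # stores it as moving energy, and can return it in equal measure.
    m = 2.0                      # kg
    F = 6.0                      # N, push in the +x direction
    d = 10.0                     # m, distance over which the force acts
    W = F * d                    # 60 J of work accepted by the mass
    v = (2 * W / m) ** 0.5       # speed after the push: sqrt(2W/m)
    print(v)                     # 7.746.. m/s, the energy now stored as motion
    # Returning the work: the same force applied against the motion over the
    # same distance brings the mass to rest and gives back exactly 60 J.
    W_back = F * d
    assert abs(W - W_back) < 1e-12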
Near the speed of light the momentum of an object increases without bound (even though the speed is bounded by lightspeed) and the half-life parameter increases without bound as well. Atomic clocks based on the half-life property of matter cannot be used to make accurate clocks, because such a clock's accuracy is a function of its absolute speed. However, using the absolute clock as a reference, an atomic clock can be used to determine the absolute speed of the ship. Yes, the atomic clock will stop at lightspeed but the absolute clock will not. The construction of the absolute clock is below (November '03 DSSP Exercise).
Note {2/28/05}: Newton defined inertia as force (that resists ..). Newton also made force responsible for the change in movement (velocity). However, it is not force but work (work is energy) that is responsible for the change in movement of a real mass body. What saves Newton is that force cannot arise for a zero time duration: quantum mechanically, force arises as a non-zero amplitude (and opposite) pair from a reducing even function. Because force manifests and engages the body over some non-zero distance, every time one speaks of 'force acting on a body,' one also speaks of work in consequence. Technically, however, force is not energy, and a cup on a table is subject to force but not to energy. All said, it may be worthwhile to restate Newton's conservation of motion in terms of work (or energy). While at it, the conservation of direction should be formally stated as well.
Note {2/23/11}: Yes, the conservation of direction comes from Leibniz (and my version of the law of the conservation of direction is in the Quantum Pythagoreans book).
Note {April 14, 2011}: In the July 2010 DSSP topic I put forth the inertia mechanism conserving the moving (real) energy.
DSSP Topics for May '04
If a number is called a real number then it must represent real things. Quantum just may be the inevitable consequence
Looking for nodes on a planetary musical string
Last month's square root rosette is attributed to Theodorus of Cyrene (~400 BC) of the Pythagorean school. It is apparent Theodorus wanted to show that a square root of a number such as two, while irrational, can be constructed as a real thing -- a stick of a particular length. Presently, one can enhance on that by making a case that any irrational number such as the golden ratio can become real (rectangle, flower, spiral) only if it has finite precision (finite mantissa). The irrational number, then, can be realized, and its real representation (exact measurement) is possible. When an irrational number becomes real, it becomes necessary for its numerical representation to have finite precision. Reality always seeks a definitive (localized, particular) answer. Present day math guys then also need to refine the definition of a real number by making all real numbers subject to finite precision. Some sophistication may be called for when translating irrational numbers into real numbers, for simple truncating or rounding may not be sufficient. The exact measurement of an irrational number, however, is not possible and the irrational number remains "in potentia" or "in waiting" much the same way a+b remains a+b. In the Pythagorean tradition the irrational number is incommensurable (not in accord -- think music). The square root of 2 is an irrational number but 1.4142 is a real number.
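A quick illustration of the finite-mantissa point, using Python's decimal module; the choice of a 5-digit mantissa is mine and arbitrary, not anything from these pages:

    from decimal import Decimal, getcontext
    # Realizing sqrt(2) at finite precision: the realized value is a finite,
    # "real" number, while sqrt(2) itself stays unreachable ("in potentia").
    getcontext().prec = 5
    realized = Decimal(2).sqrt()       # rounded to a 5-digit mantissa
    print(realized)                    # 1.4142
    getcontext().prec = 28             # widen precision to inspect the residue
    print(realized * realized)         # 1.99996164 -- not exactly 2

Note that simple rounding leaves a residue in the defining relation, which is the sophistication-may-be-called-for caveat above in miniature.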
The (non-Pythagorean) kicker is that rational numbers can become subject to finite precision if their decimal fraction goes on for many decimal places without repeating in cycles. (In fact, every rational number's decimal fraction does eventually repeat in cycles, because the remainders in the long division must repeat.) Or is it, perhaps, that only the rational fractions of the pleasant-sounding octave (1/1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8, 2/1) are easy to translate into real numbers of particular real force and nodal length. The fact is that all rational fractions and the geometric mean (square root of a product of two numbers) are constructible, but the important part is the finite precision that happens in the act of construction (realization).
Exercises (school text worth changing -- or not)
• Every real number is bounded in precision (the mantissa is of finite length)
• Collectively, real numbers are unbounded in quantity but there cannot be an infinite quantity of real numbers (the quantity of real things is finite though unbounded)
• There can be an infinite number of virtual numbers because there is an infinity of virtual variables
• Ancient Egyptians expressed fractions as harmonics. Explore this to see if there are advantages
• God may have created integers but it is up to the humans, if they can figure it out and improve on it, to harmonize with the universe. (There are prominent roles for near-integers.)
• The transcendental aspect is that the bounded precision of a real number leads to the digital (quantitative) nature of the quantum mechanical environment. Physical quantities realizing (albeit temporarily) at the atomic scale will have finite precision values and this environment then cannot be fully continuous. [It can get pretty unusual here.]
Note: Rational fractions 1/3 and 2/3 are special because many (though not all) angles, including 360°, can be divided by three exactly. Think of the angle as rotation and think of the 360 degree angle as the orbit; now use compass and straightedge (real tools) to divide the orbit by three exactly and realize these fractions with quadratic methods -- this is the ancient Egyptians' "secret" on why the fraction 2/3 was not written out in terms of the harmonics series. One can visualize this by having a ship navigate a route while being able to calculate its exact position only at particular points along the way -- the ship can navigate successfully even though it cannot make continuous exact adjustments. During the orbit, the planet having a 2/3 orbital relationship with another planet can make out (calculate) the exact mutual position at a very large number of angles (3, 5, 15, 18, 30, 40, 45, to name a few). It is likely Kepler was getting into this area when he proposed additional angular planetary relationships for astrology charts. Kepler stated he did not know why some angular relationships were more harmonious than others, and Gauss was yet to come with his constructible polygons. [Some occult writers explain the mystery of the Sphinx as being based on thirds. The Sphinx, however, has two pairs of legs and is but a messenger -- one of many at that.]
Note {May 31, 2004}: Present day mathematicians are quick to point out that an arbitrary angle cannot be divided by three exactly using quadratic methods (compass & straightedge). Delighted in telling us what cannot be done, they may want to consider that the solar system is using quadratic methods just fine, regardless of their armchair conditions. [Are three pieces of rock smarter than all mathematicians? More practical, for sure.]
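The repeating-cycle fact for rational fractions can be checked with standard long-division bookkeeping -- the same remainder-watching that the Modulo-math note further down this page (Dec '03) is about. A short sketch, with nothing DSSP-specific in it:

    # Period of the repeating decimal of p/q: track long-division remainders.
    # A remainder can take at most q values, so it must terminate or repeat.
    def decimal_period(p, q):
        p %= q
        seen = {}
        pos = 0
        while p != 0 and p not in seen:
            seen[p] = pos
            p = (p * 10) % q
            pos += 1
        return 0 if p == 0 else pos - seen[p]

    print(decimal_period(1, 3))    # 1  (0.333...)
    print(decimal_period(2, 3))    # 1  (0.666...)
    print(decimal_period(1, 7))    # 6  (0.142857 142857...)
    print(decimal_period(1, 8))    # 0  (0.125 terminates -- a finite mantissa)

A period of 0 marks the "finite-divisible" fractions singled out above; everything else cycles.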
Note {June 7, 2004}: The bounded precision of real numbers has implications that go well beyond arguing about the Pythagorean rational numbers foundation, and Pythagoreans were mum about incommensurable (irrational) numbers for mystical reasons. First, put aside Cantor's much admired pole-sucking nonsense of mapping all numbers onto a line. The mysterious quality of bounded precision of realized irrational quantities comes to light if you desire to take the U out of the UFO.
DSSP Topics for April '04
Get a square root of any integer and then pick a systemic subset
Some infinite sums are bounded -- that is, even though the number of summing components is infinite, the sum itself is finite. If, say, the summing components are energy, the total sum of energy would be finite. The finite (or bounded) nature of the sum then also allows one to address the entire system computationally -- even in the instantaneous (quantum mechanical) environment.
[Picture: the square root rosette] Take a stack of right triangles that have unit distance (distance of one) on a side. We get a square root of any integer we want. Mathematically, since the basic harmonic series Sum(1/n) is unbounded, Sum(sin⁻¹(1/√n)) for n=2 to infinity is likely unbounded as well. To apply the rosette: if you think in the gravitational context, the squares of radial distances are inversely proportional to forces, which then form a square harmonic series. But there is more.
Exercises (ideas worth developing -- or not)
• The angle for each √n is a sequence of unitary pendulum periods (length = 1) under unitary acceleration forces normalized under 2π. (Pendulum periods are independent of weights placed at the ends of the √n distances.)
• Radial distances of all integers in a sequence √n represent the linear progression of unitary angular momentum. (Angular momentum is energy.)
• Unit weights placed at the ends of the √n distances will have the same (unitary) angular momentum if the angular velocity is a multiple of n (divide by n if going to a larger orbital, multiply by n if going to a smaller orbital)
• Weights placed at the ends of the √n distances can conserve energy exactly if the angular velocity is constant and if the weight is finite-divisible by n. Finite divisibility is defined as a property of any fraction that results in a finite decimal mantissa. [The idea is that in the atomic environment the conservation of energy must be exact during orbital changes, which are instantaneous and quantized -- that is, energy cannot be approximated or averaged once photons are created (the transformation is exact). This could also shape the electron's mass into a particular and fixed mass value.]
• Unit weights (fixed masses such as electrons) cannot jump to those lower/higher orbitals for which the difference in energy does not have finite mantissa values
• If the elimination of those orbitals that do not have finite mantissa values results in a convergent sum of all remaining orbital momenta, then the total energy is finite and the problem (atom) becomes systemic and manageable through computational means
• Pick an incomposite number like, say, 137, and see what happens
DSSP Topics for March '04
Squaring the circle: From Greeks to alchemy to Great Pyramid, with a mention of present day science
Squaring of the circle was a great pastime of Greek geometers, who tried to construct a square from a particular circle using but a compass and a straightedge such that the area of the circle equals the area of the square.
In addition to exploring the constant π (3.14..), this exercise acquired a particular mystique because it could not be done exactly with the instruments provided. The Creator, then, needs to have other tools besides a compass and a straightedge to square the circle or, in reverse, make a circle of the same circumference given the square. One can also reach the conclusion that since the Creator can deal with limitations, the exact solution is not necessary and the value of any real variable will then also need to have its mantissa (precision) trimmed at some finite length. Real things, the Creator may decide, will not be infinite.
Alchemists of the Renaissance picked up on circle squaring and added an additional challenge: Man is the creation in the image of the cosmos and, therefore, circle squaring has a lot to do with man's existence and spiritual growth. The square is supposed to represent the spiritual (virtual) aspect of man, but others say the square really represents the real part. [Nothing to get hung up about.]
The measurements of the Great Pyramid in the past (19th and 20th) centuries revealed that the circumference of the pyramid's square base is equal to the height of the pyramid multiplied by 2π. That is where it rests. However, the obvious interpretation is that the circumference of the base of the pyramid is the squaring of the circle that is formed by taking the height of the pyramid as the radius of such a circle. The ancient Egyptians wrote π as a fraction equal to about 3.16. (Rhind Papyrus, ~1,600 BC.) Note that most square-the-circle problems deal with the equality of the area of the square and the area of the circle. Archimedes addresses the circumference because he uses rotation. In all, the problem of the squaring of the circle is applicable to either the area or the circumference, because both problems are qualitatively the same.
Present day science is so close to engineering it does not see many implications in the squaring of the circle. After a short Internet (Google 'squaring circle') and Borders bookstore search, I was not able to find any published associations between the Great Pyramid and the squaring of the circle. {Did find an association by Farrell in the Occult-Speculation section.}
Exercises (True or False):
• The squaring of the quantum mechanical wavefunction results in the realization (reduction) of the wavefunction. The intangible thus becomes tangible
• The "pyramid power" rests on the outside (the skin) of the pyramid rather than on the inside (volume)
• The "pyramid power" rests on the skeletal (one-dimensional) geometry, which enables materialization inside the pyramid
• The pyramid glows when viewed in the "frequency" domain. Alien ships use it as a beacon
• The pyramid of certain proportions allows materialization of intangible cosmic energies into organized structures such as molecules of water and gases
• Pyramid's materializing characteristics can be (or are) reversed: Tangible can become intangible and the pyramid acts as a cosmic "pump"
• Materializing pyramid calls for the three-sided pyramid, the tetrahedron. [Where is Pythagoras when we need him.]
Squaring The Circle
In constructing the square-base pyramid whose base circumference is the 2π multiple of its height, there is but one azimuth angle x that the edge of such a pyramid can have:
  x = sin⁻¹(1/√(π²/8 + 1))
The resulting angle is not a function of the size of the base, and this means that any horizontal cut of the pyramid leaves a pyramid that retains the circle squaring geometry.
You will recognize the term π²/8 as the limit of Euler's sum of all odd square harmonics (1 + 1/3² + 1/5² + .. = π²/8). The azimuth angle x is almost exactly 42°. Working with the exact angle of 42°, the value of π is computed at 3.14128 versus the 3.14159 for the real π. If the 42 degree angle can be constructed (by adding 12 to 30 or subtracting 3 from 45, for example), you can then make a circle from a square to an accuracy of about one part in ten thousand, which is better than 0.01%. In reverse, you can also make the square from the circle by constructing a 48 degree angle. One source (Kazarinoff via Gauss) claims that any angle satisfying 360/3n (n is integer) is constructible, and the 3 degree angle would then be constructible.
If you are making the actual pyramid and have four pieces for a base, get the pyramid's edge length by multiplying the base by 0.951. (5% is one part in twenty and Mayans would then subtract one part because of their vigesimal counting system. Ancient Egyptians would use 1/18th but 1/20th is much more accurate.) If the base of the pyramid is four units on a side, then the area and the circumference of the base are the same. To be "totally cosmic," use a particular distance that is derived from actual cosmic dimensions and apply it for a base. For example, four meters is 1/10,000,000 of the Earth's circumference along the longitude.
Present day mathematicians have no idea why ancient Egyptian mathematicians expressed fractions as a sum of only those fractions in which the numerator is 1. Thus, 5/8 was written as 1/2 + 1/8. A good reason for doing such elaboration can be found in the harmonics series Sum(1/n). The sequence number n is not repeated in the harmonics series nor was it repeated by the Egyptians. For extra mystique you may include the destruction of Fourier memorials by the (Nazi) Germans, the Fourier proof that all frequencies fit into 2π, and (of course) the UFO post-crash environment. Also consider what it means to have a 90 degree rotation (if you are 100% real then this means nothing). The orbital ratio of Earth and Venus is 5/8. If you know details on the ancient Egyptians' fractions, you will also know about the only exception Egyptians made for the fraction 2/3. That is the ratio of Neptune and Pluto orbits. In mainstream magazines most present day astronomers show that they 'have credentials -- write trash' when they question Pluto's purpose as a planet. For more on the harmony of the spheres link up to New Star In The Heaven.
Note {Mar 31, 2004}: All pyramid references calculate the angle at the midpoint of the base going up the face of the pyramid. The idea is to present the pyramid in the framework of pyramid construction from a solid material rather than building the frame of the pyramid. The ancient Egyptians had called this angle the seked, which is the cotangent of the angle between the base and the side planes. The angle at the midpoint of the base is the steepest one possible and (maintaining the circle-squaring geometry) such an angle should be 51.8539.. degrees. The pyramid-writers, however, fail to examine this angle in its general format:
  x = sin⁻¹(1/√(π²/16 + 1))
Comparing this solution with the one for the edge of the pyramid, we will find that the Euler's harmonics series term π²/8 is still there but it is now halved (π²/16). By taking a walk [or dance] around the pyramid, the Euler's term diminishes by half at the midpoint of the base and goes back up to its full value at the pyramid's edge.
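The two angle formulas, restored above in terms of π²/8 and π²/16, check out numerically; the sketch below also verifies the (32/(1+√5))½ route to π quoted in the Great pyramid notes. Plain Python, assuming nothing beyond the formulas themselves:

    import math
    # Edge (azimuth) angle and mid-base (seked) angle of the circle-squaring pyramid.
    edge = math.degrees(math.asin(1 / math.sqrt(math.pi**2 / 8 + 1)))
    face = math.degrees(math.asin(1 / math.sqrt(math.pi**2 / 16 + 1)))
    print(edge)   # 41.997.. -- almost exactly 42 degrees
    print(face)   # 51.853.. -- the quoted 51.8539.. degrees
    # Working backward from an exact 42-degree edge gives the approximate pi:
    pi_42 = math.sqrt(8 * (1 / math.sin(math.radians(42))**2 - 1))
    print(pi_42)  # 3.1412.., versus the real 3.14159..
    # And the golden-ratio route from the Great pyramid notes:
    print(math.sqrt(32 / (1 + math.sqrt(5))))   # 3.1446.., within 0.1% of pi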
There is one point in each quadrant on the base of the pyramid where pi is not needed, but at this point the construction angle is most likely [better be] transcendental. Looking at circle-squaring from the proper perspective: We would be in a real pickle if π was not transcendental.
{April 30, 2004}: Squaring of the circle by geometric (quadratic) means is impossible. The impossibility, however, is but an interim step. Squaring of the circle is the koan of Western origin, which, as any koan, surfaces the transcendental aspects of nature.
{June 30, 2004}: All three pyramids at Giza have a straight pathway from the mid-base to the edge of water, but each pathway has a different direction. At the end of each pathway there is a highly symmetrical structure dedicated to a particular deity. These partly subterranean structures seem to have a transducer/impedance match/resonator function. What stars are rising from the horizon annually for the first time at these pathway directions? Pumping cosmic energy or just pretend-cosmic-vapors? Pi, the golden ratio (Phi), and e are all related. It seems the ancient Egyptians liked the golden ratio relation Phi·(Phi − 1) = 1, but Pi allows the substitution of Euler's series, which gives additional insight.
{September 1, 2004}: In the DSSP September '04 Exercise, additional work on the Golden ratio makes a case that the tie-in of Pi to the Great pyramid may be incidental, for the primary purpose of the Great pyramid geometry was the tie-in to the Golden ratio Phi rather than Pi.
{January 10, 2007}: Squaring of the circle now has a page of its own. It's about the energy and what needs to get done if infinite components are superposed to complete the circle's construction in finite time.
{May 5, 2007}: Constructing the outline of the Great pyramid is easy. First, construct the golden proportion.
Quantum Pythagoreans, a book by Mike Ivsin, establishes the foundation for the understanding and application of energy. Because the energy can be multidimensional and exists in superposition -- it is mutually inclusive -- certain geometric structures are needed for the organization of energy. The pyramid forms a computational construct for dealing with and applying such energy. My job was to put all this in a simple, cohesive, and systemic manner for the reader's understanding and appreciation.
DSSP Topics for February '04
Why would things hold together in a continuum? Just add energy. Introducing the space-energy continuum as a gateway to reversible transformations
Continuum has a nice ring to it. Continuum just keeps going and forever doing whatever it is we wish for. Momentum is a continuum of mass and velocity because the mass-velocity product continues to be conserved regardless of the size and the number of colliding bodies. Space-time continuum would have you believe that spatial distance and time cannot be separated. However, all nonlocal events ignore the space-time continuum, and if you were the inventor of the space-time continuum you would have no choice but to ignore anything that deals with nonlocality: transistors, lasers, electron microscopes, Bell experiments and, of course, UFOs. As an advocate of the space-time continuum you could divide the universe into macro and micro, and pretend you know the macro part. Even though there is money, fun and survival to be had in integrated circuits and communications, the best anyone can offer the believers in the space-time continuum is that you cannot get anywhere from here in less than a million years.
One thing that is proven is the conservation of energy. For example, momentum is a moving energy, and momentum -- the continuum of the mass-velocity product -- is conserved because the energy adds up to the same value before and after some operation such as a collision.
• Neither spatial distance nor time contain energy, individually or in combination. You conclude (pick all that apply):
• Space-time continuum is really an arithmetic equivalence of spatial distance and time that arises in simple systems such as two-body systems. Time and spatial distance are related through an equation in a rather narrow subset of the real world. More often than not, real parameters such as temperature, pressure, humidity and density must be held constant to simplify the system and coax the space-time continuum to hold and behave tractably (if not linearly)
• Space-time continuum is derived from certain relationships, but such a derivative cannot then be applied to constrain other relationships. Scientists treat all variables symmetrically and they have no clue how to differentiate the leading and the following variables. All fish are swimmers but swimming will not make you into a fish. 'Swimmer' is the derived (dependent) variable and, therefore, 'swimming' cannot constrain you to be or become a fish. [Even Aristotle would understand this.]
• Space-time continuum advocates find solace in the belief that if they cannot get there from here, nobody can get here from there. [Besides, you have free will and to prove it you get yourself a beer.]
• If you stand in an elevator and turn the lights off, you will not be able to prove what planet you are on! [One of the ideas for Adventures In Pseudoscience, 20th Century edition.]
• A moving electron contains energy and becomes a wave. Therefore:
• When the electron spreads (per Schrödinger evolution, de Broglie wave, and Heisenberg uncertainty), the electron becomes a vibrating entity, which can now be described as a space-energy continuum. Energy keeps the electron together even though, as a wave, the electron reaches out nonlocally in up to 3D and on the macro scale, no less
• Credit JJ Thomson, who was the first to suggest that atomic particles can spread and become nonlocal [the proof is in the pudding]
• Spatial distance and time cannot form a continuum because neither distance nor time contain energy that would hold the continuum together
• Transformations can and do happen. When a particle changes into a wave (or vibration), the particle unbecomes local because it has a reach well beyond its original spot. In fact, the spreading of the wave is unbounded. During transformation some things change while others remain invariant. You may conclude:
• By transforming into a wave, spatial distance is no longer a leading (dominant, priority) variable. Spatial distance becomes a following (subordinated) variable because spatial distance is not energy. The electron-as-wave can now behave nonlocally
• Some transformations are reversible while others are not. Reversible transformations are the way to go for traveling, while nonreversible transformations are for the one-way street. Matter and antimatter annihilation is a nonreversible transformation. Other transformations are easy to do, but difficult to reverse. (Absorption of a particular light quantum is easy but radiation of a particular quantum is difficult -- it just looks easy for the atom.)
Note 2/28/04: Energy and frequency are related through the Planck/de Broglie relation (E = hν). The space-energy continuum can also be called a space-frequency or space-vibration continuum.
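The mass-velocity continuum claim at the top of this topic is the standard conservation-of-momentum bookkeeping, and it holds for any number of bodies. A 1-D perfectly inelastic sketch with made-up masses and velocities:

    # The mass-velocity product as a continuum: the total before and after a
    # collision is the same regardless of how many bodies take part.
    bodies = [(2.0, 3.0), (1.0, -1.0), (5.0, 0.5)]   # (mass kg, velocity m/s)
    p_before = sum(m * v for m, v in bodies)
    M = sum(m for m, _ in bodies)
    v_joint = p_before / M            # all bodies stick and move together
    p_after = M * v_joint
    print(p_before, p_after)          # 7.5 7.5 -- the product is conserved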
DSSP Topics for January '04
Microgravity is a term used by NASA to describe the space station environment. Being weightless does not mean gravity is gone. The issue of angular momentum
Show-stopper? No problem. Bring in the clowns
It is easy to put Newton in his pajamas running around in the middle of the night screaming "The moon is falling, the moon is falling!" Yet Newton's apple is falling the same way the moon is. The moon has an additional velocity component, and while the moon is indeed falling it is also missing the earth, and so the moon stays in one orbit -- falling toward and perpetually missing the earth.
In the absence of gravity you just point your rocket anywhere you want to go, and go there with the thrust of your engines. The thrust can be as short as you want and eventually you'll get there. In the earth's orbit, however, gravity is continuously pulling you in, and if you aim your rocket at the moon, the brief firing of your thrusters will not send you on the way to the moon but will put you in a more elliptical orbit. If you point your rocket at the moon and fire your thrusters when the moon is receding, you may end up coming back to earth. Another way to appreciate gravitation is with a beach ball. The ball will deform under acceleration from the rocket's thrust, but the ball will remain nice and round under gravitational acceleration, as every part of the ball is accelerated with the same force.
• In the weightlessness of the orbiting space station (pick all that apply)
• Gravity is near zero (microgravity)
• Gravity is nearly the same as on earth's surface
• All objects in the space station are being accelerated at the same rate and that is why objects in the space station appear to be floating
• No experiment on the space station can ever be performed in microgravity (or in the absence of gravity)
• To escape earth's gravity and end up with microgravity, you
• Dig a hole to the center of the earth
• Go about nine-tenths of the way toward the moon (where the earth's pull is the same as the moon's pull)
• Go to geostationary orbit
• Two of the above
• If you still think NASA understands gravitation, then consider NASA's past (and failed) experiment of reeling a weight out of the space shuttle. NASA thought the weight was going to go straight up -- in defiance of angular momentum and radial gravitational acceleration
• If you still think NASA can learn from its mistakes, then consider the space elevator, where NASA continues to ignore the angular momentum of the load as the load is being moved up -- it's a show-stopper. (The space elevator strings strong microtubules from ground to space and NASA claims to be able to haul a load straight into orbit.) You recommend that:
• NASA stop the space elevator parade of clowns because the future as portrayed by NASA is nothing but a delusion. The load must acquire angular momentum and the space elevator will always need rocketry propulsion to supply angular momentum to each and every load
• Transfer microtubule development to another agency such as DOT. Enhance the DOT objective to apply microtubules in compression (tension strength, suitable for bridges, is a given)
• Beat up on NASA engineers and make them do a bang up job while ignoring fundamentals such as angular momentum and the absence of light's pressure at reflection
Gravitation Fact
The easiest and simplest way to prove that all bodies subject to gravitation fall at the same rate (Credit Benedetti): Two objects that have the same weight are falling at the same rate.
Because neither object gets ahead of the other, these two objects can be joined together and, although the joined object is now twice the weight, the new object is also falling at the same rate. [Need right-brain for the joining operation.]
Notes 1/31/04: There is a likelihood that matter that is not subject to gravitation will disappear (become virtual) but will not disintegrate. At minimum, matter's inertia would decrease in microgravity. Something worthwhile to look into. (Quite the opposite of what guys like Mach and Puthoff would say.) Pythagorean numerology does not give the number two any positive properties. Number two (or division into two equal parts), however, becomes central when proving certain gravitational relationships. Ditto in the context of momentum creation. [Will be fixed.]
Note {Apr. 30, 2008}: We have an update on the space elevator in our Oct. 2005 DSSP topic (this page).
DSSP Topics for December '03
How can Incomposite numbers, today called prime numbers, be composing anything if they are incomposite. The Riemann [a guy from Wendland] hypothesis has its basis in physics
The Incomposite numbers of the Pythagoreans are today called prime numbers. They are incomposite because they are not composed of any other number. The composition, you note, also refers to a musical score, and you know that with Pythagoras' discovery of the musical octave the Pythagoreans were into music of all kinds. The nice part about incomposite numbers is that, while no other number can divide into them, they make all other numbers by multiplying among themselves. Although prime numbers are indivisible, they are also included in all the numbers they compose.
A free moving electron can acquire any energy it wants and thus its wavelength can be anything it wants. The atomic electron wavelength fits smoothly around the core wherever it can -- in particular increments. The problem comes when there is more than one electron, because electrons need to stay away from each other and work with the core on the balancing act as well. Balancing assures that the electron can be a real electron at times and a virtual electron at other times while conserving energy at all times. The virtual electron is the electron state that allows it to spread as a wave, while the real electron state is needed when the electron must find another orbital as things get hot or cold. [If you want to get deeper into this, do electron guessing under Gauss and compute a few matrices with Riemann. You will be ahead because you now know the real numbers come from the electron while the virtual numbers come from photons. Here is also a mechanism that takes random photonic values and selects only those that subscribe to a particular structure.]
When a photon is absorbed by the atom, one half of its energy goes to the core and the other half goes to the electron, because that is the only way of conserving momentum during the energy transfer (really a transformation). In fact, the ½ term of the Riemann Zeta function comes from the law of the conservation of momentum. It would be cool to show that the physical law of the conservation of momentum can be proven through numerical analysis alone, but what also needs to happen is that the energy imparted on two bodies is not only split 50-50 but that the split is always in the opposite direction. (Linear momentum can acquire but one degree of independence.)
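For the fits-around-the-core picture, the standard textbook Bohr bookkeeping already shows how demanding a whole number of de Broglie wavelengths on the orbit (2πr = nλ) forces particular increments, and it produces the integer-square energy progression mentioned in the 12/31/03 note below. This sketch is the textbook calculation, not the DSSP half-half mechanism:

    import math
    # Bohr model for a hydrogen-like atom: the standing-wave condition
    # 2*pi*r = n*lambda quantizes the orbit radii and energies. SI constants.
    h = 6.626e-34; m_e = 9.109e-31; e = 1.602e-19; k = 8.988e9
    def bohr_radius(n):
        return n**2 * h**2 / (4 * math.pi**2 * m_e * k * e**2)
    def energy_eV(n):
        return -2 * math.pi**2 * m_e * k**2 * e**4 / (n**2 * h**2) / e
    for n in (1, 2, 3):
        print(n, bohr_radius(n), energy_eV(n))
    # n=1 gives ~5.3e-11 m and ~-13.6 eV; the energies run as 1/n^2,
    # i.e. a progression of integer squares in the denominator.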
• When a photon enters the atom, then the electron may need to move into the next available orbital, and [pick the third one below]
• The electron adds half of the absorbed photon's energy to its own energy and tries to make an orbital with the new energy it has. If that does not work then the electron leaves the atom [our goal here is not ionization, which deals with the core's computability]
• The electron computes the energy of the photon (half is available) and absorbs the photon only if the new orbital does not bump into another electron
• The electron computes the energy of the photon (half is available) and absorbs the photon only if the new orbital is OK (in harmony) with all electrons already present
• An atomic electron cannot just add any photonic energy to form its own independent orbital because
• That's the way it is and I like Rock and Roll just the way it is
• The orbital wavelengths of multiple electrons interact (superpose) with each other, and if their mutual orbital ratios do not add up to a full wavelength across all orbitals in the octave then there would be peaks of instability
• The electron orbital must harmonize with other orbitals in the octave or across several octaves. To do that,
• Compose all prime numbers that multiply out to multiple orbitals. If there are no compositions available then that is where the gap is going to be for sure (forbidden orbital, the Zeta function is zero)
• The more multiples a particular orbital shares with other orbitals, the better. [Pythagoreans call numbers having a large quantity of divisors the abundant numbers.]
• Orbitals seem to fit the series (n+1)/n, which has a triangular pattern, numerically speaking. [An orbital is not the same thing as a radius but they are related. See Kepler in the macro dept.]
1. Using the photon that has just arrived, the electron computes energy up and down (the Gaussian distribution being symmetrical), and if it finds a computable orbital going down, you just discovered how laser and laser cooling works. Happy New Year!
2. Work with two (twin peaks) distributions and the highest octave to emulate a molecule. Perhaps even come up with a brand new molecule. Happy conjugating! [about the diagonal, too]
3. Using 'incomposite' terminology instead of 'prime' helps with visualization because it connects in more places. 'Prime' has only the exclusivity component [the verboten part] and that is not the whole story
Notes 12/31/03:
1. Energy states in the atom have a wonderful progression of integer squares. (For a thorough, clean, clear, and most understandable primer on quantum numbers and how they relate to orbitals, consider ..) [Big deal? Some energy exchanges are direct transfers rather than transformations.]
2. Gauss introduced the Modulo math, which is interested in the remainder after the division. The physics basis of Modulo math is that the electron orbitals are stable if the remainder is zero or if the remainder repeats. The more orbital periods are required before the remainder repeats, the less stable the orbital
DSSP Topics for November '03
Absolute clock yields local and absolute reference. Feynman does not get it. Greetings, John Harrison. Causality is back in the formal system
Background and Construction of The Absolute Clock
Newton first postulated the existence of absolute space (spatial distance) and absolute time. Newton could not prove absolute space and time because the absolute rest (absolute reference) was to be proven first.
If, however, absolute velocity is available instead of absolute rest, could absolute space and time be constructed from that? You bet. With the advent of light's invariant velocity, the absolute clock can be constructed with a mirror. Light propagates at a constant and known speed, and the round trip from its source and back to the detector will yield the absolute clock. [Picture: Absolute clock from lightspeed]
The outcome of the Michelson-Morley experiment is that the velocity of light is constant because the velocity of the light source is not added to, or subtracted from, the velocity of light c. If the clocking device is moving forward (to the right) and light is also sent forward, then the velocity of the clock device is not added to light's velocity. Some interpret the Michelson-Morley experiment as proving that light has the maximum velocity of c. However, if the clocking device is moving to the left and light is sent to the right, the velocity of the clock device is not subtracted from light's velocity, and c is the only velocity light has in a particular environment (usually vacuum). Light's velocity is c and independent of the velocity of the clock device. The round trip between source and detector will remain constant regardless of the velocity of the clock device. The same is true if the back and forth path of light is perpendicular to the direction of movement, because the velocity of the source of light is not additive to the velocity of light -- forward, backward, or sideways. With a constant round trip period the absolute clock can be constructed. Any and all absolute clocks moving about will always show the same time.
Newton's absolute time postulate is thus proven. Absolute space (distance) is now easy to prove as well, because any distance between any two points in the universe is determined with lightspeed and by using absolute send and receive times. With absolute distance comes absolute gravitational force. Also, absolute space and time do not dictate a space-time continuum. An object can disappear at one point and appear at another point without violating absolute space and absolute time constructs. Further, absolute space (distance) and absolute time are absolute overlays onto space, and movement in excess of c is not in conflict with the absolute space (distance) and time overlay -- movement in excess of c, even if discontinuous, yields an unambiguous path while preserving formalism and causality, because 'event A before event B' is valid for all observers.
Feynman claims that the absolute clock cannot be had, but he treats only the case where the path of the light beam is perpendicular to the velocity of the clock device. He then claims that the round trip time changes with the clock device velocity, but that is only because he adds the (downward) velocity of the light source to the velocity of light.
Historical Note
It is said V. A. Fock tried to get general relativity published in the Soviet Union despite objections that, according to the reading of the proletariat scriptures, forces there were absolute. While all cultures strive to perpetuate themselves and call on the heaven and the sciences for auspicious developments, the fact remains that the script, any script, will change from time to time. It is the pursuit of the truth that becomes right, and not the other way around. Newton's "I frame no hypothesis" is possibly the most democratic expression of his scientific leadership.
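The ranging arithmetic described above (distance from send and receive times, and the clock tick as a fixed round trip) is one-line work. The timestamps and the mirror spacing below are made up for illustration, and whether such a clock is 'absolute' is this page's claim, not the code's:

    # Round-trip ranging and the mirror-clock tick.
    c = 299_792_458.0                 # m/s, speed of light
    t_send = 0.000000                 # s, reading on the local clock at send
    t_receive = 0.004000              # s, reading when the echo returns
    distance = c * (t_receive - t_send) / 2
    print(distance)                   # 599_584.9.. m, the one-way distance
    # The clock tick itself is a fixed round trip over a known length L:
    L = 0.15                          # m, source-to-mirror spacing
    tick = 2 * L / c                  # ~1.0e-9 s per round trip
    print(tick)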
Exercises [multivariable analysis]
• Feynman claims that the absolute clock cannot be had but, in addition to misinterpreting the Michelson-Morley experiment, he also ignores the simpler case of the light path being parallel or antiparallel to the velocity of the clock device. You conclude (pick all that apply):
• If Feynman were to put light's path (anti)parallel with the direction of movement then general relativity is out. [Fortunately for students the perverted clock is in his first chapter on general relativity.]
• Classic communist science. Feynman toes the party line while wrong or false has nothing to do with it. ["My children, follow me to the black hole."]
• California is going to be carved off and sunk for trying to pervert supreme science [if you like Atlantis and the wrathful God angle.]
• Feynman was upset because he could not sell his ideas for the Final Fantasy
• Feynman was the agent of the aliens and the Deuce of Darkeyes was his handler. [If you like conspiracy then ignore Menzel because he was brainwashed at Harvard's INH branch (It Never Happened branch).]
• If you were a tenured professor and discovered that the absolute clock can be constructed, you could:
• Be a debunker of your fancy and enjoy it. [Cool if you do not use public funds. You get afflicted with swamp gas if you do.]
• Decide that since it is easier to corrupt science than to advance it, then let the hell run with it. [Eddington seconds that and Einstein sleeps like a baby.]
• Tell students there are great mysteries in the universe, particularly if one cannot make up his own mind
• Realize that if we can get there from here, they can get here from there. This gets you so riled you travel to Puerto Rico to see if you could blow up the alien base there. [Nah, with an attitude like that you would never get tenure.]
• Publish the truth. (Among the benefits is a reference clock for com networks allowing yet faster and more reliable transmissions because the reference can be decentralized. GPS can improve to inches.) You may want to have a new (and beneficial) explanation for the experimental result where a subatomic particle's decay time slows down with increasing speed. This is a fundamental question and you:
• .. publish only on the absolute clock and allow someone else to work on unstable cores and particles
• .. focus on what locality and nonlocality can and cannot do for you. [Start with the pendulum. Yep, there are not many people who understand pendulum mechanics and there are no scientists among those -- fixed pivot or not.]
• Write a Dear Colleagues letter to Hawking and Penrose and tell them where they can find their black holes
• Write a Dear John letter to the Pope to let him know that the falling into the space-time continuum resulted in the you-cannot-get-there-from-here syndrome. You indicate that the falling out with the Creator has now been fixed and request to be taken off the original sinner list
• Remember John Harrison
Other References
Michelson-Morley experiment and the absolute velocity of light link up to: Here, have some light with ether on top
Notes 12/1/03: Even if absolute rest could be found, it would be difficult to extend such a reference to a point where it may be needed. The ability, through lightspeed, to derive absolute time and space (distance) at any point in the universe and thus make it local is a much better way of building the formal part of the universe. Just because the absolute clock can be constructed does not mean a human is the clock maker the same way the Creator is.
The Creator makes the clock by creating formal systems through the application of informal -- that is, virtual -- methods. Because the absolute clock can be constructed at will by deriving it from the constant velocity of light, time is always a derivative and as such can never be a leading, or independent, variable. By itself time has no independent dimension, and time appears independent only when derived from a periodic -- that is, organized -- system.
Note 4/30/05: The present atomic clock reference is based on the half-life property of unstable materials such as Cesium. The claim that the time of such a clock is affected by gravity is correct inasmuch as the gravitation affects the stability (half-life) of the material. Gravity does not affect time per se, and laser-based clocks that derive time from the repeated bouncing of light across a fixed distance are not affected by gravity and remain absolute clocks.
DSSP Topics for October '03
Chi. Life energy from China. Serious fun is had by all
• Chi energy is supposed to be a life force permeating all things. The Chi energy concept originated in China and, you guessed it, there is Yin Chi and Yang Chi. Chi (Yin form) enters the human body along points that are acupuncture points. This is all very mysterious to the Western mind but, after some serious if quick thought, you conclude:
• Because Chi manipulation is happening at different monasteries in China with some degree of secrecy, there are many flavors in the use of Chi
• If Chi can help then Chi can also hurt. Chi could be turned into a deadly force
• If Chinese communists had their way they would want everyone to apply to have their Chi registered. But if either side refuses the registration then the person would have to leave the country. That is how the Chinese lost neikung
• Acupuncture needles are but little antennas that entice Chi to enter the body
• Chi are photons but they are smart photons [okay, smart photons can slow down]
• Chi are electrons [okay, if everybody believed this then the Tao symbol would not work]
• Chi are both photons (Yin) and electrons (Yang)
• Chi is a new particle or vibration that has polarization to explain Yin and Yang
• Yang is real and Yin is virtual (i, the square root of minus one)
• Two of the above
• Take two aspirin and call me -- I mean my nurse -- call my nurse in the morning
DSSP Topics for September '03
What's Wrong Now, Algebra? What the..
• The definition of i is a square root of minus one, or i² = -1. Dividing both sides by i gives i = -1/i. Yet last month we also said that i = 1/i. We have two different equations, both claiming to be correct, but they differ by a minus sign. What gives? Algebra?
Take both equations: [1] i = -1/i and [2] i = 1/i
Use proper algebra rules as follows: In the first equation multiply both sides by i. In the second equation, square both sides. You get:
From [1]: -1 = -1
From [2]: -1 = -1
By using proper algebra rules (multiplying both sides by an equal amount), two different equations are now the same and correct. A simpler way of seeing it is as follows: Take i = 1/i. If you square both sides the equation is correct. If you multiply both sides by i the equation is incorrect. You conclude:
• Algebra is too old by now because it was done by somebody named Al in the 12th century
• This guy Al also copied Indian numerals to be passed on as Arabic numerals, but he did not copy zero at first because he thought zero was not needed. Maybe Al did not copy the algebra rules in full
• Save this example to explain low SAT scores, Mars probes and shuttle crashes, etc.
• Teacher does not have a clue. Make a bet?
• Algebra rules should be changed. Somebody ask Al Gore, the inventor of algebra. [Or was Al Gore the inventor of the algorithm? -- with apology to Al Khowarizmi.]
Note {12/1/04}: So far nobody has figured out what breaks the algebra. What kind of number is i? Or, what makes you think i is a number? [If you are familiar with the ancient Egyptians, think eye of Horus. If you are into Mayan mythology, think Hunab-Ku.]
DSSP Topics for August '03
Atomic versus Free Electron -- Get the electron straight and get the photon straight, too
• The Compton effect is the central reference for photons presumably hitting electrons and exchanging their momentum in the process. Is that so? There is a remnant of a conspiracy here, because the Compton effect was improperly claimed as an example of a photon carrying the real punch of the real momentum. If you take a physics course you will get the dose of the 'photon-has-momentum' doctrine via the Compton effect. The Compton effect is invariably described as a lone electron being whacked by a photon, each going their separate ways with classically apportioned and modified momentum. But you know that the Compton effect results from photons being directed at a piece of crystal, and you know an atomic electron is bound to the core and can behave differently than a free electron. You also know that free electrons were available in Compton's time inside the early CRT (Crookes tube). You now walk up to your teacher and have him pick the right answer(s) out of the following:
• Compton could not afford free electrons and so he used atomic electrons
• The Compton effect would never work with free electrons and that is why Compton used atomic electrons -- but he did not mind claiming it makes no difference
• Photons do not scatter from free electrons. Nobody performed a scattering-on-a-free-electron experiment (using a beam in a CRT, for example) because, you guessed it, photons and free electrons do not scatter when they interact with each other. There are cloud chamber experiments by C.T.R. Wilson, but a cloud chamber is not a vacuum and photons, particularly X-rays, move electrons by ionizing molecules -- photons stripping electrons from atoms has never been the same thing as hitting a free electron
• A photon can move an atomic electron to a higher energy state (orbital) by being absorbed, and then the same or another electron drops to the now-vacant state issuing another (weaker or stronger or the same) photon -- a different mechanism than the presumed whacky billiard ball photon
• A free electron has a de Broglie wavelength, Heisenberg uncertainty and Schrödinger evolution. A photon cannot possibly whack a free electron
• Oh no, Dr. Bill! The free electron is a wave. NASA thinks this is too complicated for college kids and NASA preference runs for sling shots anyway. [No joke, NASA's Institute for Advanced Concepts likes their sling shot really big -- and institutional.]
• You the teacher were taught the photon is nothing but a little mass ball and that is where physics ends [It's your lucky day. If nobody understands quantum mechanics then you are included]
• The free electron is a wave and the photon interacts with it as another wave [pick this one -- it's close enough for partial credit]
• A photon cannot be divided. So how can a photon suddenly have a different energy value unless it is an altogether different photon
• Compton got his prize and you are now stuck with his erroneous interpretation. But the students are paying for it and that takes the pain away..
• Bottom line: Compton effect is a case akin to the photoelectric effect where the atomic electron is displaced by absorbed photon and a neighboring electron fills in the empty state while emitting another photon [Don't worry about Nobel -- they either fix it or they'll merge with Oscars, the Royal Swedish flying meatballs chefs] • We can talk about Compton all day long but directing a laser at a mirror could measure the photonic pressure directly -- and this has not been done because.. [A: Light's pressure does not exist; B: NASA is busy putting out their version of the story that the shuttle is still flying and pictures to the contrary are a hoax -- besides, they have equations to prove it; C: All of the above] • But what if somebody sees through the hype and goes after the truth? There are some cool opportunities in QM Ivsin's question with an answer and then some: Which number is the same as its reciprocal besides 1 and -1? A: i . 1÷1=1, 1÷-1=-1, 1÷i=i (square both sides) Big deal? Sure is if Planck constant h is really a virtual number hi [Hello!]. Then, suddenly {Aug 1, 2003}, the momentum of a photon becomes virtual, as it has always been DSSP Topics for July '03 Relativity Postulate Is Neither -- It Is Not Even Wrong • General relativity postulates that all motion is relative and, therefore, a person cannot tell the difference between a stationary and a moving object. At the very least the relative motion is supposed to be sufficient. Well, every accelerating electron radiates electromagnetic field and all things wireless work because of that. If you apply the relativity presumption (that all motion is relative) then the accelerating frame of reference passing a stationary electron will elicit electromagnetic radiation from such electron. Also, frames of reference accelerating at different rates elicit different amounts of energy from the electron the references are passing by. This is such an obvious nonsense you conclude: • Simplistic and incorrect assertions of general relativity not only make the theory a farce but those who accept it got suckered into doing pseudoscience [governments make good suckers because they see nonsense as power exercise] • Relative motion is a special case of motion that cannot be generalized and can lead to erroneous results. [Newton has it right. Poincaré could do singularity with math and he was looking for something physical to back him up -- doing science as fantasy first and looking for reality second.] • General relativity is not even wrong, it is fraud. You can get a refund and a smart lawyer can get plaintiff triple damages on the fraud part • Physics, both particle and astro, has become politicized with broad streaks of fraud. For instance, solar sailing is easy to prove as fraud, both through faulty and purposely corrupted logic, by willful abstention from validation, and by active avoidance of validation (NASA Institute for Advanced Concepts an excellent example). This can escalate if human lives were endangered -- not necessarily by faulty engineering but by fraudulent science and management • Proposals to government that are based on general relativity are fraudulent. Gravitation mechanics are quantum based and require unique and straightforward approach. Proposals to measure gravitation waves, for example, are but a cover story to get, spend and possibly divert funds Note {Aug 1, 2003}. I got a note on how Newton explained the need for absolute reference. 
Newton used a spinning bucket with water moving up the sides and the idea is as follows: If you are on a merry-go-round you know you are spinning and when you stop you know you are not spinning -- while the stationary or spinning frame of reference has nothing to do with it. The spinning object accelerates toward the center of rotation and so any acceleration can be used to disprove the relativity postulate. Linearly accelerating object or linearly accelerating frame of reference can thus be differentiated -- with a bucket full of water no less. (If the bucket overflows you are accelerating. If not, it is the frame of reference that is accelerating.) • Now you have two choices. Spin it or go quantum. The spin starts when somebody tries to explain UFO-like behavior in general relativity terms. When a ship (used to be called a rocket) disappears, some may say that the ship just goes into a black hole and then it can appear somewhere else. You know that: • It is only a story because black hole's definition is infinite mass that crushes everything and nothing can survive, including the theorist. Appearing suddenly someplace else also violates the [are we there yet] space-time continuum • It is only a story because some people just like to get confused and love to blabber on. General relativity is built on light's pressure that never existed and one can be truly confused for a long time -- a lifetime, hundred years perhaps • Academia can be a keeper and defender of nonsense • General relativity proponents have way too much invested in this and they do not mind breaking the rules of the theory while being its proponents at the same time. These guys cannot deliver because they know their foundation is fraudulent but they are looking for a story to sell [Let me see those comics..] • Challenge any and all funding based on general relativity. Money available for space work is well spent someplace else and by people who can validate what they espouse. Gravitation waves measurement, solar sailing, and neutrino detection are all duds. It is easy to challenge all programs that at some point rely on light's pressure at reflection such as 'light has momentum' or 'energy has inertia' [Tell Hawking Newton wants his chair back] • Anybody who argues anything based on general relativity can be trashed at any time • It is time for NASA breakup -- a sight to behold General relativity also builds on the presumption that light carries real momentum and exerts pressure at reflection. Light is then presumed subject to gravitation (black hole, etc.). This is not possible (Light Cannot Push Mirror, But..). {July, 2005} In this date's monthly topic, relativity postulate is being disproved even for objects that are moving at linear (constant) speed. The energy imparted on particle stays with the particle. The proof is in measuring different wavelengths that are commensurate with energies imparted onto particles. Change in the frame of reference causes loss of mathematical ability to describe reality. DSSP Topics for June '03 Spectacles Before They Were Glasses • When Galileo introduced the telescope with glass lenses and encouraged everyone to look (he made several telescopes for local pols and even sent one to Kepler in Prague), the academia skeptics derided the whole optical enlarging idea of the telescope as something as unproven as the spectacle lenses. It is now circa 1610. One would think the learned men of academia have something relevant to say about new things. 
So, you are to guess how long the spectacles were around and used by people who did not give a damn about academic (dis)approval of spectacles: • 50 Years • 150 Years • 300 Years • It is apparent that the lag between the invention and its approval by academia is very long. In the spectacles case it is at least 300 years -- yes, 1280 is roughly the year of the spectacles introduction. ("..by the [end of] 13th Century they [spectacles] were common enough both in Europe and China." The Story of Light by Ben Bova.) You conclude: • Academia is irrelevant to economic growth. They are the last to come on board • Academia does not or cannot validate their equations because it can turn out the equations are wrong • Academia equations are in areas that are not practical or that are pure fantasy because that is yet another way equations cannot be validated [In billion years, ..] • Academia is but a priestly class that can make those who believe them feel good • Academia teaches mostly trash while making money on the side with other things • Academia's skepticism is the devil's advocacy here on Earth -- with every tenure the academic gets bad breath and a spatula for flipping other people's hamburgers DSSP Topics for May '03 Big Deal About Irrational Numbers • Over two and a half Millennia ago the integers and their ratios were thought to explain everything. Then it became apparent that some numbers could not be made by rationing and there was a crisis of sorts because of these -- irrational -- numbers. You, being knowledgeable in the ways of the world, had a meeting to resolve the problem. You reach one of the following conclusions: 1. Keep the irrational numbers the way they are and let them have the infinite magnitude (mantissa, precision) they ask for. There are rational people and there are irrational people, and it takes all kinds 2. All irrational numbers must be bounded before they manifest in the real world as real numbers. Why, even the easiest example of a diagonal of a square must be bounded in magnitude before anyone could possibly draw it on paper! • There was this guy named Cantor and he thought the answer #1 above was correct. He set out to prove it, and: • Had gone nuts • Proved that one kind of infinity could fully contain another infinity with room to spare for more infinities • All of the above [If he did not go nuts he would still be at it today] More on Numbers: Living With Numbers DSSP Topics for April '03 Get Prize With Photons -- Or, How Science Went Terribly Wrong But The Carrot Was There To Cash In On It • Around 1900 it comes to pass that light's energy is proportional to its frequency. (Higher frequency corresponds to higher energy.) Also, electrons are released from certain metals when exposed to light but only if light has frequency (energy) of at least f0. It then becomes apparent that each electron was released by an individual pocket (photon) of light. What partitions light into photon pockets (or Newton's corpuscles) is the Planck constant h and so the energy of each photon that is imparting energy onto each electron is proportional to hf0. To win the prize, you: • Formulate the equation that says the kinetic energy of the ejected atomic electron is not more than hf0 and you subtract a bit of work w to explain heat. You claim that photon knocks the atomic electron out of orbit but you do not mention the fact that light can never impart kinetic energy to free electron(s). 
This fibs the government (and Nobel committee) into believing that they can push payloads with photons. Government is on a hook for a myth. It's early Twentieth Century and the prize is yours. [If you are into mythology you would say: 'The corrupted mind spoke the corrupted word. And the word was willed to make it so because the damned said it was so! The gateways of hell creaked open with black holes offering to crush the light in the onslaught of infinite masses.' Wars, communism, etc.] • Formulate the equation in the framework of momentum conservation where the imparted photon energy is shared equally between the electron and the atomic core. Kinetic energy of the ejected atomic electron is then not more than ½hf0 where the other half is imparted to the core and accounts for heat. Also mention the fact that light cannot impart energy to free electrons or onto mirror surface because light has virtual momentum. You say there is nothing spooky about light, nature, or virtual momentum. In fact, the conservation of momentum framework allows you to explain selective absorption of photons by gas. [Did not happen then. If it did there would be no black hole legacy.] • After you get the prize you write a [strangely out of character] paper that says one can trigger emission of light (lasing) with photons that match the energy of a normally (spontaneously) emitted photon. Later you say that nonlocality of light is spooky. You get some academic credit for inventing a laser, although other people fight it out for the laser (and maser) profits. Meanwhile: • NASA continues to spend money on photonic pressure projects such as solar sailing. This will never work but you are dead by then • NASA cannot figure out that the absence of photonic pressure at reflection is the same thing as absence of photonic pressure at radiation (emission) and this means that laser has no recoil. NASA goes on pretending nothing has changed because pretend-physics is now a part of NASA heritage • NASA uses lasers and after directing laser at mirror NASA cannot validate the photonic pressure on mirror. NASA announces that light cannot push mirror, which challenges physicists to advance understanding of light. [Very straightforward and so it did not happen. After hundred years academia would have to stop collecting teaching and consulting fees while perpetuating a myth. Gravitation may be NASA's middle name but by now NASA managers know how to be sincerely ignorant and keep the status quo.] DSSP Topics for March '03 Reading Past Records • Some people can describe an event or a place that can be verified to be historically correct -- even though the person had no previous knowledge of the event or the place. Oftentimes such readouts can be repeated by another person. These phenomena are: • Fraud or conspiracy. People make up the story for fun or profit • Proof of time travel. People go back to some point in time and can mess it up for us here in the present • Proof of many worlds or parallel universes. There are a large number of independent realities going on and some people can hop between them • Knowledge or data is in a form of energy that is indestructible. Some people can read the data but it is read-only -- much as if you were looking at old movies DSSP Topics for February '03 Light Mill Moves and Rotates • When the light mill is illuminated by sunlight or bright lamp the mill rotates such that the dark paddles recede. When you place the mill in the freezer the mill reverses rotation and bright paddles recede. 
This can be explained as: • Light is absorbed more by dark paddles and gas trapped in the surface shoots out as it outgasses. In the freezer, gas is reabsorbed and as it does the molecules pull on dark paddles. • Gas flows up along the dark side of paddles and this creates little vortices at the edges where dark and bright surfaces meet. In the freezer something else is happening • Dark paddles are hotter and neighboring gas molecules absorb more heat. Heat absorption expands gas when atoms in molecules bounce away (while conserving momentum) and push dark paddles away. In the freezer gas molecules radiate heat into dark paddles more readily than into bright paddles. Gas cools off quicker at the dark paddle surface and gas contraction pulls dark paddles (atoms in molecules bounce together). [Think Ampere: Every molecule has two atoms. Ampere was French but the Brits can give a little. It's the bouncing atoms in a molecule that account for gas pressure, not bouncing molecules.] • When the mill is left in the freezer to cool down and then removed, it begins to rotate such that dark paddles recede even though it is not illuminated. If the mill is moved the paddles rotate much faster. When the mill no longer moves its rotation returns to the previous speed. The mechanism is: • Gas molecules move closer to the surface and the expansion of gas at dark paddles happens faster • Light bounces around and gets utterly confused. The mill then rotates like crazy DSSP Topics for January '03 Electron On The Move • Free electron is represented by a wavefunction described by the Schrödinger equation. Since the coefficients are composed of virtual numbers, • Electron is a (zero-dimensional) dot with attributes described by the Schrödinger equation. • Electron diffuses according to the Schrödinger equation and the electron's position is in a superposed state. • Because the electron wavefunction is the electron's probability of appearance, a change in wavefunction changes the electron's final destination (changes the electron's path). Therefore, • One can move the electron with an electromagnetic field. • One can move the electron by modifying its wavefunction computationally -- that is, logically.
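To make the "electron diffuses according to the Schrödinger equation" item concrete, here is a minimal numerical sketch (an editorial addition, not from the original page; natural units hbar = m = 1 assumed) that propagates a free Gaussian wave packet with the split-step Fourier method. The packet's r.m.s. width grows in time, which is exactly the dispersive "diffusion" of a free electron:

```python
import numpy as np

hbar = m = 1.0                          # natural units (assumption)
x = np.linspace(-50, 50, 2048)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)   # angular wavenumbers

# Initial Gaussian packet of width sigma with mean momentum k0.
sigma, k0 = 1.0, 2.0
psi = np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

dt, steps = 0.01, 500
kinetic_phase = np.exp(-1j * hbar * k**2 / (2 * m) * dt)
for _ in range(steps):                  # free particle: kinetic term only
    psi = np.fft.ifft(kinetic_phase * np.fft.fft(psi))

prob = np.abs(psi)**2
mean = np.sum(x * prob) * dx
width = np.sqrt(np.sum(x**2 * prob) * dx - mean**2)
print(f"r.m.s. width after t = {dt*steps}: {width:.2f} (started at {sigma:.2f})")
```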
Collective excitations of dipolar gases based on local tunneling in superlattices Lushuai Cao Ministry of Education Key Laboratory of Fundamental Physical Quantities Measurements, School of Physics, Huazhong University of Science and Technology, Wuhan 430074, People's Republic of China    Simeon I. Mistakidis Zentrum für Optische Quantentechnologien, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany    Xing Deng Ministry of Education Key Laboratory of Fundamental Physical Quantities Measurements, School of Physics, Huazhong University of Science and Technology, Wuhan 430074, People's Republic of China    Peter Schmelcher Zentrum für Optische Quantentechnologien, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany The Hamburg Centre for Ultrafast Imaging, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany July 8, 2019 The collective dynamics of a dipolar fermionic quantum gas confined in a one-dimensional double-well superlattice is explored. The fermionic gas resides in a paramagnetic-like ground state in the weak interaction regime, upon which a new type of collective dynamics is found when applying a local perturbation. This dynamics is composed of the local tunneling of fermions in separate supercells, and is a pure quantum effect with no classical counterpart. Due to the presence of the dipolar interactions the local tunneling is transported through the entire superlattice, giving rise to collective dynamics. A well-defined momentum-energy dispersion relation is identified in the ab-initio simulations, demonstrating the phonon-like behavior. The phonon-like characteristic is also confirmed by an analytical description of the dynamics within a semiclassical picture. I. Introduction Collective excitations constitute a fundamental concept in condensed matter physics which is at the origin of various phenomena in the field Anderson (); Simon (). Remarkable examples of collective excitations are phonons, magnons or plasmons. Among them, phonons describe the collective dynamics of the atomic vibrations in the crystal lattice, and play a key role for different fundamental effects in condensed matter physics, such as superconductivity Bardeen () or the thermal transport in solid matter Lepri (); Li (). Generalizations of the concept of a phonon can be found in ion traps Porras (); Bissbort () or ultracold dipolar quantum gases Pupillo (); Ortner (). Phonons in ion traps refer to the collective dynamics of the ions' motion around their equilibrium positions in e.g. Paul traps. In dipolar quantum gases, the phonon describes the coupling between the local vibrations of dipolar atoms in a self-assembled chain or a lattice. Generally speaking, phonons in crystals, ion chains and dipolar lattices all refer to the collective dynamics of vibrations, which have a direct analogue in the motion of classical vibrators, and these phonons can be seen as a direct extension of the classical dynamics of a vibrating chain to the quantum regime. Here, we introduce a new type of collective dynamics in ultracold dipolar gases in a one-dimensional superlattice. The key ingredient for this collective dynamics is a local tunneling, which possesses no classical counterpart. Our investigation is mainly based on ab-initio simulations, complemented by a semiclassical analytical treatment.
The simulations are performed by employing the numerically exact Multi-Layer Multi-Configuration Time-Dependent Hartree method for identical particles and mixtures (ML-MCTDHX) MLX (), which has been developed from MCTDH Meyer (); Beck (), ML-MCTDH Wang (); Manthe () and ML-MCTDHB Kronke (); Cao (), and has a close relation to MCTDHB(F) Alon (); Alon1 (); Axel (). The ab-initio simulations of the corresponding ultracold quantum gases take into account all correlations, and can unravel new effects beyond the predictions of mean-field theory and, for lattice systems, beyond the single-band Bose-Hubbard model. Representative examples along this line are the loss of coherence and the decay of contrast of different types of solitons Streltsov (); Kronke1 (), and higher band effects on the stationary Alon2 () or dynamical properties Zollner (); Sakmann (); Cao1 (); Mistakidis (); Mistakidis1 (); Mistakidis2 () in optical lattices. To investigate the collective dynamics in the double-well superlattice, we employ ML-MCTDHX, which allows for a full description of the dynamics and proves the robustness of the collective dynamics against higher-order correlations and higher band effects. This work is organized as follows: In section II, we present an introduction to the detailed setup (Sec. II.A), the initial state preparation (Sec. II.B), the local effect of the perturbation that drives the system out of equilibrium (Sec. II.C), and the global collective dynamics induced by the local perturbation (Sec. II.D). We also supply a semiclassical analytical description of the collective dynamics (Sec. II.E). The discussion of our results and the conclusions are provided in section III. II. Collective excitations based on local correlation-induced tunneling II.1 Setup We consider a dipolar superlattice quantum gas (DSG) composed of spin-polarized fermions confined in a one-dimensional double-well superlattice of supercells, with unit filling per supercell. All the fermions interact with each other by dipolar interactions. The Hamiltonian reads as follows: $H=\sum_{i}h(x_i)+\sum_{i<j}\frac{D}{|x_i-x_j|^{3}+\alpha}$. The first term refers to the single-particle Hamiltonian $h(x)$, whose potential part models the double-well superlattice. This superlattice can be formed by two pairs of counter-propagating laser beams of wave vectors $k$ and $2k$, and the strength of the lattice can be tuned by the amplitude of the laser beams. We consider a finite-length lattice of supercells, and hard-wall boundaries are applied at the edges of the lattice, to allow only these supercells in our simulation. The second term in the Hamiltonian models the dipolar interaction of strength $D$ between the fermions. In this work we consider the situation that all the dipoles are polarized along the same direction, perpendicular to the relative distance between the fermions. To avoid in simulations the divergence of the interaction at $x_i=x_j$, an offset $\alpha$ is added to the denominator. The offset takes a rather small value, which is about eight times smaller than the spacing of the discrete grid points chosen within our simulations. More specifically, in the present work we focus on the situation where all fermions reside in different wells, in which case the distances at which a significant overlap exists are much larger than $\alpha$. Then, the offset is negligible. In order to investigate the collective excitations, the DSG is first relaxed to the ground state of $H$, and at $t=0$ a local perturbation is applied to a single supercell of the lattice, i.e., the outermost left cell is taken out of equilibrium.
This perturbation is intended to induce a local dynamics in the left cell, and is applied only for a short time period, to avoid affecting the global dynamics on a long time scale. We model the local and temporal perturbation as a step potential gated in time by the Heaviside step function: it acts only on the outermost left supercell and lasts only for the duration of the pulse. The double-well superlattice and the local perturbation are sketched in figure 1 for five fermions in a five-cell superlattice. In the simulation, we render the Hamiltonian dimensionless, which is equivalent to rescaling the energy, space and time in the corresponding natural units. The setup discussed above can be realized in ultracold atom experiments. Moreover, dipolar quantum gases have become a hot topic in the field of ultracold atoms and molecules Lahaye (); Baranov (). Their rich phase properties Yi (); Capogrosso (); Hauke (); Kadau (); Barbut () and perspectives in, for instance, quantum simulations Micheli (); Gorshkov (); Kaden () have inspired extensive studies of dipolar quantum gases. Experiments can nowadays prepare dipolarly interacting particles in lattices, due to the rapid progress in cooling atoms Zhou (); Olmos (); Baier () with large magnetic dipole moments and polar molecules in optical lattices Ni (); Deiglmayr (); Yan (); Guo (); Frisch (). Specifically, the double-well superlattice has been realized in experiments, and has become a widely used testbed for various phenomena, such as correlated atomic tunneling Folling (), generation of entanglement of ultracold atoms Dai () and the topological Thouless quantum pump Lohse (). The setup discussed and analyzed here is therefore well within experimental reach. Figure 1: Sketch of the dipolar fermionic gas in a double-well superlattice of five cells. The blue and black lines show the double-well superlattice and the local perturbation modeled as a step function applied only to the leftmost site of the lattice, respectively. Five fermions (red dashed Gaussians) are loaded into the superlattice, each of which is localized in a different supercell and occupies the two sites in the cell for the initial state.
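The exact lattice parameters are not legible in this copy of the paper, so the following sketch (an illustrative editorial addition, not from the paper) assumes the common bichromatic form of a double-well superlattice: a long lattice of wave vector k defining the supercells and a short lattice of wave vector 2k splitting each cell into a left and a right site, plus a temporal step perturbation on the leftmost site. All numbers are placeholders:

```python
import numpy as np

def superlattice(x, V_long=8.0, V_short=1.5, k=1.0):
    """Bichromatic double-well superlattice (assumed form).
    The k-lattice defines the supercells; the 2k-lattice raises a
    barrier at each cell center, splitting it into two sites."""
    return V_long * np.sin(k * x) ** 2 + V_short * np.cos(2 * k * x) ** 2

def perturbation(x, t, height=0.5, t_off=2.0, x_edge=-2 * np.pi):
    """Temporal step potential on the leftmost site of the leftmost cell."""
    return height * (x < x_edge) * (t < t_off)

# Five supercells between hard walls at x = -2.5*pi and x = +2.5*pi,
# matching the five-cell sketch of figure 1.
x = np.linspace(-2.5 * np.pi, 2.5 * np.pi, 501)
V = superlattice(x) + perturbation(x, t=0.0)
```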
II.2 Paramagnetic-like initial state The dynamics investigated in the present work strongly depends on the choice of the initial state, which is prepared as the ground state of the Hamiltonian in a particular parameter regime. It has been shown Yin () that the DSG system can be mapped to an effective Ising spin chain model under the so-called pseudo-spin mapping. Then, within the Ising spin picture the ground state of the system undergoes a transition from a paramagnetic-like state to a single-kink state for increasing dipolar interaction. The paramagnetic-like state refers to the pseudo-spins polarized in the same direction due to an effective magnetic field, whereas the single-kink state is composed of two effective ferromagnetic domains aligned in opposite directions. In the present work, we focus on the dynamics in the weak interaction regime, where the DSG system initially resides in the paramagnetic-like state. To comprehend and analyze qualitatively the initial particle configuration (characterized by the many-body state) we shall employ the notion of reduced densities. The one-body reduced density matrix is obtained by tracing out all fermions but one from the density operator of the N-body system, while the two-body density can be obtained by a partial trace over all but two fermions. Subsequently, the initial state can be characterized by the two-body and one-body correlations in the superlattice, as shown in figure 2. Figure 2(a) presents the two-body correlation of five fermions in a five-cell superlattice. The vanishing occupation along the diagonal direction in the two-body correlation illustrates that no two fermions (or more) occupy the same supercell, and each supercell hosts only one fermion, which is a Mott-like configuration. Being localized in separate supercells, the fermions can occupy the left and right sites of their cells simultaneously, giving rise to particle number fluctuations in these sites and in particular to a non-vanishing off-diagonal one-body correlation, as shown in figure 2(b). This non-vanishing one-body correlation plays a key role in the collective dynamics investigated in the present work. To a good approximation, the paramagnetic-like ground state (see also Appendix B) can be expressed as $|\Psi_0\rangle \approx \prod_{i=1}^{N}\frac{1}{\sqrt{2}}\left(|L_i\rangle+|R_i\rangle\right)$, where $|L_i\rangle$ and $|R_i\rangle$ denote the lowest-band Wannier states in the left and right site of the $i$-th supercell, respectively. Figure 2: (a) The two-body correlations and (b) one-body correlations of the initial state of five fermions in a five-cell superlattice. The two-body correlation illustrates that no two fermions can occupy the same supercell; each supercell hosts a single fermion. The off-diagonal of the one-body correlations indicates the delocalization of the fermion between the left and right sites of the supercell. The parameters used here correspond to a weakly interacting dipolar gas in a deep superlattice.
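In the two-site basis {|L>, |R>} of a single supercell, the delocalized single-fermion state above has a one-body density matrix with off-diagonal elements of 1/2, which is the intra-cell coherence visible in figure 2(b). A two-line numerical check (an illustrative editorial addition, not from the paper):

```python
import numpy as np

psi = np.array([1.0, 1.0]) / np.sqrt(2)   # (|L> + |R>)/sqrt(2) in one supercell
rho = np.outer(psi, psi.conj())           # one-body density matrix
print(rho)   # [[0.5 0.5]
             #  [0.5 0.5]] -- off-diagonals are the non-vanishing coherence
```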
II.3 Correlation-induced tunneling in a single supercell To drive the system out of equilibrium from the initial state, we apply a local perturbation to the leftmost supercell. The perturbation is intended to induce a local tunneling of the fermion in this supercell and is modeled by a step function applied to the left site of the supercell. In this subsection we describe the local dynamics in this supercell under the perturbation. The step function introduces an energy offset between the left and right sites of the cell. Normally, an energy offset inhibits the tunneling between the two sites, whose amplitude is reduced as the offset is increased. When a particle is initially prepared in a superposition state involving the two sites equally, however, the offset can enhance the tunneling of the particle in a narrow parameter window of the offset strength. The tunneling amplitude becomes maximal when the strength of the perturbation matches that of the hopping between the two sites. The explanation of such an unusual tunneling is as follows: In the normal case, the energy offset breaks the resonance between the two sites in terms of the potential energy, and thus it inhibits the tunneling between the two sites. When the initial state is chosen as a superposition state of the particle occupying the two sites equally, a finite kinetic energy is stored in the system. The finite kinetic energy can then compensate the resonance breaking of the potential energy and promote the tunneling. The maximum compensation is reached when the energy offset matches the initial kinetic energy, which can be realized when the strength of the offset equals the hopping strength. In the double-well system, the kinetic energy coincides with the one-body correlation between the two sites, up to a factor determined by the hopping strength, and we term this unusual tunneling correlation-induced tunneling (CIT) Cao2 (), to indicate the connection between the kinetic energy and the one-body spatial correlation. Moreover, the CIT can also be viewed as a Rabi oscillation between two states, where the tilt couples these two states and determines the corresponding Rabi frequency. In figure 3 we illustrate the CIT of a single particle confined in a double-well potential with a temporal energy offset. To proceed we calculate the population of each well, e.g. of the right well, from the one-body density. As shown in the figure, initially the particle occupies both sites with equal probability, and after the perturbation is applied the probability oscillates from the right to the left well, indicating a tunneling between the two wells. When the perturbation is turned off (the turn-off time is marked by the dashed red line in figure 3), we observe that the tunneling persists. Turning to the whole superlattice, it can be expected that the CIT also takes place in the left supercell when the same perturbation is applied to a double-well supercell. Figure 3: Density oscillation of a single particle in a double well with a local perturbation applied to the left well. The local perturbation is applied for a short time period, and the red dashed line marks the time when it is turned off. The double well is taken from a single unit cell of the superlattice.
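A minimal two-level sketch of the CIT (an illustrative editorial addition, not from the paper; site basis {|L>, |R>}, hopping J, tilt eps, units hbar = 1). Starting from the equal superposition, the tilt drives a Rabi-like oscillation of the site populations; in this convention the oscillation amplitude peaks at eps = 2J and vanishes for both zero and very large tilt, reproducing the narrow enhancement window described above:

```python
import numpy as np
from scipy.linalg import expm

J = 1.0
psi0 = np.array([1.0, 1.0]) / np.sqrt(2)        # equal superposition of |L>, |R>

def population_swing(eps, tmax=20.0, n=400):
    """Peak-to-peak swing of the right-site population under tilt eps."""
    H = np.array([[eps, -J], [-J, 0.0]])         # tilted two-site Hamiltonian
    pR = [abs((expm(-1j * H * t) @ psi0)[1]) ** 2
          for t in np.linspace(0.0, tmax, n)]
    return max(pR) - min(pR)

for eps in [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"tilt eps = {eps:3.1f} J -> oscillation amplitude {population_swing(eps):.3f}")
```

With zero tilt the equal superposition is an eigenstate and nothing oscillates; with a very large tilt the sites are far off resonance; in between lies the window in which the offset enhances the tunneling.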
II.4 Collective dynamics of local CIT Having introduced the initial state and the local dynamics of the CIT, let us proceed to the global dynamics of the entire DSG system subjected to a local perturbation. Our main finding can be summarized as follows: Once the local perturbation induces the CIT in a single supercell, the leftmost one as considered here, the dipolar interaction can transport the local CIT to other cells. In this manner, all the fermions, while remaining well localized in their separate supercells, perform local CIT between the two sites of their supercells, giving rise to a collective dynamics of local CIT in the DSG system. Moreover, the collective dynamics resembles phonon-like excitations, with a well-defined momentum-energy dispersion relation. In the following, we shall demonstrate the collective dynamics of local CIT employing ab-initio simulations. Figure 4: (a) The density evolution of a three-fermion system in a three-cell superlattice, and (b) in a plain triple well, under a corresponding local perturbation. The dashed lines attached to the right of the figures illustrate the corresponding trapping potentials. The arrows mark the position where the local perturbation is applied for both cases. The same parameters are used here and in figure 5. Firstly, we simulate the collective dynamics of fermions confined in a three-cell superlattice, a 3F3C (3 fermions in 3 cells) system, and compare it with the phonon of three dipolar-interacting fermions in a plain triple well. In figure 4(a) we show the one-body density oscillation of the 3F3C system under the perturbation. We observe that CIT takes place in all three supercells, with no inter-cell tunneling between neighboring supercells. This collective dynamics of CITs is different from the dipolar phonon as well as the ion phonon, which refer to the collective dynamics of local classical vibrations of dipolar atoms or ions confined in a lattice, respectively. In figure 4(b) we also present the one-body density of the dipolar phonon of three fermions in a plain triple well. In the dipolar phonon case, a local tilt induces a dipole oscillation of the fermion in the left well, and the dipolar interaction transports the local density oscillation to fermions in remote wells, giving rise to the collective phonon dynamics. Firstly, a similarity can be drawn between the collective CIT and the dipolar phonon, where both cases are composed of local dynamics coupled by the dipolar interaction. On the other hand, the distinction between the two collective dynamics is also obvious: The dipolar phonon (as well as the ion phonon) is composed of local oscillations of particles and can be seen as a direct extension of classical phonons to the quantum regime. Meanwhile, the collective dynamics of CIT has no counterpart in the classical world and is a pure quantum effect. Figure 5: The density evolution of (a) a five-fermion and (b) an eleven-fermion system of unit filling, under the local perturbation. The dashed yellow lines in both figures illustrate the finite transport velocity of the local CIT through the superlattice. The dashed lines (see the corresponding slopes) also suggest an equal transport velocity of the CIT in both systems, implying that the transport velocity is independent of the system size. To demonstrate the generality of such collective dynamics with respect to the size of the superlattice we show that the same behavior is evident in 5F5C (5 fermions in 5 cells) and 11F11C (11 fermions in 11 cells) systems, as shown in figures 5(a) and 5(b), respectively. In both figures we observe that the collective dynamics of local CIT indeed takes place in bigger systems, indicating that it is not restricted to a particular size. Moreover, in the longer lattices, we observe more clearly how the local CITs are transported through the whole system: they are not simultaneously excited along the lattice once the perturbation is applied, but are transported with a finite velocity from the left supercell to remote ones. The transport of local CIT with a finite velocity is illustrated in both figures with the yellow dashed lines, where one can even observe the reflection at the edges of the lattice. In this way, the collective dynamics of local CIT in the DSG systems also serves as a testbed for the light-cone-like behavior of two-body correlations. It is known that all phonon-like collective excitations share the common property of a well-defined momentum-energy dispersion relation, where the collective dynamics can be decomposed into a set of momentum modes and each mode has a well-defined energy, i.e., a characteristic frequency. It is interesting to investigate whether the collective dynamics of DSG systems is also associated with a dispersion relation. For this purpose, we calculate the density difference $d_i(t)$ between the left and right sites of each supercell, and further define a set of $q$-modes from the $d_i(t)$. To verify the corresponding dispersion relation we then calculate the spectra of the $d_i$ and of the $q$-modes. We show the spectra of the $d_i$ for 5F5C and 11F11C in figures 6(a) and 6(b), respectively, and the corresponding spectra of the $q$-modes in figures 6(c) and 6(d).
These figures demonstrate, firstly, that the spectra of the $d_i$ show main peaks for the $N$-fermion system, each of which corresponds to one $q$-mode, indicating that the collective dynamics can indeed be decomposed into $q$-modes. More importantly, each $q$-mode is associated with a dominant frequency peak, as shown in the spectra of the $q$-modes, and this directly verifies a well-defined momentum-energy dispersion relation in the collective dynamics of the CIT. Further, we also observe some weakly pronounced peaks lying near zero in the spectra, which are close to the values of the frequency differences between the corresponding main peaks. These peaks are attributed to a weak nonlinear effect similar to phonon-phonon interactions. Figure 6: The frequency spectra for a five-fermion (left column) and eleven-fermion (right column) system of unit filling. The upper and bottom rows correspond to the spectra of the $d_i$ and of the $q$-modes, respectively. The dashed vertical lines demonstrate a one-to-one correspondence between the peaks in the $d_i$ spectra and the peaks of particular $q$-modes. The arrows in figures 6(c) and 6(d) mark the tiny peaks in the low-frequency regimes, which are understood as a nonlinear effect of frequency subtraction. As a result of the low resolution of the spectra with respect to the dense packing of the peaks in the 11F11C case, some peaks are not well resolved in figure 6(d). II.5 Semiclassical description of the collective CIT dynamics In this section, we supply a semiclassical description of the collective CIT excitation in terms of the intra-cell density differences. The starting point is the second-order time-derivative equation (4), which is derived simply by applying the equation of motion for expectation values twice. The major task of solving equation (4) is then to find a proper expression for the Hamiltonian (for more details see Appendix B and in particular equation (B1)) and to solve for the dynamics. We adopt the lowest-band Hubbard model for the Hamiltonian and apply degenerate perturbation theory to derive a set of closed equations; a detailed derivation is given in Appendix B. The final form of the equations is given by equation (5), whose coefficients refer to the intra-cell hopping and the dipolar interaction strength, respectively. Equation (5) is the semiclassical version of equation (4), and it is clear that equation (5) resembles that of classical vibrating chains, where the density difference plays the role of the local displacement of the $i$-th vibrator. The general solutions of equation (5) correspond to a set of eigenmodes, and a particular solution is given by a superposition of these eigenmodes, equation (6), with amplitudes and phases determined by the initial state. The semiclassical equations (5) and their eigenmode solutions (see equation (6)) directly illustrate the phonon-like behavior of the collective dynamics of the local CIT in the DSG system.
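Equations of this vibrating-chain type can be checked numerically. The sketch below (an illustrative editorial addition; the coefficients a and b are placeholders for the combinations of hopping and dipolar strength, which are not legible in this copy) diagonalizes the chain s_i'' = -a s_i + b (s_{i-1} + s_{i+1}) and recovers the dispersion relation omega(q) of the open-chain sine modes:

```python
import numpy as np

N, a, b = 11, 1.0, 0.1     # chain length; a, b are placeholder coefficients
# Equation-of-motion matrix for s_i'' = -a s_i + b (s_{i-1} + s_{i+1}):
M = a * np.eye(N) - b * (np.eye(N, k=1) + np.eye(N, k=-1))

omega_num = np.sort(np.sqrt(np.linalg.eigvalsh(M)))   # eigenmode frequencies

# An open chain supports sine modes with quasi-momenta q_n = n*pi/(N+1);
# the tridiagonal matrix above has exact eigenvalues a - 2*b*cos(q_n).
q = np.arange(1, N + 1) * np.pi / (N + 1)
omega_ana = np.sort(np.sqrt(a - 2 * b * np.cos(q)))

print(np.allclose(omega_num, omega_ana))   # True: a well-defined omega(q)
```

Each eigenmode oscillates at its own frequency omega(q_n), which is the kind of momentum-energy dispersion relation seen in the spectra of figure 6.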
III. Discussion and conclusions In this work we demonstrated a new type of collective excitation in dipolar quantum gases confined in the double-well superlattice with a unit filling factor. The collective excitations manifest themselves as the coupling and transport of local CIT within each supercell. The local CIT is a pure quantum effect and has no classical counterpart, which endows the dynamics composed of these collective excitations with a pure quantum nature, instead of being a quantum correction to any classical dynamics. These collective excitations can also be generalized from the double-well superlattice to more complicated superlattices, where new properties of the collective dynamics can be engineered. For instance, the CIT in a double well possesses a single characteristic frequency, and in the spectrum of the collective dynamics a single band arises from this characteristic frequency. When the supercell is expanded to multiple wells, the number of characteristic frequencies of the local CIT will also increase, and each of these frequencies seeds a band in the spectrum of the collective dynamics of the local CIT, resulting in a multi-band spectrum. The tunability of the band structure by the supercell properties indicates a high flexibility in designing and engineering new properties of such collective excitations. Meanwhile, in the relatively strong interaction regime, one can expect more pronounced nonlinear effects, such as the scattering of the collective excitations, which, however, is beyond the scope of the current work. It is worthwhile to discuss the realizability and robustness of the collective excitations under realistic conditions. Firstly, these excitations are not restricted to fermionic dipolar gases, but can also be realized with bosonic dipolar gases, as the particles are localized in separate cells and the particle statistics plays almost no role here. For realistic implementations, the collective excitations may be blurred by effects of finite temperature, an additional external potential and imperfections of the filling factor. To observe collective excitations, it is required to cool the particles to the lowest band of the lattice. In previous experiments on double-well superlattices, this condition has been fulfilled for contact-interacting atoms, and with the fast progress in cooling dipolar lattice gases we expect this condition will also become feasible for our setup. In experiments, the confinement of lattice gases to a finite spatial domain is realized by an external harmonic trap, which will also introduce some constraints on the realization. At the bottom of the harmonic trap, it is possible to prepare a paramagnetic-like state, while at the edges deviations from the perfect paramagnetic-like configuration can arise. It has been shown that this edge effect will not change the global paramagnetic-like configuration Yin (), and we also note that it is now possible to compensate the extra harmonic trap with a dipole trap in experiments Will (), which can further relax the constraints. Finally, if the filling deviates from unit filling per supercell, holes or doublons can arise in the superlattice, which can scatter and couple to the collective excitations. New phenomena can be generated by such scattering and coupling, and we defer their exploration to future investigations. This work is dedicated to Prof. Lorenz Cederbaum on the occasion of his 70th birthday. The authors acknowledge the efforts of Sven Schmidt and Xiangguo Yin in the initial stage of the work. L. Cao is also grateful to Antonio Negretti for inspiring discussions on ion phonons and the conditions of realistic implementations. S.M and P.S gratefully acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG) in the framework of the SFB 925 "Light induced dynamics and control of correlated quantum systems".
Appendix A: ML-MCTDHX The Multi-Layer Multi-Configuration Time-Dependent Hartree method for multicomponent quantum gases (ML-MCTDHX) MLX () constitutes a variational numerical ab-initio method for investigating both the stationary properties and in particular the non-equilibrium quantum dynamics of mixture ensembles, covering the weak and strong correlation regimes. Its multi-layer feature enables us to deal with multispecies systems (e.g. Bose-Bose, Fermi-Fermi or Bose-Fermi mixtures) and multidimensional or mixed-dimensional systems in an efficient manner. The multiconfigurational expansion of the wavefunction in the ML-MCTDHX method takes into account higher band effects, which renders this approach suitable for the investigation of systems governed by temporally varying Hamiltonians, where the system can be excited to higher bands especially during the dynamics. Finally, within the ML-MCTDHX approach the representation of the wavefunction is performed by variationally optimal (time-dependent) single particle functions (SPFs) and expansion coefficients, which makes the truncation of the Hilbert space optimal when employing the optimal time-dependent moving basis. The requirement for convergence demands a sufficient number of SPFs such that the numerical exactness of the method is guaranteed. Therefore, the number of SPFs has to be increased until the quantities of interest acquire the corresponding numerical accuracy. In a generic mixture system consisting of atoms (bosons or fermions) of several species, the main concept of the ML-MCTDHX method is to solve the time-dependent Schrödinger equation as an initial value problem by expanding the total wavefunction in terms of Hartree products. Here each species state corresponds to a system of indistinguishable atoms (bosons or fermions) and describes a many-body state of the subsystem composed of that species' particles. Each species state is in turn expanded in terms of bosonic or fermionic number states, in which each atom can occupy time-dependent SPFs. The occupation-number vector contains the occupation numbers of the SPFs and obeys the total-particle-number constraint. Note that for the bosonic case arbitrary occupations are allowed, while for the fermionic case only occupations of zero or one are permitted due to the Pauli exclusion principle. In the present work, we focus on the case of a single fermionic species in one spatial dimension, where ML-MCTDHX is equivalent to MCTDHF. To be self-contained, let us briefly discuss the ansatz for the many-body wavefunction and the procedure for the derivation of the equations of motion. The many-body wavefunction is a linear combination of time-dependent Slater determinants (A3). The summation is performed over all possible combinations which retain the total number of fermions. In the limit in which the number of SPFs approaches the number of grid points, the above expansion becomes numerically exact in the sense of a full configuration interaction approach. Another limiting case of the used expansion refers to the case that the number of SPFs equals the number of particles, being referred to in the literature as Time-Dependent Hartree-Fock (TDHF). The Slater determinants in (A3) can be expanded in terms of the creation operators for the orbitals, satisfying the standard fermionic anticommutation relations. To determine the time-dependent wavefunction, we have to find the equations of motion for the coefficients and the orbitals (which are both time-dependent).
To derive the equations of motion for the mixture system one can employ various approaches such as the Lagrangian, McLachlan or the Dirac-Frenkel variational principle, each of them leading to the same result. Following the Dirac-Frenkel variational principle we can determine the time evolution of all the coefficients in the ansatz (A3) and the time dependence of the orbitals. In this manner, we end up with a set of non-linear integro-differential equations of motion for the orbitals, which are coupled to the linear equations of motion for the coefficients. These equations are the well-known MCTDHF equations of motion Alon (); Alon1 (). Within our implementation, a discrete variable representation (DVR) scheme is applied, and in particular we adopt the sin-DVR, which intrinsically implements hard-wall boundary conditions. Furthermore, for the cases of three and five fermions, six and ten SPFs have been used, respectively, i.e. the number of SPFs being twice the number of particles. As it turned out, the number of significantly occupied natural orbitals for both cases, reflecting the convergence of the simulation with respect to the number of SPFs, is equal to the number of particles. This indicates that one just needs to use as many SPFs as there are fermions in order to reach a converged simulation. Finally, in the simulation of the eleven-fermion case, only eleven SPFs have been used. Appendix B: Semiclassical equations We first rewrite the Hamiltonian of equation (1) in Hubbard form. Upon the lowest-band Wannier states, we define a set of basis vectors given by the corresponding symmetric and antisymmetric superpositions within the i-th supercell. By regarding the symmetric (antisymmetric) superposition as a pseudo-spin state, we introduce the Pauli matrices for these basis vectors. We focus on the evolution of the system after the tilt is removed, and the corresponding Hamiltonian can then be expressed in this basis as in equation (B1), where J refers to the intra-cell hopping strength, and V, U are determined by the interaction strength. In this reduced Hamiltonian, we approximate the dipolar interaction by a nearest-neighbor interaction and neglect the inter-cell hopping, which is valid within the weak interaction regime considered in this work Yin (). Based on equation (B1), the dynamics can be solved analytically by perturbation theory, and it turns out to be enough to use first-order perturbation theory. In the perturbation treatment, we take the last two terms in equation (B1) as the perturbation. To zeroth order, the ground state is given by equation (2). The first-order correction of degenerate perturbation theory shows that a set of low-lying excited states bunches into a band on top of the ground state, and the eigenstates in this first excited band can be expressed as in equation (B2). It can be shown that for the collective dynamics considered here, it is enough to focus on the ground state and the first excited band. Without loss of generality, we can expand the wave function at the time when the tilt vanishes in terms of these states. Substituting this expansion and equation (B1) into equation (4), and further using equation (B3) for the averages, it is then straightforward to obtain the time-derivative equation (5). • (1) P. W. Anderson, Concepts in Solids: Lectures on the Theory of Solids. World Scientific Lecture Notes in Physics. World Scientific Pub Co Inc, (1998). • (2) S.H. Simon, The Oxford Solid State Basics. Oxford University Press, 1st ed., (2013). • (3) J.
Correlation between Diffusion Equation and Schrödinger Equation

Journal of Modern Physics, Vol. 4, No. 5 (2013), Article ID: 31602, 4 pages. DOI: 10.4236/jmp.2013.45088

Takahisa Okino, Department of Applied Mathematics, Faculty of Engineering, Oita University, Oita, Japan. Email: okino@oita-u.ac.jp

Received February 28, 2013; revised March 20, 2013; accepted April 27, 2013

Keywords: Diffusion Coefficient; Diffusion Equation; Schrödinger Equation

The well-known Schrödinger equation is reasonably derived from the well-known diffusion equation. In the present study, the imaginary time is incorporated into the diffusion equation for understanding the collision problem between two micro particles. It is revealed that the diffusivity corresponds to the angular momentum operator in quantum theory. The universal diffusivity expression, which is valid in an arbitrary material, will be useful for understanding diffusion problems.

1. Introduction

For micro particles such as atoms or molecules in the homogeneous time and space $(t, \vec x)$, the macro behavior of their collective motions is presented by the well-known diffusion equation

$$ \frac{\partial C}{\partial t} = D\,\nabla^2 C, \qquad (1) $$

where $C$ is their concentration and $D$ the diffusivity when it does not depend on $(t, \vec x)$ [1]. The motion of a micro particle is presented by quantum mechanics, and its behavior is investigated by using the Schrödinger equation

$$ i\hbar\,\frac{\partial \psi}{\partial t} = H\psi, \qquad (2) $$

where $\hbar = h/2\pi$ with the Planck constant $h$, $\psi$ is the state vector, and $H$ the Hamiltonian, meaning the total energy in the given physical system [2]. In the case of a free particle, it is given by

$$ H = \frac{p^2}{2m}, \qquad (3) $$

where $m$ is the particle mass and $p$ the momentum. In the present study, the correlation between (1) and (2) was investigated. It was found that the Schrödinger equation (2) is reasonably derived from the diffusion equation (1) by means of using the imaginary time in (1). As a result, we revealed that the diffusivity in (1) corresponds to the angular momentum operator in quantum mechanics. The obtained new diffusivity will be useful for understanding an elementary process of diffusion [3].

2. Necessity of Imaginary Time

The micro particle in a solid crystal jumps instantly to the nearest lattice site through an energy barrier when it obtains an activation energy caused by the thermal fluctuation. The micro particle in a fluid collides with another one via the movement of the averaged free path, and the particle jumps to a neighbor site. For a Brownian particle of mass $m$, the well-known Langevin equation is

$$ m\,\frac{\mathrm d\vec v}{\mathrm dt} = -f\vec v + \vec F(t), \qquad (4) $$

where $\vec v = \mathrm d\vec x/\mathrm dt$ is the velocity and $f$ the viscosity-resistance coefficient [4]. In (4), the time-averaged value of the external force satisfies $\langle \vec F(t)\rangle = 0$ in a collision problem. Hereafter we discuss not the random force but the acceleration in a collision problem between two micro particles. In three-dimensional space, the acceleration is expressed as

$$ \vec\alpha = \frac{\mathrm d\vec v}{\mathrm dt} = \frac{\mathrm d^2\vec x}{\mathrm dt^2}. \qquad (5) $$

Since the physical essence is still kept even if we consider the simplest collision problem of the one-dimensional case, we thus investigate a perfect elastic collision between a micro particle A and a particle B of the same kind. When the particle A moves at a velocity $v$ and collides at a time $t_c$ with the particle B in the standstill state, if we can clarify the distinction between A and B after the collision, the particle A decelerates from the velocity $v$ to zero and the particle B accelerates from zero to the velocity $v$.
On the other hand, if we cannot clarify the distinction between A and B after the collision, it seems that the particle A decelerates from the velocity $v$ to zero and subsequently accelerates again from zero to the velocity $v$. In other words, the particle motion seems as if there were no collision process. In that latter case, the acceleration near the collision time becomes contradictory: as can be seen from the expression (5), the impossibility of discrimination between the particles A and B can be accommodated only if the time near the collision is taken to be imaginary. In the present study, we thus accept the imaginary time as an essential characteristic of a micro particle, caused by the impossibility of discrimination between micro particles. In a collision problem, the acceleration is then meaningless, although the velocity remains finite in the limit approaching the collision time.

3. Diffusion Equation of Imaginary Time

Rewriting the concentration of diffusion particles into a quantity of state expressed by a complex function $\Psi$, (1) is presented as

$$ \frac{\partial \Psi}{\partial t} = D\,\nabla^2 \Psi. \qquad (6) $$

Assuming a separable form, (6) can be solved by the separation of variables; using complex constants determined from the initial and boundary conditions, the general solution of (6) is obtained as a superposition of separable modes (7). Substituting the imaginary time $t \to it$ into (7), and rewriting the resulting complex function as a complex-valued wave function $\psi$ (8), then substituting (8) back into (6), (1) is rewritten as

$$ i\,\frac{\partial \psi}{\partial t} = -D\,\nabla^2 \psi. \qquad (9) $$

4. Diffusion Coefficient of Micro Particle

The function $f(x,t)$ is defined as the probability density that a diffusion particle starting from the initial state exists in the state $(x,t)$ after $j$ jumps. A diffusion particle moves at random, and it is therefore considered that the jump frequency and jump displacement are equivalent in probability to their mean values, $1/\tau$ and $l$, of all diffusion particles in the collective system. Since it is also considered that the probability of a diffusion jump from the state $(x-l,\,t)$ to $(x,\,t+\tau)$ is equivalent to one from the state $(x+l,\,t)$ to $(x,\,t+\tau)$, the relation

$$ f(x,\,t+\tau) = \tfrac{1}{2}\bigl[f(x-l,\,t) + f(x+l,\,t)\bigr] \qquad (10) $$

is thus valid. The Taylor expansion of the left-hand side of (10) yields

$$ f(x,\,t+\tau) = f(x,t) + \tau\,\frac{\partial f}{\partial t} + \cdots. \qquad (11) $$

The Taylor expansion of the right-hand side of (10) also yields

$$ \tfrac{1}{2}\bigl[f(x-l,\,t) + f(x+l,\,t)\bigr] = f(x,t) + \frac{l^2}{2}\,\frac{\partial^2 f}{\partial x^2} + \cdots. \qquad (12) $$

The substitution of (11) and (12) into (10) gives

$$ \frac{\partial f}{\partial t} = \frac{l^2}{2\tau}\,\frac{\partial^2 f}{\partial x^2}. \qquad (13) $$

Since the probability density function $f$ of a diffusion particle corresponds to the normalized concentration $C$, the comparison of (1) with (13) gives the diffusion coefficient

$$ D = \frac{l^2}{2\tau} \qquad (14) $$

as a relation satisfying the well-known parabolic law [5].

5. Diffusion Coefficient and Angular Momentum

When a micro particle randomly jumps from a position to another one, the jump orientation becomes spherically symmetric in probability. Using the angular momentum $\vec L = \vec x \times \vec p$ defined by a position vector $\vec x$ and a momentum $\vec p$, the right-hand side of (14) is rewritten as

$$ D = \frac{l^2}{2\tau} = \frac{|\vec x \times \vec p\,|}{2m} = \frac{L}{2m}, \qquad (15) $$

where $|\vec x| = l$ and $|\vec p\,| = ml/\tau$ are valid in the spherically symmetric space. Considering the eigenvalue $\hbar$ of the angular momentum, the relation (14) is thus rewritten as the operator relation $D = \hbar/2m$. Substituting (15) into (9) gives

$$ i\,\frac{\partial \psi}{\partial t} = -\frac{\hbar}{2m}\,\nabla^2 \psi. \qquad (16) $$

Here, if we define the momentum operator

$$ \hat p = -i\hbar\nabla, \qquad (17) $$

(16) becomes the equation

$$ i\hbar\,\frac{\partial \psi}{\partial t} = \frac{\hat p^{\,2}}{2m}\,\psi. \qquad (18) $$

Further, the substitution of (3) into (18) yields the well-known Schrödinger equation (2). The defined equation (17) is one of the basic operators in quantum mechanics. Hereinbefore, the Schrödinger equation was reasonably derived from the diffusion equation. It was also found that the diffusivity corresponds to the angular momentum operator in quantum mechanics.
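A compact recap of the chain just described (a minimal sketch, under the assumption that the substitution $t \to it$ is purely formal, with $D$, $\psi$, $\hbar$ and $m$ as defined above): starting from the diffusion equation for a complex function, $\partial\psi/\partial t = D\,\nabla^2\psi$, the replacement of $t$ by $it$ gives

$$ \frac{\partial \psi}{\partial (it)} = D\,\nabla^2 \psi \quad\Longrightarrow\quad i\,\frac{\partial \psi}{\partial t} = -D\,\nabla^2 \psi, $$

and choosing the diffusivity $D = \hbar/2m$, as in (15), then multiplying both sides by $\hbar$, reproduces the free-particle Schrödinger equation

$$ i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\,\nabla^2 \psi = \frac{\hat p^{\,2}}{2m}\,\psi, \qquad \hat p = -i\hbar\nabla. $$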
The relation of (15) is concretely investigated in the following section.

6. Discussion and Conclusion

In mathematics, it was clarified that we can transform the diffusion equation for the collective motion of micro particles into the Schrödinger equation for a micro particle. In physics, the energy $E$, the momentum $\vec p$ and the angular momentum $\vec L$ are expressed as operators yielding

$$ E \to i\hbar\,\frac{\partial}{\partial t}, \qquad \vec p \to -i\hbar\nabla, \qquad \vec L \to -i\hbar\,\vec x \times \nabla. $$

We cannot observe imaginary physical quantities. Therefore, the eigenvalues of their operators are meaningful in quantum mechanics. As previously mentioned for the collision problem, the impossibility of identification between micro particles corresponds to introducing the imaginary time into those motions, and it also corresponds to yielding the meaningless acceleration. It is considered that the physical concept obtained here is generally valid for micro-particle motions. Thus, the concept of acceleration disappears in quantum mechanics. Except for constant physical quantities, physical variables containing an imaginary number $i$ should be accepted as physical operators in quantum mechanics. Here, note that the kinetic energy in the Hamiltonian is acceptable as an operator. On the other hand, the photon energy $E = h\nu$, expressed by using a frequency $\nu$, is acceptable as an operator, although it is also an energy representation.

The existence probability of a micro particle in a collective system of heat quantity $Q$ and absolute temperature $T$ is given by the well-known Boltzmann factor

$$ \exp\!\left(-\frac{Q}{k_\mathrm{B} T}\right), \qquad (19) $$

where $k_\mathrm{B}$ is the Boltzmann constant [6]. There is an energy barrier for a diffusion particle in order to jump from a site to another site. Therefore, it is necessary for a diffusion particle to obtain the activation energy $Q$ from the thermal fluctuation. In a collective system composed of micro particles, the diffusion coefficient $D$ is thus directly proportional to the probability factor (19). The jump of a diffusion particle in a solid crystal depends on a factor $g$ derived from the atomic configuration and on the entropy $S$ derived from an elastic strain. In a solid crystal, therefore, (15) is rewritten as

$$ D = g\,\frac{\hbar N_\mathrm{A}}{2M}\,\exp\!\left(\frac{S}{k_\mathrm{B}}\right)\exp\!\left(-\frac{Q}{k_\mathrm{B} T}\right), \qquad (20) $$

where $N_\mathrm{A}$ and $M$ are the Avogadro constant and the molecular or atomic weight. Here, (20) was obtained as a new representation of the diffusion coefficient. If we consider $g\exp(S/k_\mathrm{B}) = 1$ in the given diffusion system of an arbitrary material, the universal diffusivity expression $D = D_0\exp(-Q/k_\mathrm{B}T)$ is thus obtained, where $D_0 = \hbar N_\mathrm{A}/2M$.

The correlation between the diffusion equation and the Schrödinger equation was clarified. We revealed that the diffusion coefficient $D$ in classical mechanics corresponds to the angular momentum in quantum mechanics. The physical constant $\hbar N_\mathrm{A}/2M$ in (20) is an essential quantity in diffusion problems.

1. A. Fick, Philosophical Magazine and Journal of Science, Vol. 10, 1855, pp. 31-39.
2. E. Schrödinger, Annalen der Physik, Vol. 79, 1926, pp. 361-376. doi:10.1002/andp.19263840404
3. T. Okino, Journal of Modern Physics, Vol. 3, 2012, pp. 1388-1393. doi:10.4236/jmp.2012.310175
4. P. Langevin, Comptes Rendus de l'Académie des Sciences (Paris), Vol. 146, 1908, pp. 530-533.
5. A. Einstein, Annalen der Physik, Vol. 18, 1905, pp. 549-560. doi:10.1002/andp.19053220806
6. L. Boltzmann, Wiener Berichte, Vol. 66, 1872, pp. 275-370.
Mathematical analysis

[Figure: A strange attractor arising from a differential equation. Differential equations are an important area of mathematical analysis with many applications to science and engineering.]

Mathematical analysis is a branch of mathematics that includes the theories of differentiation, integration, measure, limits, infinite series, and analytic functions.[1] These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space).

History

[Figure: Archimedes used the method of exhaustion to compute the area inside a circle by finding the area of regular polygons with more and more sides. This was an early but informal example of a limit, one of the most basic concepts in mathematical analysis.]

Mathematical analysis formally developed in the 17th century during the Scientific Revolution,[2] but many of its ideas can be traced back to earlier mathematicians. Early results in analysis were implicitly present in the early days of ancient Greek mathematics. For instance, an infinite geometric sum is implicit in Zeno's paradox of the dichotomy.[3] Later, Greek mathematicians such as Eudoxus and Archimedes made more explicit, but informal, use of the concepts of limits and convergence when they used the method of exhaustion to compute the area and volume of regions and solids.[4] The explicit use of infinitesimals appears in Archimedes' The Method of Mechanical Theorems, a work rediscovered in the 20th century.[5] In Asia, the Chinese mathematician Liu Hui used the method of exhaustion in the 3rd century AD to find the area of a circle.[6] Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere in the 5th century.[7] The Indian mathematician Bhāskara II gave examples of the derivative and used what is now known as Rolle's theorem in the 12th century.[8]

In the 14th century, Madhava of Sangamagrama developed infinite series expansions, like the power series and the Taylor series, of functions such as sine, cosine, tangent and arctangent.[9] Alongside his development of the Taylor series of the trigonometric functions, he also estimated the magnitude of the error terms created by truncating these series and gave a rational approximation of an infinite series. His followers at the Kerala school of astronomy and mathematics further expanded his works, up to the 16th century.
The modern foundations of mathematical analysis were established in 17th century Europe.[2] Newton and Leibniz independently developed infinitesimal calculus, which grew, with the stimulus of applied work that continued through the 18th century, into analysis topics such as the calculus of variations, ordinary and partial differential equations, Fourier analysis, and generating functions. During this period, calculus techniques were applied to approximate discrete problems by continuous ones.

In the 18th century, Euler introduced the notion of mathematical function.[10] Real analysis began to emerge as an independent subject when Bernard Bolzano introduced the modern definition of continuity in 1816,[11] but Bolzano's work did not become widely known until the 1870s. In 1821, Cauchy began to put calculus on a firm logical foundation by rejecting the principle of the generality of algebra widely used in earlier work, particularly by Euler. Instead, Cauchy formulated calculus in terms of geometric ideas and infinitesimals. Thus, his definition of continuity required an infinitesimal change in x to correspond to an infinitesimal change in y. He also introduced the concept of the Cauchy sequence, and started the formal theory of complex analysis. Poisson, Liouville, Fourier and others studied partial differential equations and harmonic analysis. The contributions of these mathematicians and others, such as Weierstrass, developed the (ε, δ)-definition of limit, thus founding the modern field of mathematical analysis.

In the middle of the 19th century Riemann introduced his theory of integration. The last third of the century saw the arithmetization of analysis by Weierstrass, who thought that geometric reasoning was inherently misleading, and introduced the "epsilon-delta" definition of limit. Then, mathematicians started worrying that they were assuming the existence of a continuum of real numbers without proof. Dedekind then constructed the real numbers by Dedekind cuts, in which irrational numbers are formally defined, which serve to fill the "gaps" between rational numbers, thereby creating a complete set: the continuum of real numbers, which had already been developed by Simon Stevin in terms of decimal expansions. Around that time, the attempts to refine the theorems of Riemann integration led to the study of the "size" of the set of discontinuities of real functions. Also, Cantor developed what is now called naive set theory, and Baire proved the Baire category theorem. In the early 20th century, calculus was formalized using an axiomatic set theory. Lebesgue solved the problem of measure, and Hilbert introduced Hilbert spaces to solve integral equations. The idea of normed vector space was in the air, and in the 1920s Banach created functional analysis.

Important concepts

Metric spaces

In mathematics, a metric space is a set where a notion of distance (called a metric) between elements of the set is defined. Much of analysis happens in some metric space; the most commonly used are the real line, the complex plane, Euclidean space, other vector spaces, and the integers. Examples of analysis without a metric include measure theory (which describes size rather than distance) and functional analysis (which studies topological vector spaces that need not have any sense of distance).
Formally, a metric space is an ordered pair (M,d) where M is a set and d is a metric on M, i.e., a function d \colon M \times M \rightarrow \mathbb{R} such that for any x, y, z \in M, the following holds:

1. d(x,y) = 0\, iff x = y\,     (identity of indiscernibles),
2. d(x,y) = d(y,x)\,     (symmetry), and
3. d(x,z) \le d(x,y) + d(y,z)     (triangle inequality).

By taking the third property and letting z=x, it can be shown that d(x,y) \ge 0     (non-negativity).

Sequences and limits

A sequence is an ordered list. Like a set, it contains members (also called elements, or terms). Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in the sequence. More precisely, a sequence can be defined as a function whose domain is a countable totally ordered set, such as the natural numbers.

One of the most important properties of a sequence is convergence. Informally, a sequence converges if it has a limit. Continuing informally, a (singly-infinite) sequence has a limit if it approaches some point x, called the limit, as n becomes very large. That is, for an abstract sequence (an) (with n running from 1 to infinity understood) the distance between an and x approaches 0 as n → ∞, denoted \lim_{n\to\infty} a_n = x.
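To make the two definitions above concrete, here is a small self-contained Python sketch (illustrative only; the helper names `euclidean` and `check_metric_axioms` are ours, not from the article) that checks the metric axioms for the Euclidean distance on a few sample points and exhibits a convergent sequence:

```python
import math
import itertools

def euclidean(x, y):
    """Euclidean metric d(x, y) on tuples of real numbers."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def check_metric_axioms(d, points, tol=1e-12):
    """Verify identity of indiscernibles, symmetry and the triangle
    inequality for the metric d on a finite sample of points."""
    for x, y, z in itertools.product(points, repeat=3):
        assert (d(x, y) < tol) == (x == y)           # d(x,y)=0 iff x=y
        assert abs(d(x, y) - d(y, x)) < tol          # symmetry
        assert d(x, z) <= d(x, y) + d(y, z) + tol    # triangle inequality

points = [(0.0, 0.0), (1.0, 0.0), (0.5, 2.0), (-1.0, 1.5)]
check_metric_axioms(euclidean, points)

# A convergent sequence: a_n = ((1/2)**n, 0) has limit x = (0, 0),
# i.e. d(a_n, x) -> 0 as n -> infinity.
limit = (0.0, 0.0)
distances = [euclidean((0.5 ** n, 0.0), limit) for n in range(1, 11)]
assert all(d2 < d1 for d1, d2 in zip(distances, distances[1:]))
print("metric axioms hold on the sample; d(a_n, x) =", distances[:5])
```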
Main branches

Real analysis

Real analysis (traditionally, the theory of functions of a real variable) is a branch of mathematical analysis dealing with the real numbers and real-valued functions of a real variable.[12][13] In particular, it deals with the analytic properties of real functions and sequences, including convergence and limits of sequences of real numbers, the calculus of the real numbers, and continuity, smoothness and related properties of real-valued functions.

Complex analysis

Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers.[14] It is useful in many branches of mathematics, including algebraic geometry, number theory, and applied mathematics, as well as in physics, including hydrodynamics, thermodynamics, mechanical engineering, electrical engineering, and particularly quantum field theory.

Complex analysis is particularly concerned with the analytic functions of complex variables (or, more generally, meromorphic functions). Because the separate real and imaginary parts of any analytic function must satisfy Laplace's equation, complex analysis is widely applicable to two-dimensional problems in physics.

Functional analysis

Functional analysis is a branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (e.g. inner product, norm, topology, etc.) and the linear operators acting upon these spaces and respecting these structures in a suitable sense.[15][16] The historical roots of functional analysis lie in the study of spaces of functions and the formulation of properties of transformations of functions such as the Fourier transform as transformations defining continuous, unitary etc. operators between function spaces. This point of view turned out to be particularly useful for the study of differential and integral equations.

Differential equations

A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and its derivatives of various orders.[17][18][19] Differential equations play a prominent role in engineering, physics, economics, biology, and other disciplines.

Differential equations arise in many areas of science and technology, specifically whenever a deterministic relation involving some continuously varying quantities (modeled by functions) and their rates of change in space and/or time (expressed as derivatives) is known or postulated. This is illustrated in classical mechanics, where the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow one (given the position, velocity, acceleration and various forces acting on the body) to express these variables dynamically as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation (called an equation of motion) may be solved explicitly.

Measure theory

A measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size.[20] In this sense, a measure is a generalization of the concepts of length, area, and volume. A particularly important example is the Lebesgue measure on a Euclidean space, which assigns the conventional length, area, and volume of Euclidean geometry to suitable subsets of the n-dimensional Euclidean space \mathbb{R}^n. For instance, the Lebesgue measure of the interval \left[0, 1\right] in the real numbers is its length in the everyday sense of the word – specifically, 1.

Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X. It must assign 0 to the empty set and be (countably) additive: the measure of a 'large' subset that can be decomposed into a finite (or countable) number of 'smaller' disjoint subsets is the sum of the measures of the "smaller" subsets. In general, if one wants to associate a consistent size to each subset of a given set while satisfying the other axioms of a measure, one only finds trivial examples like the counting measure. This problem was resolved by defining measure only on a sub-collection of all subsets: the so-called measurable subsets, which are required to form a \sigma-algebra. This means that countable unions, countable intersections and complements of measurable subsets are measurable. Non-measurable sets in a Euclidean space, on which the Lebesgue measure cannot be defined consistently, are necessarily complicated in the sense of being badly mixed up with their complement. Indeed, their existence is a non-trivial consequence of the axiom of choice.

Numerical analysis

Numerical analysis is the study of algorithms that use numerical approximation (as opposed to general symbolic manipulations) for the problems of mathematical analysis.[21] It naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century, the life sciences and even the arts have adopted elements of scientific computations. Ordinary differential equations appear in celestial mechanics (planets, stars and galaxies); numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.
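As a tiny illustration of what numerical analysis studies (our own toy example, not from the article), the forward Euler method applied to x'(t) = -x shows the characteristic first-order convergence: halving the step size roughly halves the error against the exact solution exp(-t).

```python
import math

def euler(f, x0, t_end, n_steps):
    """Integrate x'(t) = f(t, x) from t = 0 with forward Euler."""
    t, x = 0.0, x0
    h = t_end / n_steps
    for _ in range(n_steps):
        x += h * f(t, x)
        t += h
    return x

for n in (10, 20, 40, 80):
    err = abs(euler(lambda t, x: -x, 1.0, 1.0, n) - math.exp(-1.0))
    print(f"n = {n:3d}, error = {err:.2e}")  # error ~ halves as n doubles
```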
Applications

Physical sciences

The vast majority of classical mechanics, relativity, and quantum mechanics is based on applied analysis, and differential equations in particular. Examples of important differential equations include Newton's second law, the Schrödinger equation, and the Einstein field equations. Functional analysis is also a major factor in quantum mechanics.

Signal processing

When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate individual components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consists of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.[23]

Other areas of mathematics

Techniques from analysis are used in many other areas of mathematics as well.

Notes

1. Edwin Hewitt and Karl Stromberg, Real and Abstract Analysis, Springer-Verlag, 1965.
2. Jahnke, Hans Niels (2003). A History of Analysis. American Mathematical Society. p. 7.
3.
4. (Smith, 1958)
5. Pinto, J. Sousa (2004). Infinitesimal Methods of Mathematical Analysis. Horwood Publishing. p. 8.
6. Dun, Liu; Fan, Dainian; Cohen, Robert Sonné (1966). "A comparison of Archimedes' and Liu Hui's studies of circles". Chinese Studies in the History and Philosophy of Science and Technology 130. Springer. p. 279.
7. Zill, Dennis G.; Wright, Scott; Wright, Warren S. (2009). Calculus: Early Transcendentals (3rd ed.). Jones & Bartlett Learning. p. xxvii.
8. Seal, Sir Brajendranath (1915). The Positive Sciences of the Ancient Hindus. Longmans, Green and Co.
9. Rajagopal, C. T.; Rangachari, M. S. (June 1978). "On an untapped source of medieval Keralese mathematics". Archive for History of Exact Sciences 18 (2): 89–102.
10. Dunham, William (1999). Euler: The Master of Us All. The Mathematical Association of America. p. 17.
11.
12.
13. Abbott, Stephen (2001). Understanding Analysis. Undergraduate Texts in Mathematics. New York: Springer-Verlag.
14. Ahlfors, Lars V. Complex Analysis. McGraw-Hill.
15. Rudin, Walter. Functional Analysis. McGraw-Hill Science, 1991.
16. Conway, John B. A Course in Functional Analysis, 2nd edition. Springer-Verlag, 1994. ISBN 0-387-97245-5.
17. Ince, E. L. Ordinary Differential Equations. Dover Publications, 1958. ISBN 0-486-60349-0.
18. Hurewicz, Witold. Lectures on Ordinary Differential Equations. Dover Publications. ISBN 0-486-49510-8.
19.
20. Tao, Terence (2011). An Introduction to Measure Theory. American Mathematical Society.
21.
23. Rabiner, L. R.; Gold, B. (1975). Theory and Application of Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall.

References

• Aleksandrov, A. D., Kolmogorov, A. N., Lavrent'ev, M. A. (eds.). 1984. Mathematics: Its Content, Methods, and Meaning. 2nd ed. Translated by S. H. Gould, K. A. Hirsch and T. Bartha; translation edited by S. H. Gould. MIT Press; published in cooperation with the American Mathematical Society.
• Apostol, Tom M. 1974. Mathematical Analysis. 2nd ed. Addison–Wesley. ISBN 978-0-201-00288-1.
• Binmore, K. G. 1980–1981. The Foundations of Analysis: A Straightforward Introduction. 2 volumes. Cambridge University Press.
• Johnsonbaugh, Richard, & W. E. Pfaffenberger. 1981. Foundations of Mathematical Analysis. New York: M. Dekker.
• Nikol'skii, S. M.
2002. "Mathematical analysis". In Encyclopaedia of Mathematics, Michiel Hazewinkel (editor). Springer-Verlag. ISBN 1-4020-0609-8. • Rombaldi, Jean-Étienne. 2004. Éléments d'analyse réelle : CAPES et agrégation interne de mathématiques. EDP Sciences. ISBN 2-86883-681-X. • Rudin, Walter. 1976. Principles of Mathematical Analysis. McGraw–Hill Publishing Co.; 3rd revised edition (September 1, 1976), ISBN 978-0-07-085613-4. • Smith, David E. 1958. History of Mathematics. Dover Publications. ISBN 0-486-20430-8. • Whittaker, E. T. and Watson, G. N.. 1927. A Course of Modern Analysis. 4th edition. Cambridge University Press. ISBN 0-521-58807-3. • Real Analysis - Course Notes External links • Earliest Known Uses of Some of the Words of Mathematics: Calculus & Analysis • Basic Analysis: Introduction to Real Analysis by Jiri Lebl (Creative Commons BY-NC-SA)
March 20, 2006

Computational Chemistry advance: 100 times more accurate

James Sims of NIST and Stanley Hagstrom of IU announced a new high-precision calculation of the energy required to pull apart the two atoms in a hydrogen molecule (H2). Accurate to 1 part in 100 billion, these are the most accurate energy values ever obtained for a molecule of that size, 100 times better than the best previous calculated value or the best experimental value. This advance could be useful for creating better computer simulations for molecular nanotechnology. The algorithmic improvement, which yields faster and more accurate solutions, was adapted to use parallel processing (about 140 processors were used for the calculation over a weekend). More computer systems are being developed that will help take advantage of this kind of algorithmic advance: 1000 processors via FPGAs for $100,000 later this year and next, and Intel is promising hundreds of processor cores within ten years.

Background on supercomputer architectures:
1. Vector processors that can execute particular types of mathematical problems very quickly (traditional Cray-type machines).
2. Large numbers of regular processors, typically placed in a large number of networked computers (Big Blue-type supercomputers).
3. Field-programmable gate arrays (FPGAs), chips that can be reconfigured on the fly to run specific programs very quickly.
4. Multithreaded chips.

Details on the algorithmic advance: The calculation requires solving an approximation of the Schrödinger equation, one of the central equations of quantum mechanics. It can be approximated as the sum of an infinite number of terms, each additional term contributing a bit more to the accuracy of the result. For all but the simplest systems, or a relative handful of terms, the calculation rapidly becomes impossibly complex. Precise calculations had previously been done for systems of three components, but this one handles four. The calculations were carried out to 7,034 terms. Two earlier algorithms were merged, and the authors also developed improved computer code for a key computational bottleneck (the high-precision solution of the large-scale generalized matrix eigenvalue problem) using parallel processing. The final calculations were run on a 147-processor parallel cluster at NIST over the course of a weekend.
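The computational core mentioned above is a generalized matrix eigenvalue problem of the form H c = E S c. The snippet below is only a minimal sketch of that structure in ordinary double precision with SciPy; the NIST work used extended-precision arithmetic and a parallel solver, and the matrices here are random stand-ins rather than real Hamiltonian and overlap matrices.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 200  # number of basis terms (the real calculation used 7,034)

# Toy stand-ins: a symmetric "Hamiltonian" H and a symmetric
# positive-definite "overlap" matrix S of a non-orthogonal basis.
A = rng.standard_normal((n, n))
H = (A + A.T) / 2
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)  # the shift keeps S safely positive definite

# Solve the generalized eigenvalue problem H c = E S c.
energies, coeffs = eigh(H, S)

E0 = energies[0]  # lowest eigenvalue; the ground-state energy in a real run
residual = np.linalg.norm(H @ coeffs[:, 0] - E0 * (S @ coeffs[:, 0]))
print(f"lowest eigenvalue: {E0:.6f}, residual: {residual:.2e}")
```

In a real variational calculation of this kind, adding basis terms can only lower (tighten) the computed ground-state energy, which is why pushing the expansion to thousands of terms pays off in accuracy.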
Pseudo-spectral method

Pseudo-spectral methods,[1] also known as discrete variable representation (DVR) methods, are a class of numerical methods used in applied mathematics and scientific computing for the solution of partial differential equations. They are closely related to spectral methods, but complement the basis by an additional pseudo-spectral basis, which allows functions to be represented on a quadrature grid. This simplifies the evaluation of certain operators, and can considerably speed up the calculation when using fast algorithms such as the fast Fourier transform.

Motivation with a concrete example

Take the initial-value problem
$$ i \frac{\partial}{\partial t} \psi(x, t) = \Bigl[-\frac{\partial^2}{\partial x^2} + V(x) \Bigr] \psi(x,t), \qquad\qquad \psi(t_0) = \psi_0 $$
with periodic conditions \psi(x+2\pi, t) = \psi(x, t). This specific example is the Schrödinger equation for a particle in a potential V(x), but the structure is more general. In many practical partial differential equations, one has a term that involves derivatives (such as a kinetic-energy contribution), and a multiplication with a function (for example, a potential).

In the spectral method, the solution \psi is expanded in a suitable set of basis functions, for example plane waves,
$$ \psi(x,t) = \frac{1}{\sqrt{2\pi}} \sum_n c_n(t) e^{i n x} . $$
Insertion and equating identical coefficients yields a set of ordinary differential equations for the coefficients,
$$ i\frac{d}{dt} c_n(t) = n^2 c_n + \sum_k V_{nk} c_k, $$
where the elements V_{nk} are calculated through the explicit Fourier transform
$$ V_{nk} = \frac{1}{2\pi} \int_0^{2\pi} V(x) \ e^{i (k-n) x} dx . $$
The solution would then be obtained by truncating the expansion to N basis functions, and finding a solution for the c_n(t). In general, this is done by numerical methods, such as Runge–Kutta methods. For the numerical solutions, the right-hand side of the ordinary differential equation has to be evaluated repeatedly at different time steps.

At this point, the spectral method has a major problem with the potential term V(x). In the spectral representation, the multiplication with the function V(x) transforms into a matrix multiplication, which scales as N^2. Also, the matrix elements V_{nk} need to be evaluated explicitly before the differential equation for the coefficients can be solved, which requires an additional step.

In the pseudo-spectral method, this term is evaluated differently. Given the coefficients c_n(t), an inverse discrete Fourier transform yields the value of the function \psi at discrete grid points x_j = 2\pi j/N. At these grid points, the function is then multiplied, \psi'(x_i, t) = V(x_i) \psi(x_i, t), and the result Fourier-transformed back. This yields a new set of coefficients c'_n(t) that are used instead of the matrix product \sum_k V_{nk} c_k(t).

It can be shown that both methods have similar accuracy. However, the pseudo-spectral method allows the use of a fast Fourier transform, which scales as O(N\ln N), and is therefore significantly more efficient than the matrix multiplication. Also, the function V(x) can be used directly without evaluating any additional integrals.
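The following short sketch (ours, not from the article; the grid size, the potential V(x) = 1 - cos x, and the Gaussian-like initial state are arbitrary illustrative choices) checks numerically that the FFT route and the matrix route produce identical coefficients, and assembles the right-hand side of the coefficient equation:

```python
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N        # periodic grid on [0, 2*pi)
k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers n
V = 1.0 - np.cos(x)                     # arbitrary smooth potential (our choice)
psi = np.exp(-2.0 * (x - np.pi) ** 2)   # arbitrary smooth periodic state
c = np.fft.fft(psi)                     # spectral coefficients c_n

# Pseudo-spectral route for the potential term: back to the grid,
# pointwise multiplication, forward transform again. Costs O(N log N).
c_pot_pseudo = np.fft.fft(V * np.fft.ifft(c))

# Spectral route for comparison: the dense matrix
#   V_nm = (1/N) * sum_j V(x_j) * exp(-1j * (n - m) * x_j)
# applied to the coefficient vector. Costs O(N^2) per application.
diff = k[:, None] - k[None, :]
Vmat = (np.exp(-1j * diff[..., None] * x) * V).sum(axis=-1) / N
c_pot_spectral = Vmat @ c

print(np.allclose(c_pot_pseudo, c_pot_spectral))   # True, up to round-off

# The derivative term is diagonal in Fourier space, so the full
# right-hand side of  i dpsi/dt = -psi'' + V(x) psi  follows as
dpsi_dt = -1j * (np.fft.ifft(k ** 2 * c) + V * psi)
```

Both routes agree to round-off because the discrete quadrature underlying the FFT is exact here; the pseudo-spectral route simply avoids ever forming the N-by-N matrix.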
Technical discussion

In a more abstract way, the pseudo-spectral method deals with the multiplication of two functions V(x) and f(x) as part of a partial differential equation. To simplify the notation, the time-dependence is dropped. Conceptually, it consists of three steps:

1. f(x) and \tilde{f}(x) = V(x)f(x) are expanded in a finite set of basis functions (this is the spectral method).
2. For a given set of basis functions, a quadrature is sought that converts scalar products of these basis functions into a weighted sum over grid points.
3. The product is calculated by multiplying V and f at each grid point.

Expansion in a basis

The functions f, \tilde f can be expanded in a finite basis \{\phi_n\}_{n = 0,\ldots,N} as
$$ f(x) = \sum_{n=0}^N c_n \phi_n(x), \qquad \tilde f(x) = \sum_{n=0}^N \tilde c_n \phi_n(x). $$
For simplicity, let the basis be orthogonal and normalized, \langle \phi_n, \phi_m \rangle = \delta_{nm}, using the inner product \langle f, g \rangle = \int_a^b f(x) \overline{g(x)} dx with appropriate boundaries a,b. The coefficients are then obtained by
$$ c_n = \langle f, \phi_n \rangle, \qquad \tilde c_n = \langle \tilde f, \phi_n \rangle. $$
A bit of calculus then yields
$$ \tilde c_n = \sum_{m=0}^N V_{nm} c_m $$
with V_{nm} = \langle V\phi_m, \phi_n \rangle. This forms the basis of the spectral method. To distinguish the basis of the \phi_n from the quadrature basis, the expansion is sometimes called the Finite Basis Representation (FBR).

For a given basis \{\phi_n\} and number of N+1 basis functions, one can try to find a quadrature, i.e., a set of N+1 points and weights such that
$$ \langle \phi_n, \phi_m \rangle = \sum_{i=0}^N w_i \phi_n(x_i) \overline{\phi_m(x_i)}, \qquad\qquad n,m = 0,\ldots,N. $$
Special examples are the Gaussian quadrature for polynomials and the discrete Fourier transform for plane waves. It should be stressed that the grid points and weights, x_i, w_i, are a function of the basis and the number N.

The quadrature allows an alternative numerical representation of the functions f(x), \tilde f(x) through their values at the grid points. This representation is sometimes denoted the Discrete Variable Representation (DVR), and is completely equivalent to the expansion in the basis:
$$ f(x_i) = \sum_{n=0}^N c_n \phi_n(x_i), \qquad c_n = \langle f, \phi_n \rangle = \sum_{i=0}^{N} w_i f(x_i) \overline{\phi_n(x_i)}. $$

The multiplication with the function V(x) is then done at each grid point, \tilde f(x_i) = V(x_i) f(x_i). This generally introduces an additional approximation. To see this, we can calculate one of the coefficients \tilde c_n:
$$ \tilde c_n = \langle \tilde f, \phi_n \rangle = \sum_i w_i \tilde f(x_i) \overline{\phi_n(x_i)} = \sum_i w_i V(x_i) f(x_i) \overline{\phi_n(x_i)}. $$
However, using the spectral method, the same coefficient would be \tilde c_n = \langle Vf, \phi_n \rangle. The pseudo-spectral method thus introduces the additional approximation
$$ \langle Vf, \phi_n \rangle \approx \sum_i w_i V(x_i) f(x_i) \overline{\phi_n(x_i)}. $$
If the product Vf can be represented with the given finite set of basis functions, the above equation is exact due to the chosen quadrature.

Special pseudospectral schemes

The Fourier method

If periodic boundary conditions with period [0,L] are imposed on the system, the basis functions can be generated by plane waves,
$$ \phi_n(x) = \frac{1}{\sqrt{L}} e^{-\imath k_n x} $$
with k_n = (-1)^n \lceil n/2 \rceil 2\pi/L, where \lceil\cdot\rceil is the ceiling function.

The quadrature for a cut-off at n_{\text{max}} = N is given by the discrete Fourier transformation. The grid points are equally spaced, x_i = i \Delta x, with spacing \Delta x = L / (N+1), and the constant weights are w_i = \Delta x.

For the discussion of the error, note that the product of two plane waves is again a plane wave, \phi_a \phi_b \propto \phi_c with c \leq a+b.
Thus, qualitatively, if the functions f(x) and V(x) can be represented sufficiently accurately with N_f and N_V basis functions, respectively, the pseudo-spectral method gives accurate results if N_f + N_V basis functions are used.

An expansion in plane waves often converges slowly and needs many basis functions. However, the transformation between the basis expansion and the grid representation can be done using a fast Fourier transform, which scales favorably as N \ln N. As a consequence, plane waves are one of the most common expansions encountered with pseudo-spectral methods.

Another common expansion is into classical polynomials. Here, the Gaussian quadrature is used, which states that one can always find weights w_i and points x_i such that
$$ \int_a^b w(x) p(x) dx = \sum_{i=0}^N w_i p(x_i) $$
holds for any polynomial p(x) of degree 2N+1 or less. Typically, the weight function w(x) and ranges a,b are chosen for a specific problem, and lead to one of the different forms of the quadrature. To apply this to the pseudo-spectral method, we choose basis functions \phi_n(x) = \sqrt{w(x)} P_n(x), with P_n being a polynomial of degree n with the property
$$ \int_a^b w(x) P_n(x) P_m(x) dx = \delta_{mn}. $$
Under these conditions, the \phi_n form an orthonormal basis with respect to the scalar product \langle f, g \rangle = \int_a^b f(x) \overline{g(x)} dx. This basis, together with the quadrature points, can then be used for the pseudo-spectral method.

For the discussion of the error, note that if f is well represented by N_f basis functions and V is well represented by a polynomial of degree N_V, their product can be expanded in the first N_f+N_V basis functions, and the pseudo-spectral method will give accurate results for that many basis functions.

Such polynomials occur naturally in several standard problems. For example, the quantum harmonic oscillator is ideally expanded in Hermite polynomials, and Jacobi polynomials can be used to define the associated Legendre functions typically appearing in rotational problems.

References

1. Orszag, Steven A. (1972). "Comparison of Pseudospectral and Spectral Approximation". Studies in Applied Mathematics 51: 253–259.
In order to calculate the cross-section of an interaction process the following formula is often used for first approximations:
$$ \sigma = \frac {2\pi} {\hbar\,v_i} \left| M_{fi}\right|^2\varrho\left(E_f\right)\,V $$
$$ M_{fi} = \langle\psi_f|H_{int}|\psi_i\rangle $$
Very often plane waves are assumed for the final state and therefore the density of states is given by
$$ \varrho\left(E_f\right) = \frac{\mathrm d n\left(E_f\right)}{\mathrm d E_f} = \frac{4\pi {p_f}^2}{\left(2\pi\hbar\right)^3}\frac V {v_f} $$
I understand the derivation of this equation in the context of the non-relativistic Schrödinger equation. But why can I continue to use this formula in the relativistic limit, $v_i, v_f \to c\,,\quad p_f\approx E_f/c$? Very often books simply use this equation with matrix elements derived from some relativistic theory, e.g. coupling factors and propagators from the Dirac equation or the electroweak interaction. How is this justified? Specific concerns:
• Is Fermi's golden rule still valid in the relativistic limit?
• Doesn't the density of final states have to be adapted in the relativistic limit?

Accepted answer:

Fermi's golden rule still applies in the relativistic limit, and can be rewritten in a Lorentz invariant fashion. Starting with the transition probability
$$ W_{i\rightarrow f} = \frac{2\pi}{\hbar} |m_{if}|^2 \rho(E) \,,$$
to have $W$ Lorentz invariant we'd like both the matrix element $|m_{if}|^2$ and the density of final states $\rho(E)$ to be invariant. This can be done by shifting a few terms around. A little bit of handwaving to motivate it: the wave function $\psi$ (which is in the matrix element) has to be normalized by $\int |\psi|^2 dV = 1$, which gives us a density (of probability to encounter a particle) of $1/V$. Now, a boosted observer experiences length contraction of $1/\gamma$, which changes the density to $\gamma/V$. To obtain the correct probability again, we should re-normalize the wave function to $\psi' = \sqrt{\gamma}\,\psi$ by pulling the Lorentz factor out. So we introduce a new matrix element
$$|{\cal M}_{if}|^2 = |m_{if}|^2 \prod_{i=1}^n (2 \gamma_i m_i c^2)^2 = |m_{if}|^2 \prod_{i=1}^n (2E_i)^2 $$
(this is for an $n$-body process). Now the transition probability (here in differential form) becomes:
$$ dW = \frac{2\pi}{\hbar} \frac{|{\cal M}_{if}|^2}{ (2E_1)^2 (2E_2)^2 \cdots} \cdot \frac{1}{(2\pi\hbar)^{3n}} \, d^3p_1 \, d^3p_2 \, \cdots \delta({p_1}^\mu + {p_2}^\mu + \ldots - {p}^\mu ) $$
The delta function is there to ensure conservation of momentum and energy. Now we can regroup the terms:
$$ \Rightarrow \quad dW = \frac{2\pi}{\hbar} \frac{|{\cal M}_{if}|^2}{ 2E_1 2E_2 \cdot \ldots} \cdot d_\mathrm{LIPS} $$
The density of states/"phase space" $d\rho$ is replaced by a relativistic version, sometimes called the Lorentz invariant phase space $d_\mathrm{LIPS}$, which is given by
$$ d_\mathrm{LIPS} = \frac{1}{(2\pi\hbar)^{3n}} \prod_{i=1}^n \frac{d^3p_i}{ 2E_i } \,\delta\!\left(\sum_{i=1}^n {p_i}^\mu - {p}^\mu \right) \,. $$
The nice thing about the relativistic formula for $dW$ is that, in the case you are scattering particles off one another, it immediately shows us three important contributions: not only the matrix element and phase space, but also the flux factor $1/s$ (where $s = ({p_1}^\mu + {p_2}^\mu)^2$ is the Mandelstam variable, and in case the masses are negligible, $\sqrt{s} \approx 2E$).
This flux factor is responsible for the general $1/Q^2$ falling slope when you plot the cross section over the momentum transfer $Q = \sqrt{s}$, which comes entirely from relativistic kinematics. Hope this answers your questions. Here is a presentation (PDF) that sums it up, with an explicit proof that it is Lorentz invariant.
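As a concrete check of these ingredients (a standard textbook result, quoted here as a sketch rather than derived), for $2 \to 2$ scattering of effectively massless particles, carrying out the $d_\mathrm{LIPS}$ integrals in the centre-of-mass frame collapses the rate into the familiar differential cross section, in natural units,

$$ \frac{\mathrm d\sigma}{\mathrm d\Omega} = \frac{|\mathcal M|^2}{64\pi^2 s}\,, $$

where the overall $1/s$ is exactly the flux factor identified above.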
I read in several papers that for a harmonic oscillator Hamiltonian in the time-dependent Schrödinger equation a Gaussian wave packet remains Gaussian. Unfortunately I could not find any proof for this statement, and trying to verify it myself I did not succeed. If I make a general ansatz with a spherically symmetric Gaussian wave packet with time-dependent width and time- and space-dependent phase
$$ \psi(t,\vec x) = (\pi a(t)^2)^{-3/4} \exp\left(-\frac{x^2}{2 a(t)^2} + i \phi(t,\vec x)\right)$$
and insert it into the Schrödinger equation
$$ i \dot{\psi}(t,\vec x) = -\frac{1}{2m} \Delta\psi(t,\vec x) + \frac{k}{2} x^2 \psi(t,\vec x) $$
I get relatively complicated differential equations involving first time derivatives of $a$ and $\phi$ as well as first and second order spatial derivatives of $\phi$. I failed to solve those equations or even show that a solution exists. Is there an easy way to show this? Is there any reference where this is shown? What are the solutions for $a(t)$ and $\phi(t,\vec x)$ for a Gaussian (given initial conditions $a(0) = \sigma$ and $\phi(0,\vec x)=0$)?

Comments:
– Danu: Perhaps you could try to go to Fourier space, using the spatial Fourier decomposition?
– André: Wouldn't that give me exactly the same equations? Since the Fourier transform of a Gaussian is again a Gaussian, and in the Hamiltonian $x^2$ and $p^2$ appear symmetrically?
– Danu: I've seen a Fourier transform simplify some calculations involving Gaussians before, but it's up to you if you want to give it a shot.
– Qmechanic: More on Gaussian wave packets: physics.stackexchange.com/search?q=Gaussian+wave+packet

Answer:
2350310614295384
Making up the Mind

From a book review by Oliver Sacks, The New York Review of Books, April 8, 1993, pp. 42-49.

Hyperlinks and corresponding editing by Jochen Gruber. Local links have been added in case remote sites have rearranged referenced links.

Bright Air, Brilliant Fire: On the Matter of the Mind. Gerald M. Edelman. Basic Books, 280 pages, 1992.

Abstract by Jochen Gruber

With his Theory of Neuronal Group Selection (also called Neural Darwinism, in analogy to the Darwinism in the immune system), Gerald Edelman presents a neurobiological theory of the mind, which he and his colleagues at the Neurosciences Institute have been developing over the past 15 years. He envisions a comprehensive theory spanning a dozen disciplines of neuroscience. The outline of the theory is as follows:

After birth, a set of inborn values (feelings) (definition, more on values) allows us to begin building the structure of the brain. The smallest entity of this structure is a group of neurons (map) (definition, more on maps), in which internal links represent our experience. Maps are then used as new building blocks and interconnected with links into scenes (definition, more on scenes) representing what we experience as the present. Ever richer maps are constructed (more on that), ultimately maps of meaning. In our search for meaning, our mind develops up the evolutionary ladder to consciousness, until we form the new categories of "past" and "future". Along this way, the building blocks acquire, step by step, more internal structure that can be accessed. A continuous stream of establishing and testing hypotheses on the basis of the existing interconnections weakens or strengthens existing connections or builds new ones (Experiential Selection). The fittest maps and connections survive (thus the name Neural Darwinism).

These maps are dynamic in that they are continually redrawn (more on the definition) according to our perceptions (for more on that, read the paragraphs here). For example, disappointments or major new insights at a young age may call for major changes of the map structure, and might destroy a person's drive for survival if these changes appear too radical. Similarly, works of art or psychoanalysis might strengthen some and weaken other connections in and between our maps, and therewith start a re-interpretation of our perception of reality (Freud's Nachträglichkeit is an example).

At some point, the acquisition of a new kind of memory leads to a conceptual explosion. As a result, concepts of the self, the past, and the future (higher-order consciousness) can be connected to primary consciousness: self-consciousness, culture and "consciousness of consciousness" become possible.

Bright Air, Brilliant Fire is a book of astonishing variety and range, which runs from philosophy to biology to psychology to neural modeling, and attempts to synthesize them into a unified whole. It helps us understand, guide and direct our own mind (including our psyche), in that it presents a structure with which to evaluate our experience. Since one way to experience the world is through works of art, Edelman's Neural Darwinism gives us a method of discussing the effect of the arts on our mind.

Brief overview of the first steps of the brain's neural evolution.
[Figure: Schematic of the Ladder of Evolution to Consciousness, drawn by Jochen Gruber to help with reading Oliver Sacks's essay.]

Table of Contents
1. Model of Basic Processes
1.1 Darwinian Selection in the Immune System and Brain
1.2 Values
1.3 Developmental Selection
1.4 Experiential Selection
1.5 Summary and Experimental Confirmation of Neuro-Evolution in Psychology
1.6 Basic Building Blocks of the Theory: Maps (Categorizations) and Their Communication (Re-entrant Signaling)
1.7 Visualize the Brain as an Orchestra without Conductor Playing its Own Music
2. Memory: A Biological Model of the Development of Consciousness
2.1 Primary Consciousness and Scenes
2.2 Higher-Order Consciousness: Selfconsciousness and Culture
3. Clinical Evidence
4. DARWIN and NOMAD, the Computer Creatures

1. Model of Basic Processes

.... In his latest book, Bright Air, Brilliant Fire, the neuroscientist Gerald Edelman speaks of the fragmentation (of our views about the brain, Jochen Gruber) ... The picture of psychology was a mixed one: behaviorism, gestalt psychology, psychophysics, and memory studies in normal psychology; studies of the neuroses by Freudian analysis; clinical studies of brain lesions and motor and sensory defects ... and a growing knowledge both of neuroanatomy and the electrical behavior of nerve cells in physiology ... Only occasionally were serious efforts made ... to connect these disparate areas in a general way.

Gerald Edelman, Giulio Tononi, Consciousness: How Matter Becomes Imagination, Part III: Mechanisms of Consciousness: The Darwinian Perspective, Chapter 7: Selectionism, Degeneracy (pp. 86, 87), Penguin Books, 2000:

"All selectional systems share a remarkable property that is as unique as it is essential to their functioning: in such systems, there are typically many different ways, not necessarily structurally identical, by which a particular output occurs. We call this property degeneracy. Degeneracy is seen in quantum mechanics in certain solutions of the Schrödinger equation, and in the genetic code, where, because of the degenerate third position in triplet code words, many different DNA sequences can specify the same protein. Put briefly, degeneracy is reflected in the capacity of structurally different components to yield similar outputs or results. In a selectional nervous system, with its enormous repertoire of variant neural circuits even within one brain area, degeneracy is inevitable. Without it, a selectional system, no matter how rich its diversity, would rapidly fail:
• in a species, almost all mutations would be lethal;
• in an immune system, too few antibody variants would work; and
• in the brain, if only one network path was available, signal traffic would fail.

Degeneracy can operate at one level of organization or across many. It is seen:
• in gene networks (e.g. combinations of different genes can lead to the same structure),
• in the immune system (antibodies with different structures can recognize the same foreign molecule equally well),
• in the brain, and
• in evolution itself (different living forms can evolve to be equally well adapted to a specific environment).

There are countless examples of degeneracy in the brain. The complex meshwork of connections in the thalamocortical system assures that a large number of different neuronal groups can similarly affect, in one way or another, the output of a given subset of neurons.
For example, a large number of different brain circuits can lead to the same motor output or action. Localized brain lesions often reveal alternative pathways that are capable of generating similar behaviors. Therefore, a manifest consequence of degeneracy in the nervous system is that certain neurological lesions may often appear to have little effect, at least within a familiar environment. Degeneracy also appears at the cellular level. Neural signaling mechanisms utilize a great variety of transmitters, receptors, enzymes, and so-called second messengers. The same changes in gene expression can be brought about by different combinations of these biochemical elements.

Degeneracy is not just a useful feature of selectional systems; it is also an unavoidable consequence of selectional mechanisms. Evolutionary selective pressure is typically applied to individuals at the end of a long series of complex events. These events involve many interacting elements on multiple temporal and spatial scales, and it is unlikely that well-defined functions can be neatly assigned to independent subsets of elements or processes in biological networks. For example, if selection occurs for our ability to walk in a particular way, connections within and among many different brain structures and to the musculoskeletal apparatus are all likely to be modified over time. While locomotion will be affected, many other functions, including our ability to stand or jump, will also be influenced as a result of the degeneracy of neural circuits. The ability of natural selection to give rise to a large number of nonidentical structures yielding similar functions increases both the robustness of biological networks and their adaptability to unforeseen environments."

1.1 Darwinian Selection in the Immune System and Brain

Edelman's early work dealt not with the nervous system, but with the immune system, by which all vertebrates defend themselves against invading bacteria and viruses. It was previously accepted that the immune system "learned", or was "instructed", by means of a single type of antibody which molded itself around the foreign body, or antigen, to produce an appropriate, "tailored" antibody. These molds then multiplied, entered the bloodstream, and destroyed the alien organisms. But Edelman showed that a radically different, Darwinian selective mechanism was at work: that we possess not one basic kind of antibody, but millions of them, an enormous repertoire of antibodies, from which the invading antigen "selects" one that fits. It is such a selection, rather than a direct shaping or instruction, that leads to the multiplication of the appropriate antibody and the destruction of the invader. Such a mechanism, called "clonal selection", was suggested in 1959 by Macfarlane Burnet, but Edelman was the first to demonstrate that such a "Darwinian" mechanism actually occurs, and for this he shared a Nobel Prize in 1972.

Edelman then began to study the nervous system, to see ... Both the immune system and the nervous system can be seen as systems for recognition. ... The nervous system is roughly analogous, but far more demanding: it has ... How does an animal come to recognize and deal with the novel situation it confronts? How is such individual development possible?
The answer, Edelman proposes, is that an evolutionary process takes place - not one that selects organisms and takes millions of years, but one that occurs within each particular organism and during its lifetime, by competition among cell groups in the brain. This, for Edelman, is "somatic selection"....

1.2 Values

What is the world of a newborn infant animal like? Is it a sudden, incomprehensible (perhaps terrifying) explosion of electromagnetic radiations, sound waves, and chemical stimuli which make the infant cry and sneeze? Or an ordered, intelligible world, in which the infant discerns people, objects, meanings and smiles? We know that the world encountered is not one of complete meaninglessness and pandemonium, for the infant shows selective attention and preferences from the start, due to genetic instructions and biases. Clearly there are some innate biases or dispositions at work; otherwise the infant would have no tendencies whatever, would not be moved to do anything, to seek anything, to stay alive. These basic biases are among the first "values" we have, as Edelman calls them. Values - simple drives, instincts, intentionalities - serve as the tools we need for adaptation and survival. It needs to be stressed that "values" are experienced, internally, as feelings - without feeling there can be no animal life. "Thus", in the words of the late philosopher Hans Jonas, "the capacity for feeling, which arose in all organisms, is the mother-value of all."

1.3 Developmental Selection

Developmental selection takes place largely before birth. The genetic instructions in each organism provide general constraints for neural development, but they cannot specify the exact destination of each developing nerve cell - for these grow and die, migrate in great numbers and in entirely unpredictable ways: all of them are "gypsies", as Edelman likes to say. The vicissitudes of fetal development themselves produce in every brain unique patterns of neurons and neuronal groups. Even identical twins with identical genes will not have identical brains at birth: the fine details of cortical circuitry will be quite different. Such variability, Edelman points out, would be a catastrophe in virtually any mechanical or computational system, where exactness and reproducibility are of the essence. But in a system in which selection is central, the consequences are entirely different: here variation and diversity are themselves of the essence, are the basis on which Darwinism acts.

1.4 Experiential Selection

Now, already possessing a unique and individual pattern of neuronal groups through developmental selection, the creature is born, thrown into the world, there to be exposed to a new form of selection, selection based on experience. Since the infant instinctively values food, warmth, and contact with other people (for example), this will direct its first movements and strivings. These "values" serve to differentially weight experience, to orient the organism towards survival and adaptation, to allow what Edelman calls "categorization on value", e.g. to form simple, basic categories such as "edible" and "nonedible" as part of the process of getting food. A toy sketch of this selection loop follows below.
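To make the mechanism concrete, here is a minimal sketch of experiential selection. This is purely an illustration of the idea, not Edelman's actual model: the number of groups, the update rule, and the value table are all invented for the example. Connections that are active during an experience are strengthened or weakened according to an innate value signal:

    import random

    # Toy model of "experiential selection" (an illustrative invention, not
    # Edelman's simulation): connection strengths between neuronal groups are
    # nudged up or down by an innate "value" attached to each outcome.

    random.seed(0)
    N_GROUPS = 8  # hypothetical number of neuronal groups

    # unique initial connectivity, standing in for developmental selection
    weights = {(i, j): random.random()
               for i in range(N_GROUPS) for j in range(N_GROUPS) if i != j}

    def value_signal(outcome):
        # inborn biases: food and warmth are "good", pain is "bad"
        return {"food": 1.0, "warmth": 0.5, "pain": -1.0}.get(outcome, 0.0)

    def experience(active_connections, outcome, rate=0.1):
        # strengthen or weaken exactly the connections active in this experience
        v = value_signal(outcome)
        for c in active_connections:
            weights[c] = min(1.0, max(0.0, weights[c] + rate * v))

    experience([(0, 1), (1, 2)], "food")   # circuit that led to food is favored
    experience([(3, 4)], "pain")           # circuit that led to pain is weakened
    print(weights[(0, 1)], weights[(3, 4)])

Repeated over many experiences, value-weighted updates of this kind carve a "fit" population of circuits out of the initial random diversity - which is the sense in which the selection is Darwinian.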
1.5 Summary and Experimental Confirmation of Neuro-Evolution in Psychology

Thus, in summary, at an elementary physiological level, there are various sensory and motor "givens",
• from the reflexes that automatically occur (for example, in response to pain)
• to innate mechanisms in the brain, as, for example, the feature detectors in the visual cortex which, as soon as they are activated, detect verticals, horizontals, boundaries, angles, etc., in the visual world.
We have a certain amount of such basic equipment; but, in Edelman's view, very little else is programmed or built in. It is up to the infant animal, given its elementary physiological capacities, and given its inborn values,
• to create its own categories and
• to use them to make sense of, to construct, a world - and it is not just a world that the infant constructs, but its own world, a world constituted from the first by personal meaning and reference.

Such a neuro-evolutionary view is highly consistent with some of the conclusions of psychoanalysis and developmental psychology - in particular the psychoanalyst Daniel Stern's description of "an emergent self". "Infants seek sensory stimulation", writes Stern. "They have distinct biases or preferences with regard to the sensations they seek. ... These are innate. From birth on, there appears to be a central tendency to form and test hypotheses about what is occurring in the world ... (to) categorize ... into conforming and contrasting patterns, events, sets, and experiences." Stern emphasizes how crucial the active processes of connecting, correlating, and categorizing information are, and how with these a distinctive organization emerges, which is experienced by the infant as the sense of a self.

1.6 Basic Building Blocks of the Theory: Maps (Categorizations) and Their Communication (Re-entrant Signaling)

It is precisely such processes that Edelman is concerned with. He sees them as grounded in a process of selection acting on the primary neuronal units with which each of us is equipped. These units are not individual nerve cells or neurons, but groups ranging from about fifty to ten thousand neurons; there are perhaps a hundred million such groups in the entire brain. During the development of the fetus, a unique neuronal pattern of connections is created, and then, starting in infancy, experience acts upon this pattern, modifying it by selectively strengthening or weakening connections between neuronal groups, or creating entirely new connections.

Thus experience itself is not passive, a matter of "impressions" or "sense-data", but active, and constructed by the organism from the start. Active experience "selects", or carves out, a new, more complexly connected pattern of neuronal groups, a neuronal reflection of the child's individual experience, of the procedures by which it has come to categorize reality.

But these neuronal circuits are still at a low level - how do they connect with the inner life, the mind, the behavior of the creature? It is at this point that Edelman introduces the most radical of his concepts - the concept of the "map". A map, as he uses the term, is not a representation in the ordinary sense, but ... The creation of maps, Edelman postulates, involves the synchronization of hundreds of neuronal groups. Some mappings (called "categorizations") take place in discrete and anatomically fixed (or "prededicated") parts of the cerebral cortex - thus color is "constructed" in an area called V4. The visual system alone, for example, has over thirty different maps for representing color, movement, shape, etc.
But where perception of objects is concerned, the world, Edelman likes to say, is not "labeled"; it does not come "already parsed (divided) into objects". We must make the objects, in effect, through our own categorizations: "Perception makes", Emerson said. "Every perception", says Edelman, echoing Emerson, "is an act of creation." In other words: our sense organs, as we move about, take samplings of the world, creating maps in the brain. Then a sort of neuronal "survival of the fittest" occurs, a selective strengthening of those mappings which correspond to "successful" perceptions - successful in that they prove the most useful and powerful for the building of "reality".

In this view, there are no innate mechanisms for complex "personal" recognition, such as the "grandmother cell" postulated by researchers in the 1970s to correspond to one's perception of one's grandmother. Nor is there any "master area", or "final common path", whereby all perceptions relating (say) to one's grandmother converge in one single place. There is no such place in the brain where a final image is synthesized, nor any miniature person or homunculus to view this image. Rather, the perception of a grandmother or, say, of a chair depends on the synchronization of a number of scattered mappings throughout the visual cortex - mappings relating to many different perceptual aspects of the chair (its size, its shape, its color, its "leggedness", its relation to other sorts of chairs - armchairs, kneeling chairs, baby chairs, etc.). In this way the brain, the creature, achieves a rich and flexible percept of "chairhood", which allows the recognition of innumerable sorts of chairs as chairs (computers, by contrast, with their need for unambiguous definitions and criteria, are quite unable to achieve this). This perceptual generalization is dynamic, i.e. it can change with time.

Information coded into maps ... Such a correlation is possible because of the very rich connections between the brain's maps - connections which are reciprocal, and may contain millions of fibres. These extensive connections allow what Edelman calls "re-entrant signaling", which enables a coherent construct such as "chair" to be made. This construct arises from the interaction of many sources. Stimuli from, say, touching a chair may affect one set of maps, stimuli from seeing it may affect another set. Re-entrant signaling takes place between the two sets of maps - and between many other maps as well - as part of the process of perceiving a chair.

It must be emphasized once again: this construct of an object or a part of reality
• is not comparable to a single static image or representation -
• it is, rather, comparable to a giant and continually modulating equation.

The outputs of innumerable maps, connected by re-entry, not only complement one another at a perceptual level but are built up to higher and higher levels. For the brain, in Edelman's vision, makes maps of its own maps, or "categorizes its own categorizations", and does so by a process which can ascend indefinitely to yield ever more generalized pictures of the world.

This re-entrant signaling is more than a feedback process that corrects errors. (Such simple feedback loops are common both in the technological world, as thermostats, governors, cruise controls, etc., and in the nervous system, where they control all of the body's automatic functions, such as temperature, blood pressure and fine control of movement.)
At higher levels, where flexibility and individuality are all-important, and where new powers and new functions are needed and created, one requires "re-entrant signaling" to be a mechanism capable of constructing, not just controlling or correcting. The process of re-entrant signaling, with its multitude of reciprocal connections within and between maps, may be likened to a sort of neural United Nations, in which dozens of voices are talking together, while including in their conversation a variety of constantly inflowing reports from the outside world, giving them coherence, bringing them together into a larger picture as new information is correlated and new insights emerge. There is, to continue the metaphor, no secretary-general in the brain; the activity of re-entrant signaling itself achieves the synthesis.

How is this possible? Edelman, who himself once planned to be a concert violinist, uses musical metaphors here. "Think", he said in a recent BBC radio broadcast, "if you had a hundred thousand wires connecting four string quartet players and that, even though they weren't speaking words, signals were going back and forth in all kinds of hidden ways (as you usually get them by the subtle nonverbal interactions between the players) that make the whole set of sounds a unified ensemble. That's how the maps in the brain work by re-entry." The players are connected. Each player, interpreting the music individually, constantly modulates and is modulated by the others. There is no final or "master" interpretation - the music is collectively created. (Gerald Edelman on re-entrant signaling, lecture given in 2006.)

This, then, is Edelman's picture of the brain: an orchestra, an ensemble - but without a conductor, an orchestra which makes its own music.

Thus, two basic operations stand at the beginning of psychic development; they far precede - yet are a prerequisite for - all the rest: the beginning of an enormous upward path, which can achieve remarkable power even in relatively primitive animals like birds. An example: if pigeons are presented with photographs of trees, or oak leaves, or fish, surrounded by extraneous features, they rapidly learn to "home in" upon these, and to generalize, so that they can thereafter recognize any trees, or oak leaves, or fish straightaway, however distracting or confusing the context may be. It is clear from these experiments that perception selects, or rather creates, "defining" features (what counts as "defining" may be different for each pigeon) and cognitive categories, without the use of language, or being "told" what to do. Such category-creating behavior (which Edelman calls "noetic") is very different from the rigid, algorithmic procedures used by robots. (These experiments with pigeons are described in detail in Neural Darwinism, pp. 247-251.)

Perceptual categorization, whether of colors, movements, or shapes, is the first step, and it is crucial for learning, but it is not something fixed, something that occurs once and for all. On the contrary - and this is central to the dynamic picture presented by Edelman - there is then a continual re-categorization, and this itself constitutes memory. "In computers", Edelman writes, "memory depends on the specification and storage of bits of coded information." This is not the case in the nervous system. Memory in living organisms, by contrast, takes place through activity and continual re-categorization. "By its nature, memory ... involves continual motor activity ... in different contexts,
a given categorical response in memory may be achieved in several ways. Unlike computer-based memory, brain-based memory is inexact, but it is also capable of great degrees of generalization."

2. Memory: A Biological Model of the Development of Consciousness

In the extended Theory of Neuronal Group Selection, which he has developed since 1987, Edelman has been able, in a very economical way, to accommodate all the "higher" aspects of mind - concept formation, language, consciousness itself - without bringing in any additional considerations. Edelman's most ambitious project, indeed, is to try to delineate a possible biological basis for consciousness. He distinguishes, first, "primary" from "higher-order" consciousness.

2.1 Primary Consciousness and Scenes

The essential achievement of primary consciousness, as Edelman sees it, is to bring together into a scene the many categorizations involved in perception. The advantage of this is that "events that may have had significance to an animal's past learning can be related to new events." The relation established will not be a causal one, not necessarily related to anything in the outside world; it will be an individual (or "subjective") one, based on what has had "value" or "meaning" for the animal in the past. Edelman proposes that the ability to create scenes in the mind depends upon the emergence of a new neuronal circuit during evolution, a circuit allowing for continual re-entrant signaling between ... This "bootstrapping process" (as Edelman calls it) goes on in all the senses, thus allowing for the construction of a complex scene. The "scene", one must stress, is ...

Mammals, birds, and some reptiles, Edelman speculates, have such a scene-creating primary consciousness; and such consciousness is "efficacious": it helps the animal adapt to complex environments. Without such consciousness, life is lived at a much lower level, with far less ability to learn and adapt. Primary consciousness (Edelman concludes) is required for the evolution of higher-order consciousness. But it is limited - like our consciousness in a dream - to a small memorial interval around a time chunk I call the present. An animal with primary consciousness sees the room the way a beam of light illuminates it. Only that which is in the beam is explicitly in the remembered present; all else is darkness. This does not mean that an animal with primary consciousness cannot have long-term memory or act on it. Obviously it can, but it cannot, in general, be aware of that memory or plan an extended future for itself based on that memory. Again, we know this from our dreams.

2.2 Higher-Order Consciousness: Self-consciousness and Culture

Only in ourselves - and to some extent in apes - does a higher-order consciousness emerge. Higher-order consciousness arises from primary consciousness - it supplements it, it does not replace it. It is dependent on the evolutionary development of language, along with the evolution of symbols and of cultural exchange; and with all this comes an unprecedented power of detachment, generalization, and reflection, so that finally self-consciousness is achieved, the consciousness of being a self in the world, with human experience and imagination to call upon. Higher-order consciousness ... Works of art make use of our higher-order consciousness, by weakening or strengthening connections between scenes.
The most difficult and tantalizing portions of Bright Air, Brilliant Fire are about how this higher-order consciousness is achieved and how it emerges from primary consciousness. No other theorist I know of has even attempted a biological understanding of this step. To become conscious of being conscious, Edelman stresses, systems of memory must be related to a representation of a self. This is not possible unless the contents, the "scenes", of primary consciousness are subjected to a further process and are themselves re-categorized. Though language, in Edelman's view, is not crucial for the development of higher-order consciousness - there is some evidence of higher-order consciousness and self-consciousness in apes - it immensely facilitates and expands this by making possible previously unattainable conceptual and symbolic powers. Thus two steps, two re-entrant processes, are envisaged here: ...

The effects of this are momentous: "The acquisition of a new kind of memory," Edelman writes, "... leads to a conceptual explosion. As a result, concepts of the self, the past, and the future can be connected to primary consciousness. 'Consciousness of consciousness' becomes possible."

At this point Edelman makes explicit what is implicit throughout his work - the interaction of "neural Darwinism" with classical Darwinism. What occurs "explosively" in individual development must have been equally critical in evolutionary development. Thus "at some transcendent moment in evolution", Edelman writes, there emerged "a variant with a re-entrant circuit linking value-category memory to current perceptions". "At that moment", Edelman continues, "a memory became the substrate and servant of consciousness." And then, at another transcendent moment, by another, higher turn of re-entry, higher-order consciousness arose.

There is indeed much paleontological evidence that higher-order consciousness developed in an astonishingly short space of time - some tens (perhaps hundreds) of thousands of years, not the many millions usually needed for evolutionary change. The speed of this development has always been a most formidable challenge for evolutionary theorists - Darwin himself could offer no detailed account of it, and Wallace was driven back to thoughts of a grand design. But Edelman, drawing from his own observations of cell and tissue development detailed in his earlier book Topobiology, is able to suggest how it might have come about. The principles underlying brain development and the mechanisms outlined in the Theory of Neuronal Group Selection can, he argues, account for this rapid emergence, since they allow for enormous changes in brain size over the relatively short evolutionary period in which Homo sapiens emerged. According to topobiology, relatively large changes in the structure of the brain can occur through changes in the genes that regulate the brain's morphology - changes that can come about as the result of relatively few mutations. And the premises of the Theory of Neuronal Group Selection allow for the rapid incorporation into existing brain structures of new and enlarged neuronal maps with a variety of functions. This interweaving of concept and observation typifies the ambition and the grandeur of Edelman's thought.
His two chapters on consciousness are the most original, the most exhilarating, and the most difficult in the entire book - but they achieve, or aspire to achieve, what no other theorist has even tried to do: a biologically plausible model of how consciousness could have emerged.

3. Clinical Evidence

A sense of excitement runs through all of Edelman's books. "We are at the beginning of the neuroscientific revolution", he writes in the preface to Bright Air, Brilliant Fire. "At its end, we shall know how the mind works, what governs our nature, and how we know the world." This century, as he observes, has been rich in theories - going all the way from psychophysics to psychoanalysis - but all these have been partial.

New theories arise from a crisis in scientific understanding, when there is an acute incompatibility between observations and existing theories. There are many such crises in neuroscience today. Edelman, with his background in morphology and development, speaks of the "structural" crisis: the now well-established fact that there is no precise wiring in the brain, that there are vast numbers of unidentifiable inputs to each cell, and that such a jungle of connections is incompatible with any simple computational theory. He is moved, as William James was, by the apparently seamless quality of experience and consciousness - the unitary appearance of the world to a perceiver despite (as we have seen in regard to vision) the multitude of discrete and parallel systems for perceiving it; and by the fact that some integrating or unifying or "binding" must occur, which is totally inexplicable by any existing theory.

Since the Theory of Neuronal Group Selection was first formulated, important new evidence has emerged suggesting how widely separated groups of neurons in the visual cortex can become synchronized and respond in unison when an animal is faced with a new perceptual task - a finding directly suggestive of re-entrant signaling. (I discussed this work in an earlier article, "Neurology and the Soul", The New York Review of Books, November 20, 1990.)

There is also much evidence of a more clinical sort, which one feels may be illuminated, and perhaps explained, by the Theory of Neuronal Group Selection. I often encounter situations in day-to-day neurological practice which completely defeat classical neurological explanations, which cry out for explanations of a radically different kind, and which are clarified by Edelman's theory. (Some of these situations are discussed by Israel Rosenfield in his new book, The Strange, Familiar and Forgotten, where he speaks of "the bankruptcy of classical neurology".)

Thus if a spinal anesthetic is given to a patient - as used to be done frequently to women in childbirth - there is not just a feeling of numbness below the waist. There is, rather, the sense that one terminates at the umbilicus, that one's corporeal self has no extension below this, and that what lies below is not-self, not-flesh, not-real, not-anything. The anesthetized lower half has a bewildering nonentity; it completely lacks meaning and personal reference. The baffled mind is unable to categorize it, to relate it in any way to the self. One knows that sooner or later the anesthetic will wear off, yet it is impossible to imagine the missing parts in a positive way. There is an absolute gap in primary consciousness which higher-order consciousness can report, but cannot correct.
This indeed is a situation I know well from personal no less than clinical experience, for it is what I experienced in myself after a nerve injury to one leg, when for a period of two weeks, while the leg lay immobile and senseless, I found it "alien", not me, not real. I was astonished when this happened, and unassisted by my neurological knowledge - the situation was clearly neurological, but classical neurology has nothing to say about the relation of sensation to knowledge and to "self"; about how, normally, the body is "owned"; and how, if the flow of neural information is impaired, it may be lost to consciousness, and "disowned" - for it does not see consciousness as a process ...

Such disturbances of body-image and body-ego can be fully understood, in Edelman's thinking, as breakdowns in local mapping, consequent upon nerve damage or disuse. It has been confirmed, further, in animal experiments, that the mapping is not something fixed, but plastic and dynamic, and dependent upon a continual inflow of experience and use; and that if there is continuing interference with, say, one's perception of a limb or its use, there is not only a rapid loss of its cerebral map, but a rapid remapping of the rest of the body which then excludes the limb itself.

Stranger still are the situations which arise when the cerebral basis of body-image is affected, especially if the right hemisphere of the brain is badly damaged in its sensory areas. At such times patients may show an "anosognosia", an unawareness that anything is the matter, even though the left side of the body may be senseless, and perhaps paralyzed, too. Or they may show a strange levity, insisting that their own left sides belong to "someone else". Such patients may behave (as an eminent neurologist, M.M. Mesulam, has written) "... as if one half of the universe had abruptly ceased to exist ... as if nothing were actually happening [there] ... as if nothing of importance could be expected to occur there." Such patients live in a hemispace, a bisected world, but for them, subjectively, their space and world is entire. Anosognosia is unintelligible (and was for years misinterpreted as a bizarre neurotic symptom) unless we see it (in Edelman's term) as a "disease of consciousness", a total breakdown of high-level re-entrant signaling and mapping in one hemisphere - the right hemisphere, which, Edelman suggests, may have only primary but no higher-order consciousness - and a radical reorganization of consciousness in consequence.

Less dramatic than these complete disappearances of self or parts of the self from consciousness, but still remarkable in the extreme, are situations in which, following a neurological lesion, a dissociation occurs between perception and consciousness, or memory and consciousness - cases in which there remains only "implicit" perception or knowledge or memory. Thus my amnesiac patient Jimmie ("The Lost Mariner") had no explicit memory of Kennedy's assassination, and would indeed say, "No president in this century has been assassinated, that I know of." But if asked, "Hypothetically, then, if a presidential assassination had somehow occurred without your knowledge, where might you guess it occurred: New York, Chicago, Dallas, New Orleans, or San Francisco?" he would invariably "guess" correctly: Dallas. Similarly, patients with visual agnosias, like Dr. P. ("The Man Who Mistook His Wife for a Hat"), while not consciously able to recognize anyone, often "guess" the identity of people's faces correctly.
And patients with cortical blindness, from massive bilateral damage to the primary visual areas of the brain, while asserting that they can see nothing, may also mysteriously "guess" correctly what lies before them - so-called "blindsight". In all these cases, then, we find that perception, and perceptual categorization of the kind described by Edelman, has been preserved, but has been divorced from consciousness. In such cases it appears to be only the final process, in which the re-entrant loops combine memory with current perceptual categorization, that breaks down. Their understanding, so elusive hitherto, seems to come closer with Edelman's "re-entrant" model of consciousness.

Dissatisfaction with the classical theories is not confined to clinical neurologists; it is also to be found among theorists of child development, among cognitive and experimental psychologists, among linguists, and among psychoanalysts. All find themselves in need of new models. This was abundantly clear in May of 1992, at an exciting conference on "Selectionism and the Brain" held at the Neurosciences Institute in New York and attended by prominent workers in all of these fields.

Particularly suggestive was the work of Esther Thelen and her colleagues at Indiana University in Bloomington, who have for some years been making a minute analysis of the development of motor skills - watching, reaching for objects - in infants. "For the developmental theorist," Thelen writes, "individual differences pose an enormous challenge.... Developmental theory has not met this challenge with much success." And this is, in part, because individual differences are seen as extraneous, whereas Thelen argues that it is precisely such differences, the huge variation between individuals, that allow the evolution of unique motor patterns. Thelen found that the development of such skills, as Edelman's theory would suggest, follows no single programmed or prescribed pattern. Indeed there is great variability among infants at first, with many patterns of reaching for objects; but there then occurs, over the course of several months, a competition among these patterns, a discovery or selection of workable patterns, of workable motor solutions. These solutions, though roughly similar (for there are a limited number of ways in which an infant can reach), are always different and individual, adapted to the particular dynamics of each child, and they emerge by degrees, through exploration and trial. Each child, Thelen showed, explores a rich range of possible ways to reach for an object and selects its own path, without the benefit of any blueprint or program. The child is forced to be original, to create its own solutions. Such an adventurous course carries its own risks - the child may evolve a bad motor solution - but sooner or later such bad solutions tend to destabilize, break down, and make way for further exploration and better solutions.

Similar considerations arise with regard to recovery and rehabilitation after strokes and other injuries. There are no rules; there is no prescribed path to recovery; every patient must discover or create his own motor and perceptual patterns, his own solutions to the challenges that face him; and it is the function of a sensitive therapist to help him in this. This is well understood in the practice of "functional integration", pioneered by Moshe Feldenkrais, and used increasingly both in rehabilitation after injury and in the training of dancers and athletes.
"One cannot teach a person how to organize movement or how to perceive", writes Carl Ginsburg, a leading Feldenkrais teacher. "We need a system that organizes itself as it experiences. . . a system that has both stability and extraordinary plasticity to shift with changing circumstances. It is a system that is exceedingly difficult to model." Ginsberg feels that Theory of Neuronal Group Selection is closest to the model required ("The Roots of Functional Integration, Part III: The Shift in Thinking", The Feldenkrais Journal, No. 7 (Winter 1992), pp. 3447. When Thelen tries to envisage the neural basis of such learning, she uses terms very similar to Edelman's: she sees a "population of movements being selected or "pruned" by experience. She writes of infants "remapping" the neuronal groups that are correlated with their movements, and "selectively strengthening particular neuronal groups". She has, of course, no direct evidence for this, and such evidence cannot be obtained until we have a way of ~visualizing vast numbers of neuronal groups simultaneously in a conscious subject, and following their interactions for months on end. No such visualization is possible at the present time, but it will perhaps become possible by the end of the decade. Meanwhile, the close correspondence between Thelen's observations and the kind of behavior that would be expected from Edelman's theory is striking. If Esther Thelen is concerned with direct observation of the development of motor skills in the infant, Arnold Modell of Harvard, at the same conference, was concerned with psychoanalytical interpretations of early behavior; he too felt, like Thelen, that a crisis had developed, but that it might also be resolved by the Theory of Neuronal Group Selection - indeed, the title of his paper was "Neural Darwinism and a Conceptual Crisis in Psychoanalysis". The particular crisis he spoke of was connected with Freud's concept of Nachträglichkeit, the retranscription of memories which had become part of pathological fixations but were opened to consciousness, to new contexts and reconstructions, as a crucial part of the therapeutic process of liberating the patient from the past, and allowing him to experience and move freely once again. This process cannot be understood in terms of the classical concept of memory in which a fixed record or trace or representation is stored in the brain - an entirely static or mechanical concept - but requires a concept of memory as active and "inventive" (see Israel Rosenfeld, The Invention of Memory: A New View of the Brain (Basic Books, 1991). That memory is essentially constructive (as Coleridge insisted, nearly two centuries ago) was shown experimentally by the great Cambridge psychologist Frederic Bartlett. "Remembering," he wrote, is not the re-excitation of innumerable fixed, lifeless and fragmentary traces. It is an imaginative reconstruction, or construction, built out of the relation of our attitude toward a whole mass of organized past reactions or experience. It was just such an imaginative, context-dependent construction or reconstuction that Freud meant by Nachträglichkeit - but this, Modell emphasizes, could not be given any biological basis until Edelman's notion of memory as re-categorization. Beyond this, Modell as an analyst is concerned with the question of how the self is created, the enlargement of self through finding, or making, personal meanings. 
Such a form of inner growth, so different from "learning" in the usual sense, he feels, may also find its neural basis in the formation of ever-richer but always self-referential maps in the brain, and their incessant integration through re-entrant signaling, as Edelman has described it. Modell's ideas have been set out in full in Other Times, Other Realities (Harvard University Press, 1990), and in a forthcoming book, The Private Self (Harvard University Press, 1993).

Others too - cognitive psychologists and linguists - have become intensely interested in Edelman's ideas, in particular in the implication of the extended Theory of Neuronal Group Selection that the exploring child, the exploring organism, seeks (or imposes) meaning at all times, that its mappings are mappings of meaning, that its world and (if higher consciousness is present) its symbolic systems are constructed of "meanings". When Jerome Bruner and others launched the "cognitive revolution" in the mid-1950s, this was in part a reaction to behaviorism and other "isms" which denied the existence and structure of the mind. The cognitive revolution was designed "to replace the mind in nature", to see the seeking of meaning as central to the organism. In a recent book, Acts of Meaning (Harvard University Press, 1990), Bruner describes how this original impetus was subverted, and replaced by notions of computation, information processing, etc., and by the computational (and Chomskyan) notion that the syntax of a language could be separated from its semantics. But, as Edelman writes, it is increasingly clear, from studying the natural acquisition of language in the child, and, equally, from the persistent failure of computers to "understand" language, its rich ambiguity and polysemy, that syntax cannot be separated from semantics. It is precisely through the medium of "meanings" that natural language and natural intelligence are built up. From Boole, with his "Laws of Thought" in the 1850s, to the pioneers of Artificial Intelligence at the present day, there has been a persistent notion that one may have an intelligence or a language based on pure logic, without anything so messy as "meaning" being involved. That this is not the case, and cannot be the case, may now find a biological grounding in the Theory of Neuronal Group Selection.

4. DARWIN and NOMAD, the Computer Creatures

None of this, however, can yet be proved - we have no way of seeing neuronal groups or maps or their interactions, no way of listening in to the re-entrant orchestra of the brain. Our capacity to analyze the living brain is still far too crude. Partly for this reason researchers in neuroscience, Edelman among them, have felt it necessary to simulate the brain, and the power of computers and supercomputers makes this more and more possible. One can endow one's simulated neurons with physiologically realistic properties, and allow them to interact in physiologically realistic ways. Edelman and his colleagues at the Neurosciences Institute have been deeply interested in such "synthetic neural modeling", and have devised a series of "synthetic animals" or artifacts designed to test the Theory of Neuronal Group Selection. Although these "creatures" - which have been named DARWIN I, II, III, and IV - make use of supercomputers, their behavior (if one may use the word) is not programmed, not robotic, in the least, but (in Edelman's word) "noetic."
They incorporate both a selectional system and a primitive set of "values" - for example, that light is better than no light - which generally guide behavior but do not determine it or make it predictable. Unpredictable variations are introduced in both the artifact and its environment, so that it is forced to create its own categorizations. DARWIN IV, or NOMAD, with its electronic eye and snout, has no "goal", no "agenda", but resides in a sort of pen, a world of varied simple objects (with different colors, shapes, textures, weights).

[Illustration: NOMAD, an adaptive device constructed by Gerald M. Edelman and his colleagues at the Neurosciences Institute, in its environment. NOMAD is controlled by a computer-simulated "selectionist nervous system". It has a TV camera "eye" and a snout with electrical "taste" sensors. Synapses in its simulated brain change with experience, so that NOMAD learns to approach and taste blocks. After forming an association between taste and color, it avoids bad-tasting blocks (blue) but collects tasty ones (red).]

True to its name, it wanders around like a curious infant, exploring these objects, reaching for them, classifying them, building with them, in a spontaneous and idiosyncratic way (the movement of the artifact is exceedingly slow, and one needs time-lapse photography to bring home its creatural quality). No two "individuals" show identical behavior - and the details of their reachings and learnings cannot be predicted, any more than Thelen can predict the development of her infants. If their value circuits are cut, the artifacts show no learning, no "motivation", no convergent behavior at all, but wander around in an aimless way, like patients who have had their frontal lobes destroyed. Since the entire circuitry of these DARWINs is known, and can be seen functioning on the screen of a supercomputer, one can continuously monitor their inner workings, their internal mappings, their re-entrant signalings - one can see how they sample their environment, one can see how the first vague, tentative percepts emerge, and how, with hundreds of further samplings, they evolve and become recognizable, refined models of reality, following a process similar to that projected by Edelman's theory.

Normally one is not aware of the brain's almost automatic generation of "perceptual hypotheses" (in Richard Gregory's terms) and their refinement through a process of repeated samplings and testing. But under certain circumstances, as in recovery after acute nerve injury, one may become vividly aware of these normally unconscious (and sometimes exceedingly rapid) operations. I give a personal example of this in A Leg to Stand On.

Seeing the DARWINs, especially DARWIN IV, at work can induce a curious state of mind. Going to the zoo after my first sight of DARWIN IV, I found myself looking at birds, antelopes, lions, with a new eye: were they, so to speak, nature's DARWINs, somewhere up around DARWIN XII in complexity? And the gorillas, with higher-order consciousness but no language - where would they stand? DARWIN XIX? And we, writing about the gorillas, where would we stand? DARWIN XXVII, perhaps? A particularly intriguing, sometimes frightening part of Bright Air, Brilliant Fire is its penultimate chapter, "Is It Possible to Construct a Conscious Artefact?" Edelman has no doubt of the possibility, but places it, mercifully, well on in the next century.
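The caption's description of NOMAD's color-taste learning can be captured in a few lines of toy code. The following sketch is an illustrative reconstruction, not the Neurosciences Institute's actual simulation; the learning rate, the aversion threshold, and the value table are all invented for the example. Note how setting values_intact=False reproduces the "cut value circuits" result: the agent still wanders, but nothing is learned.

    import random

    # Toy NOMAD: an innate value system rates tastes; a learned color->preference
    # association is strengthened or weakened by that value signal.
    # (Illustrative reconstruction only, not the original DARWIN IV code.)

    random.seed(1)
    TASTE = {"red": +1.0, "blue": -1.0}      # hidden property of the world
    preference = {"red": 0.0, "blue": 0.0}   # learned association, initially flat

    def step(values_intact=True, rate=0.2):
        color = random.choice(["red", "blue"])    # wander to a random block
        if preference[color] > -0.5:              # approach unless learned aversion
            taste = TASTE[color]
            value = taste if values_intact else 0.0  # cut circuit -> no value signal
            preference[color] = min(1.0, max(-1.0, preference[color] + rate * value))

    for _ in range(50):
        step()
    print(preference)  # red drifts positive ("collect"), blue negative ("avoid")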
Such then is the sweep of Bright Air, Brilliant Fire, and its central ambition of "replacing the mind in nature". It is a book of astonishing variety and range, which runs from philosophy to biology to psychology to neural modeling, and attempts to synthesize them into a unified whole.

Neural Darwinism (or Neural Edelmanism, as Francis Crick has called it) coincides with our sense of "flow", that feeling we have when we are functioning optimally, of a swift, effortless, complex, ever-changing, but integrated and orchestrated stream of consciousness; it coincides with the sense that this consciousness is ours, and that all we experience and do and say is, implicitly, a form of self-expression, and that we are destined, whether we wish it or not, to a life of particularity and self-development; it coincides, finally, with our sense that life is a journey - unpredictable, full of risk and uncertainty, but, equally, full of novelty and adventure, and characterized (if not sabotaged by external constraints or pathology) by constant advance, an ever deeper exploration and understanding of the world. Edelman's theory proposes a way of grounding all this in known facts about the nervous system and testable hypotheses about its operations. Any theory, even a wrong theory, is better than no theory; and this theory - the first truly global theory of mind and consciousness, the first biological theory of individuality and autonomy - should at least stimulate a storm of experiment and discussion.

Merlin Donald, at the end of his fine and far-reaching recent book Origins of the Modern Mind (Harvard University Press, 1991), speaks of this in his conclusion:

Mental materialism is back, with a vengeance. It is not only back, but back in an unapologetic, out-of-the-closet, almost exhibitionistic form. This latest incarnation might be called "exuberant materialism." Changeux (1985), Churchland (1986), Edelman (1987), Young (1988), and many others have announced a new neuroscientific apocalypse. Optimism is basically more productive than pessimism, and exuberant materialists are certainly optimists. Neuroscience is in its adolescence, and the field is drunk with its own dizzying growth; how not to be optimistic?

There is no better place to read about this than in Edelman's own works, dense and difficult though they frequently are. Bright Air, Brilliant Fire is the most wide-ranging and accessible. It is strenuous and sometimes maddening, and one must struggle to understand it; but if one struggles, if one reads and reads again, the stubborn paragraphs finally yield their meaning, and a brilliant and captivating new vision of the mind emerges.

Oliver Sacks in September 2011 on Web of STORIES.

At the time he wrote this review, Oliver Sacks was Professor of Neurology at the Albert Einstein College of Medicine in New York. His books include Awakenings (in which he described some of his work at Beth Abraham Hospital, the Bronx, New York City - the hospital where he is now, in 1996), A Leg to Stand On, The Man Who Mistook His Wife for a Hat, and, published shortly before this review, Seeing Voices. Recently he finished The Island of the Colorblind.
God Is Not Dead
By Amit Goswami, Ph.D.

Apart from the simplistic and very diverse pictures of God that all religions give for popular appeal, at the esoteric core all religions agree that apart from material interactions, there is another agent of causation in the world; and this is what they call God. Religions also agree that apart from the material level of reality, which we experience outside of us, there are other, subtle levels of reality that we experience when we look inside. Religions also agree about a third very important aspect of divinity: we must try to manifest divine qualities - love, beauty, justice, truth, and goodness, for example - in our lives.

When, not so long ago, the philosopher Nietzsche declared "God is dead," he was lamenting that the popular religious renditions of God are so simplistic that they can no longer guide people to move toward Godliness. This is true. Yet to this day, many scientists beat a dead horse by trying to disprove the popular pictures of God. This is beating around the bush and not at all useful. The real questions, and these are all questions of science, are: (1) Is there causation in the world apart from material interactions? (2) Are there subtle non-material levels of reality? And (3) is there any scientific justification of ethics, which compels us to pursue Godliness in our lives?

Most scientists today squarely say "No" in answer to these questions, because the questions contradict their metaphysics of scientific materialism, according to which there is only matter and its interactions; nothing else is real. In my book, God Is Not Dead, I give answers also, and they are all in the affirmative. Yes, there is God. Because (1) there is an agent of causation apart from material interaction; (2) what we experience internally are subtle non-material worlds; and (3) not only should we pursue Godliness in our lives, our evolution is taking us toward better and better manifestations of Godliness. In my book I back up these assertions with both scientific theory and empirical evidence.

Believe it or not, one of the most well-known mathematical equations of science proves the existence of God, if examined within the new context that we have set. It is called the Schrödinger equation, named after its discoverer, and it is the fundamental equation of quantum physics. Physicists apply this equation to the study of many objects and many events; under these circumstances, the equation predicts (statistically) deterministic results, and so most physicists miss God in the equation. The right question to ask is: how does this equation apply to a single object in a single event, as it must? You see, the problem is that the Schrödinger equation depicts objects not as "determined things" of Newtonian vintage but as waves of possibility for consciousness to choose from. How do we know this? Because whenever we look at a quantum object, an electron for example, we don't see possibilities - an electron in different places all at once - but an electron in one actual place, an actuality. So we must be choosing where the electron actualizes!

Let's go deeper. If we (our consciousness) are able to convert possibility into actuality, our consciousness cannot be a brain product or any other material object, since all material objects obey quantum physics and must be possibilities only. So consciousness, as a nonmaterial agent of choice, is a causal agent! Have we discovered God? No, say the scientists, and they are right up to a point.
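For reference, the equation Goswami invokes above is the time-dependent Schrödinger equation; this display is an editorial addition, written in the standard notation consistent with the kets used earlier in this document:

i\hbar\,\frac{\partial}{\partial t}\left|\psi(t)\right\rangle = \hat{H}\left|\psi(t)\right\rangle

Here \left|\psi(t)\right\rangle is the state of the system and \hat{H} is its Hamiltonian (energy) operator. The imaginary unit i is what makes the solutions complex-valued in general; it is these complex-valued states that Goswami reads as "waves of possibility".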
The above raises the paradox of dualism if we think of the choosing consciousness, or God, as an agent separate from us, as popular religions do. To see this, ask the simple question: how does a nonmaterial God interact with the material world? It can't without a mediator. But a mediating signal requires energy. And the energy of the physical world is a constant; energy never passes from the material world to a God world, or vice versa. In the esoteric core, the masters of the various religions understood the situation perfectly. God is not separate from the material world, they declare at various places, times, and cultures. God is both transcendent and immanent. But what do they mean? Until recently, scientists and ordinary people alike have not been able to penetrate the wisdom of these words. So scientists ignore them, and ordinary people go on thinking about God as a dual agent of causation.

Proper understanding of quantum physics resolves the logjam. The quantum concept that is truly radical, and that is changing our world view, is called nonlocality - signal-less interaction. Matter consists of waves of possibility within consciousness, which is the ground of all being. Consciousness chooses one facet out of the multifaceted quantum possibility wave and converts possibility into the actuality of that chosen facet, but there is no dualism, because consciousness does the choosing nonlocally, without signal. It is choosing from itself.

Is it like Waiting for Godot: we have been looking for God and it is us? It is each of us who chooses his or her own reality. Alas! This, too, is too simplistic, which is why your wishful thinking about manifesting a BMW for yourself does not usually work. There is a paradox here. Suppose you and your friend are approaching, from perpendicular directions, a "quantum" traffic light with two possible facets, red and green. Being busy people, you both want green, but who gets to choose? If you both get to choose, obviously there would be pandemonium. Or perhaps you are like the Hollywood woman who meets a friend on Sunset Boulevard and takes her to a coffee house to "catch up." Over coffee, she starts talking, and after an hour says, "Oh my God, I have been talking about myself all this time. Let's now talk about you. What do you think of me?" To this woman, the only consciousness in the world is hers, and she is always the chooser. Such people are called solipsistic. But solipsism is obviously not the answer to our paradox. It has merely shifted the question "Who gets to choose?" to "Who gets to be the solipsistic head honcho of the situation?" No more than that. The paradox remains.

The authentic solution is this: the choosing nonlocal consciousness is not us in our ordinary ego, but a "transcendent" consciousness that is both us and beyond us, both transcendent and immanent. Makes sense, doesn't it? And more: this nonlocality of our choosing consciousness is an experimentally verifiable idea. In fact, this nonlocality has been verified by five different experiments, by five different groups, at five different laboratories, all showing the direct transfer (without signals) of electrical activity from one subject's brain to another when the subjects are correlated through meditative intention. This is reported in God Is Not Dead. So the scientific evidence for God and God's causal efficacy is already here. The evidence is definitive, because nonlocality can never be simulated by material interactions, which always occur via the intermediary of signals.
This is not the only evidence. God's choice is creative and manifests in our creative experience through discontinuous quantum leaps, akin to an electron's leap from one atomic orbit to another without going through the intervening space. Creative experiences are subjective, you say. Not when such leaps heal a person from a life-threatening disease, a phenomenon called quantum healing, for which plenty of evidence exists. Objective evidence for such creative quantum leaps also shows up in biological evolution and explains the puzzling phenomena of the fossil gaps (or missing links), which Darwinism cannot explain.

How about subtle bodies? If matter consists of waves of possibility for consciousness to choose from, and conscious choice leads to our experience of sensing, then it makes sense to posit that our internal experiences are also due to conscious choice from subtle domains of quantum possibilities. As the psychologist Carl Jung first codified, we have four kinds of experiences: sensing, feeling, thinking, and intuiting. In this way there must be four different compartments of conscious possibilities: the physical we sense, the vital energies we feel, the mental meaning we think, and the supramental archetypes - love, etc. - we intuit. The empirical evidence for subtle bodies abounds in health and healing, in dreams, in the phenomenon of biological morphogenesis, in survival after death and reincarnation, just to name a few.

Again, scientific evidence for God is already here, so what should we do about it? For one thing, we should take the religious masters seriously and pay attention to ethics. The values - love, beauty, justice, truth, and goodness - that ethics talks about are what we intuit. And plenty of evidence exists (for example, in the phenomena of dreams, creativity, and reincarnation) for the importance and validity of ethics, as discussed in God Is Not Dead.

And more. When we recognize that Darwin's theory of continuous evolution is incomplete and complement it with creative, discontinuous quantum leaps, we discover an astounding thing: biological evolution's direction from simple to complex organisms can be explained. We evolve from simplicity to complexity to be able to manifest our experiences of the subtle domains of possibilities better and better. In particular, right now we are evolving toward manifesting better and better Godly qualities. "Someday," said the Jesuit philosopher Teilhard de Chardin, "we shall harness ... the energies of love." Teilhard was right. That day is not very far away.

In his private life, Goswami is a practitioner of spirituality and transformation. His forthcoming book, The Everything Answer Book: How Quantum Science Explains Love, Death and the Meaning of Life, will be published by Hampton Roads Publishing Company in April 2017.
I am trying to understand how complex numbers made their way into QM. Can we have a theory of the same physics without complex numbers? If so, is the theory using complex numbers easier?

Complex numbers are fundamental and natural and QM can't work without them - or without a contrived machinery that imitates them. The commutator of two hermitian operators is anti-hermitian, so e.g. $[x,p]$, when it's a c-number, has to be imaginary. That's why either $x$ or $p$ or both have to be complex matrices - have complex matrix elements. Schrödinger's equation and/or the path integral needs an $i$, too, to produce $\exp(i\omega t)$ waves with the clear direction-sign etc. See motls.blogspot.cz/2010/08/… –  Luboš Motl Jul 20 '12 at 5:14

Quite on the contrary, Dushya. Mathematically, complex numbers are much more fundamental than any other number system, smaller or greater. That's also linked to a theorem that happens to be called the fundamental theorem of algebra, en.wikipedia.org/wiki/Fundamental_theorem_of_algebra - because it is fundamental - that says that $n$-th order polynomials have $n$ roots, but only if everything is in the complex realm. You say that complex numbers may be emulated by real numbers. But it's equally true - and more fundamental - that real numbers may be emulated by complex ones. –  Luboš Motl Jul 20 '12 at 9:43

There are no numbers in Nature at all... –  Kostya Jul 20 '12 at 16:43

@Dushya ... the reference "fundamental" here for math is the fact that $\mathbb{C}$ is a field extension of $\mathbb{R}$, and not the other way around. There is nothing more to be said about this. –  Chris Gerig Jul 20 '12 at 17:40

In principle you can also use $2\times2$ matrices of the form $\begin{pmatrix} x & y \\ -y & x \end{pmatrix}$. (This remark is in the spirit of Steve B's answer.) –  Fabian Aug 15 '12 at 17:37

10 Answers

Accepted answer (8 votes): The nature of complex numbers in QM turned up in a recent discussion, and I got called a stupid hack for questioning their relevance. Mainly for therapeutic reasons, I wrote up my take on the issue:

On the Role of Complex Numbers in Quantum Mechanics

It has been claimed that one of the defining characteristics that separate the quantum world from the classical one is the use of complex numbers. It's dogma, and there's some truth to it, but it's not the whole story: while complex numbers necessarily turn up as first-class citizens of the quantum world, I'll argue that our old friend the reals shouldn't be underestimated.

A bird's eye view of quantum mechanics

In the algebraic formulation, we have a set of observables of a quantum system that comes with the structure of a real vector space. The states of our system can be realized as normalized positive (thus necessarily real) linear functionals on that space.

In the wave-function formulation, the Schrödinger equation is manifestly complex and acts on complex-valued functions. However, it is written in terms of ordinary partial derivatives of real variables and separates into two coupled real equations - the continuity equation for the probability amplitude and a Hamilton-Jacobi-type equation for the phase angle.

The manifestly real model of 2-state quantum systems is well known.

Complex and Real Algebraic Formulation

Let's take a look at how we end up with complex numbers in the algebraic formulation: We complexify the space of observables and make it into a $C^*$-algebra.
We then go ahead and represent it by linear operators on a complex Hilbert space (GNS construction). Pure states end up as complex rays, mixed ones as density operators.

However, that's not the only way to do it: We can let the real space be real and endow it with the structure of a Lie-Jordan algebra. We then go ahead and represent it by linear operators on a real Hilbert space (Hilbert-Schmidt construction). Both pure and mixed states will end up as real rays. While the pure ones are necessarily unique, the mixed ones in general are not.

The Reason for Complexity

Even in manifestly real formulations, the complex structure is still there, but in disguise: there's a 2-out-of-3 property connecting the unitary group $U(n)$ with the orthogonal group $O(2n)$, the symplectic group $Sp(2n,\mathbb R)$ and the complex general linear group $GL(n,\mathbb C)$: if two of the last three are present and compatible, you'll get the third one for free.

An example of this is the Lie bracket and Jordan product: together with a compatibility condition, these are enough to reconstruct the associative product of the $C^*$-algebra.

Another instance of this is the Kähler structure of the projective complex Hilbert space taken as a real manifold, which is what you end up with when you remove the gauge freedom from your representation of pure states: it comes with a symplectic product which specifies the dynamics via Hamiltonian vector fields, and a Riemannian metric that gives you probabilities. Make them compatible and you'll get an implicitly-defined almost-complex structure.

Quantum mechanics is unitary, with the symplectic structure being responsible for the dynamics, the orthogonal structure being responsible for probabilities, and the complex structure connecting these two. It can be realized on both real and complex spaces in reasonably natural ways, but all structure is necessarily present, even if not manifestly so.

Is the preference for complex spaces just a historical accident? Not really. The complex formulation is a simplification, as structure gets pushed down into the scalars of our theory, and there's a certain elegance to unifying two real structures into a single complex one. On the other hand, one could argue that it doesn't make sense to mix structures responsible for distinct features of our theory (dynamics and probabilities), or that introducing un-observables to our algebra is a design smell, as preferably we should only use interior operations. While we'll probably keep doing quantum mechanics in terms of complex realizations, one should keep in mind that the theory can be made manifestly real. This fact shouldn't really surprise anyone who has taken the bird's eye view instead of just looking through the blinders of specific formalisms.

Nice one for a questioning stupid hack; i say if more questioning stupid hacks do this, it is possible the general understanding will be raised by quite an amount –  Nikos M. Jul 5 at 8:47

Could you elaborate on the statement about $U(n)$, $Sp(2n, \mathbb{R})$, $O(n)$, $GL(n, \mathbb{C})$?
–  qazwsx Aug 10 at 16:25

@Problemania: $U(n)=Sp(2n,\mathbb R)\cap O(2n)\cap GL(n,\mathbb C)$; however, the intersection of any 2 of the groups on the RHS is sufficient, and in particular $U(n)=Sp(2n,\mathbb R)\cap O(2n)$; complexity arises naturally when we deal with compatible symplectic and orthogonal structures; of course it's equally valid to say that symplectic structures arise naturally from compatible orthogonal and complex structures, or orthogonal ones from compatible symplectic and complex ones; but complex structures are arguably less well motivated from a physical (or perhaps 'philosophical') point of view –  Christoph Aug 10 at 22:08

The complex numbers in quantum mechanics are mostly a fake. They can be replaced everywhere by real numbers, but you need to have two wavefunctions to encode the real and imaginary parts. The reason is just because the eigenvalues of the time evolution operator $e^{iHt}$ are complex, so the real and imaginary parts are degenerate pairs which mix by rotation, and you can relabel them using $i$.

The reason you know $i$ is fake is that not every physical symmetry respects the complex structure. Time reversal changes the sign of "$i$". The operation of time reversal does this because it is reversing the sense in which the real and imaginary parts of the eigenvectors rotate into each other, but without reversing the sign of energy (since a time-reversed state has the same energy, not the negative of the energy).

This property means that the "$i$" you see in quantum mechanics can be thought of as shorthand for the matrix $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$, which is algebraically equivalent, and then you can use real and imaginary part wavefunctions. Then time reversal is simple to understand: it's an orthogonal transformation that takes $i$ to $-i$, so it doesn't commute with $i$.

The proper way to ask "why $i$" is to ask why the $i$ operator, considered as a matrix, commutes with all physical observables. In other words, why are states doubled in quantum mechanics in indistinguishable pairs? The reason we can use it as a c-number imaginary unit is because it has this property. By construction, $i$ commutes with $H$, but the question is why it must commute with everything else.

One way to understand this is to consider two finite-dimensional systems with isolated Hamiltonians $H_1$ and $H_2$, with an interaction Hamiltonian $f(t)H_i$. These must interact in such a way that if you freeze the interaction at any one time, so that $f(t)$ rises to a constant and stays there, the result is going to be a meaningful quantum system, with nonzero energy. If there is any point where $H_i(t)$ doesn't commute with the $i$ operator, there will be energy states which cannot rotate in time, because they have no partner of the same energy to rotate into. Such states must necessarily be of zero energy. The only zero-energy state is the vacuum, so this is not possible.

You conclude that any mixing through an interaction Hamiltonian between two quantum systems must respect the $i$ structure, so entangling two systems to do a measurement on one will equally entangle with the two states which together make the complex state.

It is possible to truncate quantum mechanics (at least for sure in a pure bosonic theory with a real Hamiltonian, that is, PT symmetric) so that the ground state (and only the ground state) has exactly zero energy and doesn't have a partner. For a bosonic system, the ground-state wavefunction is real and positive, and if it has energy zero, it will never need the imaginary partner to mix with.
Such a truncation happens naturally in the analytic continuation of SUSY QM systems with unbroken SUSY.

If you don't like complex numbers, you can use pairs of real numbers $(x,y)$. You can "add" two pairs by $(x,y)+(z,w) = (x+z,y+w)$, and you can "multiply" two pairs by $(x,y) \cdot (z,w) = (xz-yw, xw+yz)$. (If you don't think that multiplication should work that way, you can call this operation "shmultiplication" instead.)

Now you can do anything in quantum mechanics. Wavefunctions are represented by vectors where each entry is a pair of real numbers. (Or you can say that wavefunctions are represented by a pair of real vectors.) Operators are represented by matrices where each entry is a pair of real numbers, or alternatively operators are represented by a pair of real matrices. Shmultiplication is used in many formulas. Etc. etc.

I'm sure you see that these are exactly the same as complex numbers. (See Lubos's comment: "a contrived machinery that imitates complex numbers".) They are "complex numbers for people who have philosophical problems with complex numbers". But it would make more sense to get over those philosophical problems. :-) (A numerical sketch of this pairs-of-reals construction, and of the equivalent $2\times 2$ matrix representation mentioned in the comments, appears at the end of this thread.)

+1 on schmultiplication –  Emilio Pisanty Jul 20 '12 at 13:31

But doesn't that just change his question to "QM without shmultiplication"? –  Alfred Centauri Jul 20 '12 at 14:00

I do like complex numbers a lot. They are extremely useful and convenient, in connection to the fundamental theorem of algebra, for example, or when working with waves. I'm just trying to understand. –  Frank Jul 20 '12 at 15:30

Alfred - yes. That would be the point. I was wondering if there could be, I don't know, a matrix formulation of the same physics that would use another tool (matrices) than complex numbers. Again, I have no problem with complex numbers and I love them. –  Frank Jul 20 '12 at 15:53

Also note that you can model QM on a space of states on a sphere in $\mathbb{C}^n$ given by $|x|^2+|y|^2+\dots=1$. These spheres have dimension $2n$ over the reals. –  kηives Jul 20 '12 at 16:53

Let the old master Dirac speak: "One might think one could measure a complex dynamical variable by measuring separately its real and pure imaginary parts. But this would involve two measurements or two observations, which would be alright in classical mechanics, but would not do in quantum mechanics, where two observations in general interfere with one another - it is not in general permissible to consider that two observations can be made exactly simultaneously, and if they are made in quick succession the first will usually disturb the state of the system and introduce an indeterminacy that will affect the second." (P.A.M. Dirac, The Principles of Quantum Mechanics, §10, p. 35)

So if I interpret Dirac right, the use of complex numbers helps to distinguish between quantities that can be measured simultaneously and those which can't. You would lose that feature if you formulated QM purely with real numbers.

@asmaier: I looked at the quote in the book, and I tend to interpret it as follows: in a general case, it is not possible to measure a complex dynamical variable. So I don't quite understand how you reach your conclusion: "the use of complex numbers helps to distinguish between quantities that can be measured simultaneously and the ones which can't" –  akhmeteli Nov 3 '13 at 15:45

I'm not sure if that is a good example, but think about the wave function described by Schrödinger's equation.
One could split Schrödinger's equation into two coupled equations, one for the real and one for the imaginary part of the wave function. However, one cannot measure the phase and the amplitude of the wave function simultaneously, because both measurements interfere with each other. To make this manifest, one uses a single equation with a complex wave function and generates the observable real quantity by squaring the complex wave function. –  asmaier Nov 3 '13 at 16:23

@asmaier: I still don't quite see how this supports your conclusion that I quoted. By the way, as you mentioned the Schrödinger equation, you might wish to see my answer to the question. –  akhmeteli Nov 3 '13 at 18:47

Frank, I would suggest buying or borrowing a copy of Richard Feynman's QED: The Strange Theory of Light and Matter. Or, you can just go directly to the online New Zealand video version of the lectures that gave rise to the book.

In QED you will see how Feynman dispenses with complex numbers entirely, and instead describes the wave functions of photons (light particles) as nothing more than clock-like dials that rotate as they move through space. In a book-version footnote he mentions in passing "oh by the way, complex numbers are really good for representing the situation of dials that rotate as they move through space," but he intentionally avoids making the exact equivalence that is tacit or at least implied in many textbooks. Feynman is quite clear on one point: it's the rotation-of-phase as you move through space that is the more fundamental physical concept for describing quantum mechanics, not the complex numbers themselves.[1]

I should be quick to point out that Feynman was not disrespecting the remarkable usefulness of complex numbers for describing physical phenomena. Far from it! He was fascinated, for example, by the complex-plane equation known as Euler's identity, $e^{i\pi} = -1$ (or, equivalently, $e^{i\pi} + 1 = 0$), and considered it one of the most profound equations in all of mathematics.

It's just that Feynman in QED wanted to emphasize the remarkable conceptual simplicity of some of the most fundamental concepts of modern physics. In QED, for example, he goes on to use his little clock dials to show how in principle his entire method for predicting the behavior of electrodynamic fields and systems could be done using such moving dials. That's not practical of course, but that was never Feynman's point in the first place. His message in QED was more akin to this: hold on tight to simplicity when simplicity is available! Always build up the more complicated things from that simplicity, rather than replacing simplicity with complexity. That way, when you see something horribly and seemingly unsolvable, that little voice can kick in and say "I know that the simple principle I learned still has to be in this mess, somewhere! So all I have to do is find it, and all of this showy snowy blowy razzamatazz will disappear!"

[1] Ironically, since physical dials have a particularly simple form of circular symmetry in which all dial positions (phases) are absolutely identical in all properties, you could argue that such dials provide a more accurate way to represent quantum phase than complex numbers. That's because, as with the dials, a quantum phase in a real system seems to have absolutely nothing at all unique about it -- one "dial position" is as good as any other one, just as long as all of the phases maintain the same positions relative to each other.
In contrast, if you use a complex number to represent a quantum phase, there is a subtle structural asymmetry that shows up if you do certain operations such as squaring the number (phase). If you do that to a complex number, then for example the clock position represented by $1$ (call it 3pm) stays at $1$, while in contrast the clock position represented by $-1$ (9pm) turns into a $1$ (3pm). This is no big deal in a properly set up equation, but that curious small asymmetry is definitely not part of the physically detectable quantum phase. So in that sense, representing such a phase by using a complex number adds a small bit of mathematical "noise" that is not in the physical system.

Complex numbers "show up" in many areas such as, for example, AC analysis in electrical engineering and Fourier analysis of real functions. The complex exponential $e^{st},\ s = \sigma + i\omega$, shows up in differential equations, Laplace transforms, etc. Actually, it just shouldn't be all that surprising that complex numbers are used in QM; they're ubiquitous in other areas of physics and engineering. And yes, using complex numbers makes many problems far easier to solve and to understand. I particularly enjoyed this book (written by an EE), which gives many enlightening examples of using complex numbers to greatly simplify problems.

I guess I'm wondering if those complex numbers are "intrinsic" or just an arbitrary computing device that happens to be effective. –  Frank Jul 20 '12 at 2:56

@Frank: you could ask the same thing about the real numbers. Who ever measured anything to be precisely $\sqrt 2$ meters, anyhow? –  Niel de Beaudrap Jul 20 '12 at 4:29

What does it mean though that complex numbers "appear" in AC circuit analysis? The essence of AC is sinusoidal driving components. You could say the nature of these components comes from geometry factors, made electrical from a dot product in generators. Once we have sinusoidal variables interacting in an electrical circuit, we know the utility of complex numbers. That, in turn, comes from the equations. What does that all mean though? –  Alan Rominger Jul 20 '12 at 13:36

It means that if the sources in the circuit are all of the form $e^{st}$, the voltages and currents in the circuit will be of that form. This follows from the nature of the differential equations that represent the circuit. The fact that we choose to set $s = j\omega$ for AC analysis and then select only the real part of the solutions as a "reality" constraint doesn't change the mathematical fact that the differential equations describing the circuit have complex exponential solutions. –  Alfred Centauri Jul 20 '12 at 13:49

Alan - it probably means nothing. It happens to be a tool that so far works pretty well. –  Frank Jul 20 '12 at 15:55

Yes, we can have a theory of the same physics without complex numbers (without using pairs of real functions instead of complex functions), at least in some of the most important general quantum theories. For example, Schrödinger (Nature (London) 169, 538 (1952)) noted that one can make a scalar wavefunction real by a gauge transform.
Furthermore, surprisingly, the Dirac equation in an electromagnetic field is generally equivalent to a fourth-order partial differential equation for just one complex component, which component can also be made real by a gauge transform (http://akhmeteli.org/wp-content/uploads/2011/08/JMAPAQ528082303_1.pdf (an article published in the Journal of Mathematical Physics) or http://arxiv.org/abs/1008.4828 ).

I am not very well versed in the history, but I believe that people doing classical wave physics had long since noted the close correspondence between the many $\sin \theta$s and $\cos \theta$s flying around their equations and the behavior of $e^{i \theta}$. In fact, most wave-related calculations can be done with less hassle in the exponential form. Then in the early history of quantum mechanics we find things described in terms of de Broglie's matter waves. And it works, which is really the final word on the matter. Finally, all the math involving complex numbers can be decomposed into compound operations on real numbers, so you can obviously re-formulate the theory in those terms; there is no reason to think that you will gain anything in terms of ease or insight, though.

Can a complex infinite-dimensional Hilbert space be written as a real Hilbert space with complex structure? It seems plausible that it can be done, but could there be any problems due to infinite dimensionality? –  user10001 Jul 20 '12 at 2:45

The underlying field you choose, $\mathbb{C}$ or $\mathbb{R}$, for your vector space probably has nothing to do with its dimensionality. –  Frank Jul 20 '12 at 2:52

@dushya: There are no problems due to infinite dimensionality; the space is separable and can be approximated by finite-dimensional subspaces. –  Ron Maimon Jul 20 '12 at 18:37

Just to put complex numbers in context, A.A. Albert edited "Studies in Modern Algebra" - from the Mathematical Assn of America. C is one of the normed division algebras - of which there are only four: R, C, H and O. One can do a search for "composition algebras" - of which C is one.

Update: This answer has been superseded by my second one. I'll leave it as-is for now as it is more concrete in some places. If a moderator thinks it should be deleted, feel free to do so.

I do not know of any simple answer to your question - any simple answer I have encountered so far wasn't really convincing.

Take the Schrödinger equation, which does contain the imaginary unit explicitly. However, if you write the wave function in polar form, you'll arrive at a (mostly) equivalent system of two real equations: the continuity equation together with another one that looks remarkably like a Hamilton-Jacobi equation.

Then there's the argument that the commutator of two observables is anti-hermitian. However, the observables form a real Lie algebra with bracket $-i[\cdot,\cdot]$, which Dirac calls the quantum Poisson bracket. All expectation values are of course real, and any state $\psi$ can be characterized by the real-valued function $$ P_\psi(\cdot) = |\langle \psi,\cdot\rangle|^2 $$ For example, the qubit does have a real description, but I do not know if this can be generalized to other quantum systems.

I used to believe that we need complex Hilbert spaces to get a unique characterization of operators in your observable algebra by their expectation values. In particular, $$ \langle\psi,A\psi\rangle = \langle\psi,B\psi\rangle \;\;\forall\psi \;\Rightarrow\; A=B $$ only holds for complex vector spaces.
Of course, you then impose the additional restriction that expectation values should be real and thus end up with self-adjoint operators. For real vector spaces, the latter automatically holds. However, if you impose the former condition, you end up with self-adjoint operators as well; if your conditions are real expectation values and a unique representation of observables, there's no need to prefer complex over real spaces.

The most convincing argument I've heard so far is that linear superposition of quantum states doesn't only depend on the quotient of the absolute values of the coefficients $|\alpha|/|\beta|$, but also on their phase difference $\arg(\alpha) - \arg(\beta)$.

Update: There's another geometric argument which I came across recently and find reasonably convincing: the description of quantum states as vectors in a Hilbert space is redundant - we need to go to the projective space to get rid of this gauge freedom. The real and imaginary parts of the hermitian product induce a metric and a symplectic structure on the projective space - in fact, projective complex Hilbert spaces are Kähler manifolds. While the metric structure is responsible for probabilities, the symplectic one provides the dynamics via Hamilton's equations. Because of the 2-out-of-3 property, requiring the metric and symplectic structures to be compatible will get us an almost-complex structure for free.

You don't need polar form, just take the real and imaginary parts. –  Ron Maimon Jul 20 '12 at 10:00

The most convincing I've heard so far is that since there are "waves" in QM, the complex-number formulation happens to be convenient and efficient. –  Frank Jul 20 '12 at 15:56
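As a concrete check of the pairs-of-reals ("shmultiplication") construction above and of the $2\times2$ matrix representation mentioned in the comments, here is a minimal sketch (all names illustrative, not part of the original thread) verifying that both emulations reproduce ordinary complex arithmetic, with the role of $i$ played by the real matrix $J$:

```python
import numpy as np

# Pairs of reals with "shmultiplication": (x,y)*(z,w) = (xz - yw, xw + yz)
def shmul(a, b):
    x, y = a
    z, w = b
    return (x * z - y * w, x * w + y * z)

# Equivalent 2x2 real-matrix emulation: x + iy  ->  [[x, y], [-y, x]]
J = np.array([[0.0, 1.0], [-1.0, 0.0]])    # real matrix playing the role of i
I2 = np.eye(2)

def as_matrix(x, y):
    return x * I2 + y * J

assert np.allclose(J @ J, -I2)             # J^2 = -1, as required of an imaginary unit

# All three multiplications agree, e.g. (1 + 2i)(3 - 4i) = 11 + 2i:
a, b = (1.0, 2.0), (3.0, -4.0)
print(shmul(a, b))                         # (11.0, 2.0)
print(as_matrix(*a) @ as_matrix(*b))       # [[11.  2.] [-2. 11.]]
print((1 + 2j) * (3 - 4j))                 # (11+2j)
```

The matrix $J$ is exactly the "$i$ operator" of the time-reversal answer above: an orthogonal transformation with $J^2 = -1$, so the question of why complex numbers appear in QM becomes the question of why such a $J$ commutes with all observables.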
In the semiconductor structures which originally motivated this work the charge carriers whose motion we seek to describe are really quasiparticles whose properties are determined by the energy band structure (or energy-momentum dispersion relation) of the semiconductor material. These carriers usually occupy states near an extremum of a band, and thus for the simpler cases of interest the band structure can be approximated as

$$E(\mathbf{k}) = E_c + \frac{\hbar^2\,|\mathbf{k}-\mathbf{k}_0|^2}{2m^*},$$

where $E_c$ is the energy at the edge of the band and is just the heterostructure potential used in Appendix 9, $\mathbf{k}_0$ is the wavevector at which this extremum occurs, and $m^*$ is the ``effective mass'' which characterizes the curvature of the dispersion relation. This dispersion relation may be modeled by the effective-mass Schrödinger equation

$$\left[-\frac{\hbar^2}{2m^*}\nabla^2 + E_c + V(\mathbf{r})\right]\psi = E\,\psi, \tag{13.153}$$

where $V$ is the Hartree potential, which is assumed to be slowly varying. The wavefunction $\psi$ in (13.153) is strictly an envelope function for the true wavefunction. In the Wannier-Slater approach to effective-mass theory (Slater, 1949) $\psi$ is a discrete function (defined on the lattice points) giving the amplitude of the Wannier function at each point [though $\psi$ is approximated by a continuous function to derive the differential equation (13.153)]. In the approach of Luttinger and Kohn (1955) $\psi$ is a continuous but band-limited function which is multiplied by a perfectly periodic Bloch function to obtain the complete wavefunction.

A semiconductor heterostructure is a single crystal which includes (deliberately introduced) local changes in the chemical composition. These introduce changes in the ``local band structure'' which must be incorporated into the effective-mass equation (13.153) to obtain an accurate model of the quasiparticle dynamics in a heterostructure. For the sake of concreteness let us consider an abrupt heterojunction. The local band-edge energy $E_c$ will be shifted across the heterojunction, and this effect is easily incorporated into (13.153) by making $E_c$ a function of position. In general, the value of the effective mass $m^*$ will also change across a heterojunction, and this requires a more careful treatment of the kinetic-energy term. (Another way to view this problem is to state the conditions for matching $\psi$ across an interface with discontinuous $m^*$. Because the matching condition follows uniquely from the form of the Hamiltonian, we will focus upon the latter.) The problem is that many of the expressions one might write down [such as that which appears in (13.153)] become non-Hermitian when $m^*$ is taken to be a function of position. The simplest manifestly Hermitian form is:

$$T = -\frac{\hbar^2}{2}\,\nabla\cdot\!\left[\frac{1}{m^*(\mathbf{r})}\,\nabla\right], \tag{13.154}$$

although other, more complicated expressions have been suggested (see Morrow and Brownstein, 1984). In general, it appears that (13.154), which might be termed the ``minimal Hermitian form,'' is an adequate approximation when the magnitude of the change in $m^*$ is small, as is typically true of equivalent energy bands in closely related materials. When the discontinuity is of a larger magnitude, as when inequivalent bands are involved, one probably needs to explicitly solve the multiband problem and infer the form of the effective-mass equation from the results (see, for example, Grinberg and Luryi, 1989).

We can obtain different discrete approximations to (13.154) depending upon where we assume the heterojunction is actually located with respect to the mesh points. The most consistent scheme is to assume that the junction is located midway between two adjacent meshpoints.
The discrete Hamiltonian (3.24) then becomes (Mains, Mehdi, and Haddad, 1989)

$$(H\psi)_j = -\frac{\hbar^2}{2\Delta^2}\left[\frac{\psi_{j+1}-\psi_j}{m^*_{j+1/2}} - \frac{\psi_j-\psi_{j-1}}{m^*_{j-1/2}}\right] + \left(E_c + V\right)_j\,\psi_j, \tag{13.155}$$

where $\Delta$ is the mesh spacing and $m^*_{j\pm1/2}$ denotes the effective mass evaluated midway between meshpoints; this form was used in all of the tunneling calculations presented here.

If we use (13.154) to construct the kinetic-energy superoperator, how is the form of this superoperator (in the Wigner-Weyl representation) affected? We might hope that a simple expression would result, such as

$$T(x,k) = \frac{\hbar^2 k^2}{2m^*(x)}. \tag{13.156}$$

(This is the expression which was actually used in the calculations presented here.) Unfortunately, (13.156) holds only if $1/m^*(x)$ effectively commutes with the derivative operators, which holds only if the band structure varies slowly as a function of position. In general, a position-dependent effective mass will produce a nonlocal form for the kinetic-energy superoperator in the Wigner-Weyl representation (Barker, Lowe, and Murray, 1984). A more complete treatment, expressing the Wigner-Weyl transformation in terms of the Wannier and Bloch representations (rather than the position and momentum representations) has been developed by Miller and Neikirk (1990). This analysis also demonstrates a nonlocal kinetic-energy term.
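As a numerical illustration of the midpoint-mass scheme, the following minimal sketch (not from the original text; material parameters are purely illustrative, in units with $\hbar = 1$) assembles the tridiagonal Hamiltonian of (13.155) for a single quantum well with a position-dependent effective mass and band edge:

```python
import numpy as np

# Midpoint-mass discretization of the minimal Hermitian effective-mass
# Hamiltonian on a uniform 1D mesh; hbar = 1, masses in free-electron units.
N, dx = 400, 0.1
x = np.arange(N) * dx

# Hypothetical well: lighter mass and lower band edge inside the well region
inside = np.abs(x - x.mean()) < 10.0
m  = np.where(inside, 0.067, 0.092)   # m*(x)
Ec = np.where(inside, 0.0, 0.3)       # band-edge profile E_c(x), V absorbed here

m_half = 0.5 * (m[:-1] + m[1:])       # effective mass midway between meshpoints
t = 1.0 / (2.0 * dx**2 * m_half)      # hopping terms hbar^2 / (2 m*_{j+1/2} dx^2)

H = np.zeros((N, N))
for j in range(N):
    if j > 0:
        H[j, j - 1] = -t[j - 1]
    if j < N - 1:
        H[j, j + 1] = -t[j]
    H[j, j] = (t[j - 1] if j > 0 else 0.0) + (t[j] if j < N - 1 else 0.0) + Ec[j]

E, psi = np.linalg.eigh(H)            # envelope-function eigenstates
print("lowest eigenvalues:", E[:3])   # eigenvalues below 0.3 are confined states
```

The Hamiltonian is manifestly symmetric (Hermitian) by construction, which is the whole point of evaluating $m^*$ at the midpoints rather than at the meshpoints themselves.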
Anderson localization

In condensed matter physics, Anderson localization, also known as strong localization, is the absence of diffusion of waves in a disordered medium. This phenomenon is named after the American physicist P. W. Anderson, who was the first one to suggest the possibility of electron localization inside a semiconductor, provided that the degree of randomness of the impurities or defects is sufficiently large.[1]

Anderson localization is a general wave phenomenon that applies to the transport of electromagnetic waves, acoustic waves, quantum waves, spin waves, etc. This phenomenon is to be distinguished from weak localization, which is the precursor effect of Anderson localization (see below), and from Mott localization, named after Sir Nevill Mott, where the transition from metallic to insulating behaviour is not due to disorder, but to a strong mutual Coulomb repulsion of electrons.

In the original Anderson tight-binding model, the evolution of the wave function $\psi$ on the $d$-dimensional lattice $\mathbb{Z}^d$ is given by the Schrödinger equation

$$i \hbar \dot{\psi} = H \psi~,$$

where the Hamiltonian $H$ is given by

$$(H \phi)(j) = E_j \phi(j) + \sum_{k \neq j} V(|k-j|) \phi(k)~,$$

with $E_j$ random and independent, and interaction $V(r)$ falling off as $r^{-2}$ at infinity. For example, one may take $E_j$ uniformly distributed in $[-W, +W]$, and

$$V(|r|) = \begin{cases} 1, & |r| = 1 \\ 0, & \text{otherwise.} \end{cases}$$

Starting with $\psi_0$ localised at the origin, one is interested in how fast the probability distribution $|\psi|^2$ diffuses. Anderson's analysis shows the following:

• if $d$ is 1 or 2 and $W$ is arbitrary, or if $d \geq 3$ and $W/\hbar$ is sufficiently large, then the probability distribution remains localized: $$\sum_{n \in \mathbb{Z}^d} |\psi(t,n)|^2 |n| \leq C$$ uniformly in $t$. This phenomenon is called Anderson localization.

• if $d \geq 3$ and $W/\hbar$ is small, $$\sum_{n \in \mathbb{Z}^d} |\psi(t,n)|^2 |n| \approx D \sqrt{t}~,$$ where $D$ is the diffusion constant.

The phenomenon of Anderson localization, particularly that of weak localization, finds its origin in the wave interference between multiple-scattering paths. In the strong scattering limit, the severe interferences can completely halt the waves inside the disordered medium.

For non-interacting electrons, a highly successful approach was put forward in 1979 by Abrahams et al.[2] This scaling hypothesis of localization suggests that a disorder-induced metal-insulator transition (MIT) exists for non-interacting electrons in three dimensions (3D) at zero magnetic field and in the absence of spin-orbit coupling. Much further work has subsequently supported these scaling arguments both analytically and numerically (Brandes et al., 2003; see Further Reading). In 1D and 2D, the same hypothesis shows that there are no extended states and thus no MIT. However, since 2 is the lower critical dimension of the localization problem, the 2D case is in a sense close to 3D: states are only marginally localized for weak disorder, and a small spin-orbit coupling can lead to the existence of extended states and thus an MIT. Consequently, the localization lengths of a 2D system with potential disorder can be quite large, so that in numerical approaches one can always find a localization-delocalization transition when either decreasing system size for fixed disorder or increasing disorder for fixed system size.
Most numerical approaches to the localization problem use the standard tight-binding Anderson Hamiltonian with onsite-potential disorder. Characteristics of the electronic eigenstates are then investigated by studies of participation numbers obtained by exact diagonalization, multifractal properties, level statistics and many others. Especially fruitful is the transfer-matrix method (TMM), which allows a direct computation of the localization lengths and further validates the scaling hypothesis by a numerical proof of the existence of a one-parameter scaling function. Direct numerical solution of Maxwell equations to demonstrate Anderson localization of light has been implemented (Conti and Fratalocchi, 2008).

Experimental evidence

Two reports of Anderson localization of light in 3D random media exist up to date (Wiersma et al., 1997 and Storzer et al., 2006; see Further Reading), even though absorption complicates interpretation of experimental results (Scheffold et al., 1999). Anderson localization can also be observed in a perturbed periodic potential, where the transverse localization of light is caused by random fluctuations on a photonic lattice. Experimental realizations of transverse localization were reported for a 2D lattice (Schwartz et al., 2007) and a 1D lattice (Lahini et al., 2006). Transverse Anderson localization of light has also been demonstrated in an optical fiber medium (Karbasi et al., 2012) and has also been used to transport images through the fiber (Karbasi et al., 2014). It has also been observed by localization of a Bose–Einstein condensate in a 1D disordered optical potential (Billy et al., 2008; Roati et al., 2008). Anderson localization of elastic waves in a 3D disordered medium has been reported (Hu et al., 2008). The observation of the MIT has been reported in a 3D model with atomic matter waves (Chabé et al., 2008). Random lasers can operate using this phenomenon.

References

1. Anderson, P. W. (1958). "Absence of Diffusion in Certain Random Lattices". Phys. Rev. 109 (5): 1492–1505. Bibcode:1958PhRv..109.1492A. doi:10.1103/PhysRev.109.1492.
2. Abrahams, E.; Anderson, P.W.; Licciardello, D.C.; Ramakrishnan, T.V. (1979). "Scaling Theory of Localization: Absence of Quantum Diffusion in Two Dimensions". Phys. Rev. Lett. 42 (10): 673–676. Bibcode:1979PhRvL..42..673A. doi:10.1103/PhysRevLett.42.673.

Further reading

• Brandes, T. & Kettemann, S. (2003). "The Anderson Transition and its Ramifications: Localisation, Quantum Interference, and Interactions". Berlin: Springer Verlag.
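The exact-diagonalization and participation-number diagnostics mentioned above can be illustrated with a short numerical sketch (not part of the article) for the 1D Anderson model, with nearest-neighbour hopping $V = 1$ and site energies $E_j$ uniform in $[-W, +W]$ as defined above:

```python
import numpy as np

# 1D Anderson tight-binding Hamiltonian with on-site disorder
rng = np.random.default_rng(0)
N, W = 1000, 3.0

Ej = rng.uniform(-W, W, size=N)                      # random site energies
H = (np.diag(Ej)
     + np.diag(np.ones(N - 1), 1)                    # hopping V = 1 to nearest
     + np.diag(np.ones(N - 1), -1))                  # neighbours only

energies, states = np.linalg.eigh(H)                 # exact diagonalization

# Inverse participation ratio: ~1/N for extended states, of order 1/xi for
# a state localized over ~xi sites
ipr = np.sum(np.abs(states) ** 4, axis=0)
print("mean IPR at W = %.1f: %.4f  (~%.0f sites per state)"
      % (W, ipr.mean(), 1.0 / ipr.mean()))
```

Repeating this while varying $W$ or the system size $N$ gives a toy version of the finite-size analysis described above; in 1D the states remain localized for any nonzero disorder, consistent with the scaling hypothesis.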
Molecules from scratch without the fiendish physics

A SUITE of artificial intelligence algorithms may become the ultimate chemistry set. Software can now quickly predict a property of molecules from their theoretical structure. Similar advances should allow chemists to design new molecules on computers instead of by lengthy trial-and-error.

Our physical understanding of the macroscopic world is so good that everything from bridges to aircraft can be designed and tested on a computer. There's no need to make every possible design to figure out which ones work. Microscopic molecules are a different story. "Basically, we are still doing chemistry like Thomas Edison," says Anatole von Lilienfeld of Argonne National Laboratory in Lemont, Illinois.

The chief enemy of computer-aided chemical design is the Schrödinger equation. In theory, this mathematical beast can be solved to give the probability that electrons in an atom or molecule will be in certain positions, giving rise to chemical and physical properties. But because the equation increases in complexity as more electrons and protons are introduced, exact solutions only exist for the simplest systems: the hydrogen atom, composed of one electron and one proton, and the hydrogen molecule, which has two electrons and two protons.

This complexity rules out the possibility of exactly predicting the properties of large molecules that might be useful for engineering or medicine. "It's out of the question to solve the Schrödinger equation to arbitrary precision for, say, aspirin," says von Lilienfeld.

So he and his colleagues bypassed the fiendish equation entirely and turned instead to a computer-science technique. Machine learning is already widely used to find patterns in large data sets with complicated underlying rules, including stock market analysis, ecology and Amazon's personalised book recommendations. An algorithm is fed examples (other shoppers who bought the book you're looking at, for instance) and the computer uses them to predict an outcome (other books you might like). "In the same way, we learn from molecules and use them as previous examples to predict properties of new molecules," says von Lilienfeld.

His team focused on a basic property: the energy tied up in all the bonds holding a molecule together, the atomisation energy. The team built a database of 7165 molecules with known atomisation energies and structures. The computer used 1000 of these to identify structural features that could predict the atomisation energies. When the researchers tested the resulting algorithm on the remaining 6165 molecules, it produced atomisation energies within 1 per cent of the true value. That is comparable to the accuracy of mathematical approximations of the Schrödinger equation, which work but take longer to calculate as molecules get bigger (Physical Review Letters, DOI: 10.1103/PhysRevLett.108.058301). The algorithm found solutions in a millisecond that would take these earlier methods an hour.

"Instead of having to wait years to screen lots of new molecules, you might have to wait weeks or a month," says Mark Tuckerman of New York University, who was not involved in the new work.

The algorithm is still mainly a proof of principle. If it can learn to predict something else, such as how well a molecule binds to an enzyme, it could help with designing drugs, fuel cells, batteries or biosensors. "The applications can be as broad as chemistry," von Lilienfeld says.
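The method behind the PRL cited above reportedly pairs a molecular descriptor (a "Coulomb matrix" built from nuclear charges and geometry) with kernel ridge regression. The toy sketch below, with synthetic descriptors and a synthetic target standing in for the real database, shows the general pattern and why predictions take milliseconds once training is done: a prediction is just a weighted sum of kernel evaluations against the training set.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, d = 1000, 200, 20                  # 1000 training molecules

X = rng.normal(size=(n_train + n_test, d))          # stand-in molecular descriptors
y = np.sin(X).sum(axis=1)                           # stand-in "atomisation energy"

def laplacian_kernel(A, B, sigma=8.0):
    # k(a, b) = exp(-|a - b|_1 / sigma)
    dists = np.abs(A[:, None, :] - B[None, :, :]).sum(axis=2)
    return np.exp(-dists / sigma)

# Fit: solve (K + lambda I) alpha = y on the training set
K = laplacian_kernel(X[:n_train], X[:n_train])
alpha = np.linalg.solve(K + 1e-8 * np.eye(n_train), y[:n_train])

# Predict: one kernel evaluation per training molecule, no Schrodinger equation
y_pred = laplacian_kernel(X[n_train:], X[:n_train]) @ alpha
print("test MAE:", np.abs(y_pred - y[n_train:]).mean())
```

Nothing here encodes any physics; the model interpolates among known examples, which is exactly why its accuracy is bounded by the quality and coverage of the training database.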
See graphic: "The not-so-simple Schrödinger equation" (Issue 2851 of New Scientist magazine)
PL 1  Digital Solution of the Mind-Body Problem  Ralph Abraham, Sisir Roy <abraham@vismath.org> (Department of Mathematics, Santa Cruz, U.S.A.)

Using the concepts of the mathematical theory of self-organizing systems in understanding the emergence of space-time at the Planck scale, we propose a digital solution of the mind-body problem. This will shed new light on the interconnection of consciousness and the physical world.

PL 2  The role of quantum cooperativity in neural signaling  Gustav Bernroider, Johann Summhammer <gustav.bernroider@sbg.ac.at> (Neurobiology, University of Salzburg, Salzburg, Salzburg, Austria)

According to the neural doctrine (1), propagating membrane potentials establish the basis for coding and communication in the nervous system. The physical representation of information is assumed to be contained in the spatio-temporal characteristic of propagating membrane potentials, as originally described by Hodgkin and Huxley (HH, 2). Despite an uncountable number of correlation studies employing HH-type signals (action potentials, APs) and brain function, the underlying equations of motion contain coupled dynamics of channel proteins and membrane voltage that still lack a consistent theoretical background. Generally, there is no fine-grained level of precision in the correlation of action potentials with higher-level brain functions, and there are several inconsistencies between experimental observations and HH-type predictions. Action potentials are composed from the concerted flow of ions through aqueous membrane pores provided by a family of voltage-sensitive membrane proteins. In a circular type of argumentation, selective permeability determines membrane voltage and membrane voltage determines permeability. There is no ‘window’ in the chain of events that could account for two indispensable features that are observed in ‘real’ neuronal ensembles and considered to be decisive in the exploration of cognitive processes: (i) large ongoing variability to repeated sensory representations, as observed in the visual cortex more than ten years ago (3), and (ii) signal onset-rapidness in cortical neurons, as shown previously (4). Both phenomena cannot be explained by classical HH-type models. Further, in view of recent advances in atomic-level reconstructions and molecular dynamics (MD) simulations, the originally proposed independence of within-channel states (the ‘gating particles’ in the HH model) and independent gating states between channels seems to be untenable. In the present work we introduce quantum mechanical (QM) correlations (entanglement) into the dynamics of single channels and into the temporal evolution of multiple channel states. This is justified by at least two good reasons: (i) the gating transitions within channel proteins are established at the atomic scale, involving QM action orders at least over a certain number of vibrational periods of the engaged atoms, and (ii) the states of the channel are not mutually independent as assumed in the classical model. Dropping the assumption of independent gating transitions, we introduce a model where sub-domains of the protein responsible for selectivity and permeation are in a short entangled state. The entanglement of gating domains implies that their probabilistic switching behaviour will be governed by some coordination, while each gating domain itself still appears fully random.
The underlying model parameters can be tuned from independence, attaining the classical HH behaviour, to a two-, three- or more-particle quantum mechanically entangled version. Our results show that, even with a very moderate assumption on the strength of entanglement that could resist the breaking power of the thermal bath to which the protein is exposed, the signal onset can be several times faster than predicted by the HH model and is in accord with the observed in-vivo response of cortical neurons (4). This is a particularly important result in view of the persistent debate about the survival time of coherent states in the brain. Further, we show that quantum correlations of channel states allow for the ongoing signal variations that are observed in evoked cortical responses. (1) Barlow, H (1972) Perception, 1, 371-394. (2) Hodgkin, A.L. and Huxley, A.F (1952) J Physiol (London), 117, 500-544. (3) Arieli, A, Sterkin, A, Grinvald, A, Ad Aertsen (1996) Science, 273, 1868-1871. (4) Naundorf, B, Wolf F, M Volgushev (2006) Nature, 440, 1060-1063

PL 3  Schrödinger's Cat: Empirical research into the radical subjective solution of the measurement problem  Dick Bierman, Stephen Whitmarsh <d.j.bierman@uva.nl> (PN, University of Amsterdam, Amsterdam, Netherlands)

The most controversial of all solutions of the measurement problem holds that a measurement is not completed until a conscious observation is made. In other words, quantum physics is a science of potentialities, and the measurement, i.e. the conscious observation, brings about reality by reducing the state vector to one of the eigenstates. In a series of experiments modeled after the famous experiment by the Shimony group, we have explored the brain responses of observers of a quantum event. In about 50% of the exposures this quantum event had already been observed about one second earlier by another person. This random manipulation was unknown to the final observer. The first experiment along these lines gave suggestive evidence for a difference in brain responses dependent on the manipulation. In subsequent experiments quantum events were mixed with classical events, and the results of these experiments, which have been reported elsewhere, were ambiguous. In a final experiment we are trying to resolve the paradoxical results obtained so far. In this experiment the final observer receives detailed information about the type of event that (s)he observes. Also, the experimental protocol is such that pre-observed events cannot be distinguished from not pre-observed events either on the basis of their physical characteristics or on the basis of inter-event time distributions. Results will be presented at the conference.

PL 4  EEG Gamma Coherence Changes and Spiritual Experiences During Ayahuasca  Frank Echenhofer <fechenhofer@ciis.edu> (Clinical Psychology, California Institute of Integral Studies, Richmond, CA)

Ayahuasca is a psychedelic sacramental brew used possibly for more than a thousand years by many indigenous communities of the Brazilian and Peruvian Amazon and by several syncretic religions that originated in 20th-century Brazil and that combine ayahuasca shamanism and Christianity. In the last decade, a growing number of North Americans and Europeans have combined ayahuasca shamanism with other religious cosmologies and practices. Some ayahuasca reports are similar to archetypal spiritual experiences at the core of many religions.
Studies have shown that authentic non-drug-induced spiritual experiences cannot be distinguished from psychedelic spiritual experiences. Religious studies have suggested that psychedelics may have inspired the formative revelations of many shamanic cosmologies, some Greek mystery religions, the Hindu Vedas, and several ancient South and Central American religious traditions. Archetypal spiritual experiences, such as experiencing mandalas, journeying to other worlds, and encountering entities, are documented in monotheistic religions, in ayahuasca shamanism, and in ayahuasca reports of North Americans and Europeans. Most spiritual traditions agree that waking consciousness can be transformed to reveal a more comprehensive reality. Studying ayahuasca may provide a reliable laboratory approach to use neuroscience and systematic phenomenological methods to reveal the neural correlates of archetypal spiritual experiences. Our findings, using a multi-disciplinary approach integrating the methods of comparative religion, anthropology, and qEEG, will be presented. Recently psilocybin was reported to facilitate profoundly meaningful experiences in healthy individuals. A psilocybin clinical trial designed to facilitate spiritual experiences in terminal patients has shown initial positive results. Research with a Brazilian ayahuasca religion found that long-term users of ayahuasca had overcome alcohol addiction, and neuropsychological testing revealed no detrimental effects. Previous psychedelic EEG research found theta and alpha power decreased during mescaline, psilocybin, and LSD, while some individuals showed increased modal alpha frequency. It has been theorized that EEG gamma coherence “binds” different modalities of cortical information processing. Because ayahuasca reports emphasize that the sensory, affective, cognitive, and spiritual modalities of experiencing are more integrated, we hypothesized that ayahuasca would enhance gamma coherence. Our research found that after 45 minutes of ingesting ayahuasca, participants reported the most intense consciousness alterations, or “peaking”. Some reported very brilliant and unusual fast-morphing visions comprised of dazzling colors, multiple layers, and exquisitely beautiful architectural structures. Some participants reported that music modulated the physiognomic aspects of the experiential display. Others experienced fear, being overwhelmed, and nausea and vomiting, all of which are viewed in shamanism as bodily cleansing and healing. A few reported classical archetypal journey experiences, gaining entry to and exploring other realms of reality and communicating with intelligent entities. In eyes-closed ayahuasca vs. baseline conditions, ayahuasca decreased alpha and theta power, suggesting enhanced activation and information processing, and enhanced gamma coherence, suggesting increased “binding” of sensory, affective, and cognitive processes. Some participants showed significant coherence changes in other EEG frequencies, suggesting the importance of examining individual differences in future research. Our findings suggest ayahuasca may enhance both binding and cognitive complexity, exemplified in feelings of interconnectedness and meaningfulness during archetypal spiritual experiences.

PL 5  Why Quantum Mind to begin with? A Proof for the Incompleteness of the Physical Account of Behavior  Avshalom Elitzur <Avshalom.Elitzur@weizmann.ac.il> (Univ, Rehovot, Israel)

Should quantum mechanics be applied to the study of consciousness?
For this workshop’s participants the answer is obvious, but mainstream science maintains that the burden of proof is on them. Penrose (1995) has put forward an ingenious argument that mathematical invention is non-algorithmic, but this argument failed to convince the mathematical community. This presentation offers a simpler argument of this kind. On the grounds of classical physics alone it is possible to prove that any physical description of behavior is, in principle, incomplete. Every simple analysis of a particular conscious experience, like that of a certain color or tone (a “quale”), reveals an ingredient that is not reducible to physical laws. While this is disturbing enough, worse consequences await any theory that allows these qualia to play any causal role in behavior. Chalmers (1996) has intensively studied the “zombie,” a hypothetical human being that acts only by physical laws without having qualia. He then purported to prove that such a being must manifest all the actions manifested by a conscious human, including the assertion that consciousness is not explained by physical law. This way Chalmers hoped to maintain the closure of the physical world without denying that consciousness is a genuine phenomenon. I present a logical proof that Chalmers’ argument is flatly wrong. Some form of dualism of the worst kind, namely interactive dualism, may be inescapable. I begin by showing that a zombie can never perceive a genuine contradiction between the physical mechanism underlying her perception and her immediate conscious experience. Zombies cannot – but humans do. From this difference it rigorously follows that consciousness, as something distinct by nature from any physical force, interferes with the brain’s operation. The ways out of this conclusion are very few: 1. Dismiss consciousness as illusory, due to some kind of misperception afflicting numerous thinkers and scientists. In this case, “misperception” being a physical phenomenon by the very tenets of physicalism, the burden of proof is now back on mainstream physics: future neurophysiology must be able to point out the particular failure in the human brain’s operation which is responsible for many people’s belief that consciousness and brain mechanisms are not identical. 2. Concede that energy and/or momentum conservation laws do not always hold. This option ensures mainstream physics’ antagonism. 3. Concede that the second law of thermodynamics does not always hold. This option too is bound to be vehemently opposed by the physical community. Since option (1) is an empirical question, the entire issue is no longer confined to philosophy. The answer is bound to come from scientific research. Returning to quantum mechanics, it is striking that, despite its abandonment of many basic notions of classical physics, it has never seriously considered options (2) and (3). I propose no solution to this problem. My aim is only to show that the riddle of consciousness is much more acute than usually believed, yet it can be resolved scientifically.

PL 6  Realistic Superstring Mechanisms for Quantum Neuronal Behavior  John Hagelin <hagelinj@aol.com> (Physics, Maharishi International University, Fairfield, IA)

The abundance of "hidden sector" matter in the world today is a nearly inescapable conclusion of realistic superstring theories.
Hidden sector matter provides a natural mechanism for macroscopic quantum coherent phenomena in biological systems, where characteristically high temperatures normally preclude such quantum behavior. String theory thus provides a plausible solution to the central challenge in quantum-mind research, namely, "how can the quantum-mechanical mechanisms one would naturally associate with consciousness possibly be supported by the human brain?" Elaboration: Many have speculated that aspects of conscious experience have their physical origin in quantum-mechanical mechanisms. The most challenging associated question has been, "How does the brain--a predominantly macroscopic organ immersed in a high-temperature, high-entropy environment--support quantum-mechanical mechanisms?" Whereas intracellular quantum mechanisms have been proposed, it is probably essential that a complete quantum-mechanical understanding of consciousness will require quantum correlations that are inter-cellular--i.e., collective correlations among multiple neurons separated by macroscopic distances. Until now, fully viable quantum mechanisms have been elusive. We propose a plausible explanation for stable, large-scale quantum-mechanical coherence based on new physical mechanisms predicted by the superstring. All realistic string models contain "hidden sector" particles and forces, typically including a massless spin-1 "quasi-photon" and at least one light charged scalar meson. Whereas it had been previously assumed that these hidden sector particles interact only gravitationally with normal ("observable sector") fields, it now appears more likely that there is a weak electromagnetic coupling between the two worlds of matter. The hidden sector world is spatially and temporally coincident with ours, but due to its weak coupling, is only dimly observable through dedicated EM detectors currently under development. Also due to its weak coupling, hidden sector matter does not equilibrate thermally with ordinary matter, and thus the hidden sector ambient temperature is calculated to be a few degrees Kelvin--similar to the cosmic neutrino background. This has two important physical ramifications: 1) Hidden sector matter, despite its weak coupling, clings electrostatically to normal matter--especially to carbon-based biological matter. Its concentration in the cellular interior is predicted to be high. 2) Due to its low ambient temperature, hidden sector particles are expected to exhibit macroscopic quantum coherent effects, and provide a viable mechanism for short-circuiting synaptic communication and for sustaining large-scale quantum correlation among distant neurons. In this talk, we present what is currently known about hidden sector matter and its potential relevance to quantum-mechanical biological functioning, and suggest avenues of future empirical and theoretical research. We also present published experimental evidence for long-range "field effects" of consciousness, which provides empirical support for the aforementioned quantum effects, and which helps to discriminate among competing quantum-mechanical models of consciousness.
PL 7  Schrödinger’s proteins: How quantum biology can explain consciousness   Stuart Hameroff <hameroff@u.arizona.edu> (Center for Consciousness Studies, University of Arizona, Tucson, Arizona)    Classical approaches to consciousness view brain neurons, axonal spikes/firings and chemical synaptic transmissions as fundamental information bits and switches in feed-forward and feedback networks of “integrate-and-fire” neurons. However, this popular view 1) fails to account for unconscious-to-conscious transitions, binding, and the ‘hard problem’ of subjective experience, 2) forces the stark conclusion that consciousness is an epiphenomenal illusion, and 3) conflicts with the two best correlates of consciousness: gamma synchrony EEG and anesthesia, both of which indicate that consciousness occurs primarily in dendrites (i.e. during the collective ‘integrate’, rather than ‘fire’, phases of integrate-and-fire). Gamma synchrony EEG requires dendro-dendritic gap junctions (lateral connections in hidden input layers of a feed-forward network) and may require non-local quantum correlations to account for precise brain-wide coherence. Anesthetic gases selectively erase consciousness and gamma synchrony EEG, sparing evoked potentials, sub-gamma EEG, autonomic drives and axonal spike/firing capabilities. The anesthetic gases act solely by quantum London forces in non-polar pockets of electron resonance clouds within a subset of dendritic proteins. In the absence of anesthetic (i.e. in the conscious state), quantum superposition, coherence and non-local entanglement in these electron clouds are amplified to govern protein conformation and function. Thus anesthetic-sensitive proteins may act like quantum bits (“qubits”), engaging in quantum computation (“Schrödinger’s proteins”). Scientists since Schrödinger have suggested an intrinsic role for biomolecular quantum effects in life and consciousness. The Penrose-Hameroff Orch OR model proposes consciousness to be a sequence of gamma-synchronized discrete events, corresponding with quantum computations among entangled, superposed microtubule subunits in gap junction-connected dendrites (“dendritic webs”). Microtubule quantum computations self-collapse by Penrose objective reduction (OR), a proposed threshold tied to instability in spacetime geometry separations/superpositions. Thus Orch OR connects brain processes to fundamental spacetime geometry in which (according to Penrose) Platonic values are encoded. Classical microtubule states chosen with each Orch OR event can trigger axonal spikes and convey the content of conscious experience. Orch OR appears vulnerable to decoherence in the “warm, wet” brain. However, evidence suggests 1) heat can pump (rather than destroy) biomolecular quantum processes, 2) quantum coherence involving proteins occurs biologically in photosynthesis, 3) quantum correlations may govern ion channel cooperativity, 4) psychoactive molecules interact with receptors by quantum correlations, 5) quantum computing occurs at increasingly warm temperatures, 6) microtubules appear to have an intrinsic quantum error correction topology, and 7) “quantum protectorates” occur in regions of non-polar electron resonance clouds in proteins, membranes and nucleic acids. Further, atemporal quantum effects can account for the famous “backward time referral” found in the brain by Libet, and allow real-time control of our conscious actions, rescuing consciousness from epiphenomenal illusion. So what is consciousness?
According to Orch OR, consciousness is a sequence of events in fundamental spacetime geometry, “ripples on the edge” between quantum and classical worlds. The spacetime events are amplified through quantum processes in non-polar electron resonance regions to causally influence biomolecular functions, perhaps connecting us to quantum gravity instantiations of Penrose’s Platonic values, Bohm’s “implicate order” or, in some cases, mystical, spiritual and/or altered state experiences. www.quantumconsciousness.org  PL 8  Do quantum phenomena provide objective evidence for consciousness?  Richard Healey <rhealey@email.Arizona.edu> (Philosophy, University of Arizona, Tucson, Arizona)    Kuttner and Rosenblum (2006a,b) argue that a theory-neutral version of the quantum two-slit experiment provides objective evidence for consciousness, indeed the only objective evidence. However, their description of the experiment is not theory-neutral. Kuttner and Rosenblum’s argument that a particular experiment provides objective evidence for consciousness fails: their argument rests on dubious assumptions about the physical effects of consciousness for which we lack objective evidence. Reflecting on our current understanding of quantum theory is one clear way to see this objection. Each of a variety of different interpretations of quantum theory rejects at least one key assumption of Kuttner and Rosenblum’s allegedly theory-neutral description. Moreover, these include interpretations within which consciousness plays no role. Perhaps none of those interpretations will prove acceptable. Quantum theory itself may one day be superseded by a superior theory. Neither eventuality would undermine my objection, which does not depend on quantum theory under any particular interpretation. I suggest that if there is objective evidence for consciousness it will be manifested in a very different class of phenomena.   PL 9  Quantum Mechanical Implications for Mind-Body Issues  Menas Kafatos, S. Roy, K. H. Yang, R. Ceballos <mkafatos@crete.gmu.edu> (College of Science, George Mason University, Fairfax, VA)    Many authors have speculated on the importance of quantum theory to brain dynamics and even its relevance to consciousness. In particular, mind-body issues, by their very nature, imply non-classical physics approaches. Quantum mechanics, through the role of the observer, measurement theory and recent laboratory evidence at the ion-channel level, may have serious implications for these issues. In the present paper, we explore the relevance of quantum mechanics and some possible ontological as well as laboratory issues.  PL 10  Principles of Quantum Buddhism  Francois Lepine <info@quantumbuddhism.org> (Quantum Buddhism Association, St-Raymond, Quebec, Canada)    Science and religion have been opposed regarding consciousness since Descartes separated matter and mind: Cartesian dualism. Non-dualist approaches include scientific materialism, in which matter produces mind, and idealism, in which mind produces matter. On the other hand, Buddhists (and neutral monists in western philosophy) believe mind and matter both derive from a deeper-lying common entity. In recent decades it has become evident that quantum physics and quantum gravity can provide a scientifically plausible accommodation of the Buddhist (and neutral monist) approach. In Buddhism the deeper-lying monistic entity is a pure Platonic wisdom of the Supreme Unified Consciousness which can give rise to matter and/or mind.
In scientific terms it is the quantum geometry at the tiniest level (Planck scale) of the universe (quantum gravity), or the unified quantum field. Sir Roger Penrose proposed that Platonic forms including mathematical truth, ethical and aesthetic values (which Plato assumed to be abstract) exist as actual configurations at the Planck scale. Cosmic wisdom in the Buddhist Supreme Unified Consciousness pervades the universe, involving, informing and interconnecting living and non-living beings. Planck scale quantum information encoding Platonic values (cosmic wisdom) is non-local and holographic, hence repeating everywhere, atemporally (“everywhen”) and at various scales. Buddhist Supreme Unified Consciousness manifests matter and/or mind. Quantum geometry gives rise to either matter alone or matter and mind, depending on whether quantum state reduction to classical states occurs via decoherence or measurement (in which case matter alone), or via a type of threshold-based self-reduction (e.g. Penrose objective reduction), giving matter and conscious mind. In Buddhism, conscious awareness in an individual, self-consciousness, is a series of ripples on the universal pond of Supreme Unified Consciousness. In science, self-consciousness is a series of Penrose objective reductions, ripples in quantum geometry on the edge between the quantum world of multiple coexisting possibilities and the classical world of definite states. In science, these conscious ripples, or moments, are coherently synchronized with gamma EEG brain waves, 40 or more conscious moments per second. In western philosophy these are Whitehead’s “occasions of experience”. Buddhist meditators report an underlying flickering in their perception of reality, momentary collections of mental phenomena. Sarvaastivaadins described 6,480,000 "moments" in 24 hours (75 conscious moments per second), and other Buddhist schools 50 per second. Meditating Tibetan Buddhist monks show highly coherent, high-amplitude gamma synchrony EEG in the range of 80 per second, twice the normal rate and more highly coherent. Samadhi is a Sanskrit word describing awareness in which sensory inputs, memory and self dissolve, a person’s consciousness becoming totally one with Supreme Unified Consciousness. Samadhi occurs during deep meditation. Scientifically, in altered states quantum brain activities may become more directly connected with the universal quantum geometry and its collective information. The Quantum Buddhism Association was founded in early 2007 and aims at providing a set of tools for developing a scientific-spiritual approach to the world, unburdened by traditional cultural, ritualistic and dogmatic weight, in which development of the self becomes a conscious scientific instrument.   PL 11  A new quantum gravitational model for consciousness based on geometric algebra   Javier Martin-Torres <fn.f.martin-torres@larc.nasa.gov> (Virtual Planetary Laboratory, AS&M, NASA, Hampton, VA)    A new mathematical model for quantum consciousness based on geometric algebra is presented, together with its results. Two of the basic pillars of the model are the use of: i) gravity as an Orch OR mechanism (Hameroff and Penrose, 1996) and ii) the collective electrodynamics approach developed by Carver Mead (Mead, 2000), in which electromagnetic effects, including quantized energy transfer, derive from the interactions of the wavefunctions of electrons behaving collectively.
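Pillar (i), Penrose objective reduction, comes with a quantitative criterion: a superposition self-collapses after roughly tau = hbar / E_G, where E_G is the gravitational self-energy of the displaced mass distribution. The back-of-envelope sketch below is editorial, not from the abstract: the crude E_G ~ G*m^2/d approximation and the mass/separation values are illustrative assumptions.

```python
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J*s

def or_time(m, d):
    """Penrose OR time tau = hbar / E_G, with E_G crudely taken as G*m^2/d
    for a mass m superposed across a separation d (order of magnitude only)."""
    E_G = G * m * m / d
    return hbar / E_G

# Illustrative inputs: a tubulin-sized mass (~1e-22 kg) superposed over 1 nm,
# versus a dust grain (1e-12 kg) superposed over 1 micron.
for m, d, label in [(1e-22, 1e-9, "tubulin-scale"), (1e-12, 1e-6, "dust grain")]:
    print(f"{label}: tau ~ {or_time(m, d):.1e} s")

# A single tiny mass gives an absurdly long OR time, which is why Orch OR
# invokes large numbers of entangled tubulins to bring tau down into the
# tens-of-milliseconds (gamma) range.
```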
Among other processes, a new mechanism for acousto-conformational transformation (ACT), by which microtubules (MTs) communicate with each other, and a decoherence upper limit are proposed. The model presented establishes a theoretical basis for one of the important (and not yet explained) points in Hameroff and Penrose’s work on quantum consciousness: why the global quantum superposition is the default state. An isomorphism between one-dimensional binary cellular automata and the Clifford algebra Cl(8), its applications to the modeling of consciousness, and the main implications of the proposed model will be discussed. References: Hameroff, S. and Penrose, R., Orchestrated Reduction of Quantum Coherence in Brain Microtubules: A Model for Consciousness?, in: Toward a Science of Consciousness - The First Tucson Discussions and Debates, eds. Hameroff, S.R., Kaszniak, A.W. and Scott, A.C., Cambridge, MA: MIT Press, pp. 507-540 (1996). Mead, C., Collective Electrodynamics: Quantum Foundations of Electromagnetism, The MIT Press, 1st edition (August 28, 2000).   PL 12  The Neuron: no longer the atom of neural computation  James Olds <jolds@gmu.edu> (Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA)    Subsequent to the 1906 shared Nobel Prize of Cajal and Golgi, the neuron doctrine has been accepted as dogma in the nascent field that became neuroscience. The approximate figure of 10^10 neurons in the human brain is often used to convey the immense complexity of the central nervous system, and entire sub-fields are based on the notion of the neuron as a computational machine, integrating massive inputs across the dendritic tree to reach a “decision” regarding whether or not to fire an action potential. Here we put forward the notion that neuroscience has now moved substantially beyond the neuron doctrine. Neurons themselves contain multiple hierarchical levels of internal computational machinery (e.g. the trans-Golgi network, spines, glutamate receptors, potassium channels), all of which can be said to contribute to the overall emergence of intelligent behavior and cognition. We propose that the true complexity of the human brain is far greater than has previously been accepted, and conclude that this requires a modification of the current reductionist approaches to neuroscience. Integrative neuroscience, combined with approaches that have been successful with other complex adaptive systems, may provide a fruitful scientific direction for the field.   PL 13  Minding Quanta and Cosmology  Karl Pribram <pribramk@gmail.com> (George Mason University, Fairfax, VA)    The revolution in science inaugurated by quantum physics made us aware of the role of observation in the construction of data. Eugene Wigner remarked that in quantum physics we no longer have observables (invariants) but only observations. Tongue in cheek, I asked whether that meant that quantum physics is really psychology, expecting a gruff reply to my sassiness. Instead, Wigner beamed a happy smile of understanding and replied “yes, yes, that’s exactly correct.” David Bohm pointed out that, were we to look at the cosmos without the lenses of our telescopes, we would see a hologram. I have extended Bohm’s insight to the lens in the optics of the eye. The receptor processes of the ear and skin work in a similar fashion. Without these lenses and lens-like operations all of our perceptions would be entangled, as in a hologram.
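Pribram's lens metaphor has a standard toy rendering: store a scene in fully distributed (Fourier) form, then let a "lens" invert the transform. The sketch below is an editorial illustration of that single idea; the image, sizes and the 10% coefficient mask are all invented here.

```python
import numpy as np

# Toy lens metaphor: the "hologram" is the 2-D Fourier transform of an image,
# in which every coefficient mixes contributions from every pixel, and the
# "lens" is the inverse transform that unmixes them back into a scene.
rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[20:28, 30:50] = 1.0            # a simple bar as the "scene"

hologram = np.fft.fft2(image)        # fully distributed representation
recovered = np.fft.ifft2(hologram).real
print("max reconstruction error:", np.abs(recovered - image).max())

# Distributedness: keeping only a random 10% of the coefficients still yields
# a degraded but correlated scene -- the hallmark of holographic storage.
mask = rng.random(hologram.shape) < 0.10
degraded = np.fft.ifft2(hologram * mask).real
print("correlation with original:",
      np.corrcoef(degraded.ravel(), image.ravel())[0, 1])
```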
Furthermore, the retina absorbs quanta of radiation, so that quantum processes shape the very perceptions through which quantum physics is itself studied. In turn, the higher-order systems send signals to the sensory receptors, so that what we perceive is often as much a result of earlier as of immediate experience. This influence from “inside-out” becomes especially relevant to our interpretation of how we experience the contents and bounds of cosmology that come to us by way of radiation.   PL 14  Quantum jumps and explanatory gaps  Paavo Pylkkänen <paavo.pylkkanen@his.se> (Consciousness Studies Programme, University of Skövde, Skövde, Sweden)    One reason why researchers ignore quantum theory in the explanation of consciousness is the mysterious nature of the theory itself. If we cannot make sense of the paradoxical features of quantum theory (e.g. wave-particle duality, discontinuity of motion, non-locality, collapse of the wave function), how could we possibly hope that this theory will be of any help when trying to understand another mysterious phenomenon, namely consciousness? We thus first need a coherent interpretation of quantum theory which resolves the various paradoxes and provides us with an intelligible view of quantum phenomena. Equipped with such a view, we can then explore whether the place of mind in nature could be understood in a new, better way. If you like, we first need to close the explanatory gap in quantum theory before we can use this theory to tackle the better-known explanatory gap between matter and consciousness. In this talk I will discuss some philosophical problems of mind and consciousness in the light of Bohm’s interpretation of quantum theory, which includes new notions such as implicate order and active information. This interpretation is arguably one of the best candidates for a coherent interpretation of quantum theory, although debate about these issues is ongoing. Of course, the crucial question for any attempt to make use of quantum theoretical ideas in this context is whether there are aspects of mind and consciousness that cannot be adequately explained and understood in terms of “classical” explanatory frameworks, i.e. neural and/or computational frameworks which do not make any significant appeal to quantum theory or to the New Physics more generally. There are, in fact, many aspects of mind/consciousness which pose a mystery to “classical” frameworks but might be better understood in “quantum” frameworks. There is the problem of mental causation: if mental states are non-physical, how could they possibly affect physical processes without violating the laws of physics? If we assume that mental states are physical, it becomes easier to understand their causal effect upon physical processes. But there are serious problems in conceiving of mental states (especially conscious states) as physical states, if “physical” is understood in the spirit of classical physics. There are also paradoxical aspects to the phenomenal structure of conscious experience, for example “time consciousness”, at least when one understands time in the spirit of classical physics. My proposal is that quantum theory, especially under its Bohmian interpretation, changes our key concepts (such as “physical”, “causation”, “time”, “space”, “process”, “movement”, “information”, “order”) in such a way as to open up a new and better way of understanding features such as mental causation and time consciousness.
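What the Bohmian interpretation adds can be shown concretely for the simplest case: a free Gaussian wave packet, whose guided particle trajectories are known in closed form, x(t) = x0 * sigma(t)/sigma0. The sketch below, in natural units with all parameter values invented, is an editorial aside rather than part of Pylkkänen's argument.

```python
import numpy as np

# Bohmian trajectories for a free Gaussian packet (standard closed-form result):
#   sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0**2))**2)
#   x(t)     = x0 * sigma(t) / sigma0
# Trajectories never cross; particles simply ride the spreading |psi|^2.
hbar   = 1.0      # natural units -- all values illustrative
m      = 1.0
sigma0 = 1.0

def width(t):
    return sigma0 * np.sqrt(1.0 + (hbar * t / (2 * m * sigma0**2))**2)

t = np.linspace(0.0, 10.0, 6)
for x0 in (-2.0, -1.0, 1.0, 2.0):          # initial positions within the packet
    xs = x0 * width(t) / sigma0
    print(f"x0 = {x0:+.1f}:", np.round(xs, 2))
```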
Such changes in our fundamental concepts also make it possible to tackle the hard problem of consciousness in a fresh way. References: Bohm, D. & Hiley, B.J. (1993) The Undivided Universe: An Ontological Interpretation of Quantum Theory. London: Routledge. Hiley, B.J. & Pylkkänen, P. (2005) “Can Mind Affect Matter via Active Information?”, Mind & Matter 3(2): 7-27. Pylkkänen, P. (2007) Mind, Matter and the Implicate Order. Heidelberg: Springer.   PL 15  Objective evidence for consciousness and free will in the quantum experiment   Bruce Rosenblum, Fred Kuttner <brucero@ucsc.edu> (Physics, University of California, Santa Cruz, Santa Cruz, CA)    In the absence of objective, third-person evidence of conscious experience, i.e. “qualia,” one can logically deny the very existence of consciousness beyond its physical correlates. Consciousness has, in fact, been claimed to be no more than the behavior of a vast assembly of nerve cells and their associated molecules. However, since the origins of quantum physics in the 1920s, consciousness has been seen by some to intrude into the physical world in a manner other than by its physiological and neural correlates. In this view, objective evidence for a physically efficacious consciousness actually exists. The experimental facts, at least, are undisputed. We will illustrate what can be considered a physical manifestation of consciousness with a theory-neutral description of a quantum mechanical thought experiment that can be realized in practice. We will argue that the only escape from our conclusion is to deny one's ability to freely (or randomly) choose behavior. Moreover, such denial of "free will" must also involve a strange and unexplained connectivity between physical phenomena. Therefore the conclusion that consciousness itself, though yet unexplained, is physically efficacious is at least as modest a hypothesis as any other. This thesis is developed in our recent book, "Quantum Enigma: Physics Encounters Consciousness," Oxford University Press, 2006.   PL 16  Aspects of Cosmic Consciousness in the Non-material and Non-empirical Forms of Physical Reality   Lothar Schäfer <schafer@uark.edu> (Department of Chemistry and Biochemistry, University of Arkansas, Fayetteville, AR)    The quantum phenomena have shown that reality appears to us in two domains: one is open and empirical and forms the world of seemingly separated, material things. The other is hidden and non-empirical and consists of interconnected, non-material forms. The former is the realm of actuality; the latter, the realm of potentiality in physical reality. Discovering the realm of forms places contemporary physics at the center of powerful historical traditions of spirituality, in which non-material forms were considered primary reality and were connected with a Cosmic Consciousness out of which everything emanates. The lecture will describe some of the parallels and explore to what extent the quantum phenomena support the view that primary reality has aspects of mind. In the quantum structure of empirical systems, the non-material forms exist as empty states, called virtual by quantum chemists. The entire universe can be considered a quantum system. Its occupied states form the visible part of reality; its empty states, the non-empirical part. Everything that is visible is the actualization of some quantum states. Everything that is possible is deposited in virtual states.
Thus, the complex order in the biosphere does not emerge out of nothing and is not created by chance, as Darwinians claim, but emerges by the actualization of virtual states whose logical order already exists in the non-empirical part of reality before it is expressed in the empirical realm.   PL 17  Experiments in Retrocausation  Daniel Sheehan <dsheehan@sandiego.edu> (Physics, University of San Diego, San Diego, California)    The fundamental laws of physics are time-symmetric, equally admitting time-forward and time-reversed solutions. That the former are readily observed while the latter are not presents perhaps the starkest asymmetry in nature: the unidirectionality (one-way arrow) of time. Common notions of causation are tightly bound up with this asymmetry, as are the phenomena of consciousness. While causation has long been taken for granted, retrocausation (the future influencing the past) has not. Over the last few decades, however, this situation has changed, as theory has begun to admit this possibility more freely and experiments -- e.g., from orthodox quantum mechanics, physiology, and parapsychology -- have begun to provide quantitative evidence for retrocausal effects [1]. In this talk, seminal experiments purporting to show retrocausation will be reviewed, and an attempt will be made to put them into a general theoretical framework. From this, more decisive experiments should emerge. [1] "Frontiers of Time: Retrocausation -- Experiment and Theory," AIP Conference Proceedings, Vol. 863, D.P. Sheehan, editor (American Institute of Physics, Melville, NY, 2006).  PL 18  Whiteheadian Quantum Ontology: The emergence of participating conscious observers from an unconscious physical quantum universe  Henry Stapp <hpstapp@lbl.gov> (Theoretical Physics, Lawrence Berkeley National Laboratory, Berkeley, CA)    The inability of classical physical concepts to accommodate consciousness is noted, and is contrasted with the way that orthodox von Neumann-Heisenberg quantum theory beautifully does so. Close parallels between the detailed structure of ontologically construed relativistic quantum field theory and the ontology proposed by Alfred North Whitehead are noted, and the way that Whiteheadian philosophy accounts for the natural emergence of local pockets of participatory consciousness from a physical world initially devoid of consciousness is explained.  PL 19  Quantum Ideas and Biological Reality: the Warm Quantum Computer?   Marshall Stoneham <ucapams@ucl.ac.uk> (London Centre for Nanotechnology and Physics and Astronomy, University College London, London, United Kingdom)    Quantum ideas take many forms. The recognition that matter is quantised as atoms underpins the chemical industry. The recognition that charge is quantised as electrons lies at the core of microelectronics. But the several phenomena we identify as “quantum” are subtle, encompassing exclusion, tunnelling, limits to measurement, and entanglement. These ideas are less intuitive and less tangible at the macroscopic (human) scale. Yet, when our science approaches the nanoscale, there is no way to avoid quantum phenomena. Moreover, as ideas spread from the purely physical sciences to the biosciences, it appears that nature already exploits quantum behaviour even at ambient temperatures in unexpected ways, e.g., in vision and in olfaction. There are also credible ideas for condensed-matter processing of quantum information even at room temperature, and some are based on soft matter.
These proposals, and some experiments exploiting entanglement, rightly contradict the widely held view among physicists that quantum information processing is possible only at cryogenic temperatures. Yet it is far less clear that the brain exploits quantum entanglement. Any suggestion that similar entanglement-based mechanisms might operate in the brain still has to meet plenty of challenges, first as to the actual atomic-scale processes exploited, and secondly as to how a quantum computer might handle problems more like a brain than like an enhanced classical computer.  PL 20  Why is consciousness soluble in chloroform?  Luca Turin <lucaturin@mac.com> (Physics, University College London, London, England, UK)    It is now quite clear that the target of general anaesthetic gases is protein, and there is good evidence that neurotransmitter receptors are involved. Exactly which protein(s) anaesthetic gases act on, and by what mechanism, remains to be determined. I shall describe empirical and computational evidence in support of the idea that general anaesthetics act not allosterically, but by altering protein electron chemical potential. I shall discuss the relevance of this notion both to protein electronics and to redox regulatory mechanisms.   PL 21  Electrodynamic signaling by the dendritic cytoskeleton: towards an intracellular information processing model  Jack Tuszynski, Avner Priel, Horacio F. Cantiello <jtus@phys.ualberta.ca> (Physics, University of Alberta, Edmonton, Alberta, Canada)    A novel model for information processing in dendrites is proposed, based on electrodynamic signaling mediated by the cytoskeleton. Our working hypothesis is that the dendritic cytoskeleton, including both microtubules (MTs) and actin filaments, plays an active role in computations affecting neuronal function. These cytoskeletal elements are affected by, and in turn regulate, a key element of neuronal information processing, namely dendritic ion channel activity. We present a molecular dynamics description of the C-termini protruding from the surface of a MT that reveals the existence of several conformational states, which lead to collective dynamical properties of the neuronal cytoskeleton. Furthermore, these collective states of the C-termini on MTs have a significant effect on ionic condensation and ion cloud propagation, with physical similarities to those recently found in actin filaments and microtubules. We report recent experimental findings concerning both intrinsic and ionic conductivities of microfilaments and microtubules which strongly support our hypothesis of internal processing capabilities in neurons. Our ultimate objective is to provide an integrated view of these phenomena in a bottom-up scheme, demonstrating that ionic wave interaction and propagation along cytoskeletal structures impact channel function, and thus neuronal computational capabilities. Acknowledgements: This research was supported by NSERC, MITACS, PIMS, US Department of Defense, Technology Innovations, LLC and Oncovista, LLC.   PL 22  Dissipative many-body dynamics of the brain   Giuseppe Vitiello, Walter J.
Freeman <vitiello@sa.infn.it> (Department of Physics “E.R. Caianiello”, University of Salerno, Baronissi, Salerno, Italy; Department of Molecular and Cell Biology, University of California, Berkeley, CA 94720-3206, USA)    Imaging of scalp potentials and cortical surface potentials of animals and humans from high-density electrode arrays has demonstrated the dynamical formation of patterns of synchronized oscillations in neocortex in the beta and gamma ranges (12-80 Hz). They re-synchronize in frames at frame rates in the theta and alpha ranges (3-12 Hz) and extend over spatial domains covering much of the hemisphere in rabbits and cats, and over domains of linear size of about 19 cm in human cortex, with near-zero phase dispersion [1]. The agency of the collective neuronal activity is neither the electric field of the extracellular dendritic current nor the magnetic fields inside the dendritic shafts, which are much too weak, nor chemical diffusion, which is much too slow. By resorting to the dissipative quantum model of the brain [2], we describe [3] the field of activity of the immense number of synaptically interacting cortical neurons as the phenomenological manifestation of an underlying dissipative many-body dynamics, analogous to the dynamics responsible for the formation of ordered patterns and phase transitions in condensed matter physics as described by quantum field theory. We stress that neurons and other brain cells are by no means considered quantum objects in our analysis. The dissipative model explains two main features of the electroencephalogram data: the textured patterns correlated with categories of conditioned stimuli, i.e. the coexistence of physically distinct synchronized patterns, and their remarkably rapid onset into irreversible sequences resembling cinematographic frames. Each spatial pattern is described as consequent upon the spontaneous breakdown of symmetry triggered by an external stimulus and is associated with one of the unitarily inequivalent ground states. Their sequencing is associated with the non-unitary time evolution in the dissipative model. The dissipative model also explains the change of scale from the microscopic quantum dynamics to the macroscopic order-parameter field, and the classicality of trajectories in the brain state space. The dissipative quantum model enables an orderly description that includes all levels of the microscopic, mesoscopic, and macroscopic organization of the cerebral patterns. By repeated trial-and-error each brain constructs within itself an understanding of its surround, the knowledge of its own world that we describe as its Double [4]. The relations that the self and its surround construct by their interactions constitute the meanings of the flows of information exchanged during the interactions. [1] W. J. Freeman, Origin, structure, and role of background EEG activity. Parts 1 & 2, Clin. Neurophysiol. 115, 2077 & 2089 (2004); Part 3, 116, 1118 (2005); Part 4, 117, 572 (2006). [2] G. Vitiello, Dissipation and memory capacity in the quantum brain model, Int. J. Mod. Phys. B 9, 973 (1995), quant-ph/9502006. [3] W. J. Freeman and G. Vitiello, Nonlinear brain dynamics as macroscopic manifestation of underlying many-body dynamics, Phys. of Life Reviews 3, 93 (2006), q-bio.OT/0511037; Brain dynamics, dissipation and spontaneous breakdown of symmetry, q-bio.NC/0701053v1. [4] G. Vitiello, My Double Unveiled. Amsterdam: John Benjamins, 2001.
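The band vocabulary in PL 22 (beta/gamma at 12-80 Hz, theta/alpha at 3-12 Hz) is conventional signal processing. The sketch below separates the two bands from a synthetic trace; the sampling rate, component amplitudes and noise level are editorial choices with no connection to Freeman's data.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Separate the bands named in the abstract -- theta/alpha (3-12 Hz) and
# beta/gamma (12-80 Hz) -- from a synthetic "EEG" trace.
fs = 500.0                               # sampling rate, Hz (illustrative)
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(1)
eeg = (np.sin(2 * np.pi * 6 * t)         # theta/alpha-band component
       + 0.5 * np.sin(2 * np.pi * 40 * t)  # gamma-band component
       + 0.3 * rng.standard_normal(t.size))

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)             # zero-phase filtering

theta_alpha = bandpass(eeg, 3.0, 12.0)   # carrier of the "frame rate"
beta_gamma  = bandpass(eeg, 12.0, 80.0)  # carrier of the synchronized patterns

print("theta/alpha power:", np.mean(theta_alpha**2).round(3))
print("beta/gamma  power:", np.mean(beta_gamma**2).round(3))
```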
PL 23  Subcellular processing related to memory and consciousness by microtubules and MAP2  Nancy Woolf <nwoolf@ucla.edu> (Psychology, University of California, Los Angeles, CA)    Among the various parts of the neuron, dendrites are arguably the best candidates for being key to higher cognitive function because they alone integrate large numbers of inputs. The neuronal membrane is the initial site of response to inputs from other neurons, but what lies beneath the neuronal membrane controls the level of synaptic response by computing new inputs relative to information stored in memory. Dendrites are enriched with microtubules and microtubule-associated proteins (MAPs); yet we do not fully know the purpose of these proteins. Accumulating evidence suggests that microtubules and MAPs play critical roles in memory and consciousness, as well as in neuronal transport. Microtubule-associated protein-2 (MAP2) is a dendrite-specific cytoskeletal protein that also acts as a signal transduction molecule, mediating internal chemical responses following synaptic release of the neurotransmitters glutamate and acetylcholine. MAP2 and microtubules bind together to form a matrix that stores memory: as new memories form, proteolysis or breakdown of MAP2 and tubulin occurs, followed by a new subcellular architecture, structured as a modified microtubule matrix (Woolf, NJ, Progress in Neurobiology, 55:59-77, 1998). Information stored in the microtubule matrix is then accessed upon the release of certain neurotransmitters, such as acetylcholine and glutamate. Acetylcholine controls the level of consciousness mainly through its muscarinic receptor, resulting in downstream activation of the kinases PKC and CaMKII, both of which phosphorylate MAP2 and participate in memory. Phosphorylation of MAP2 affects its interaction with microtubules, leading to possible alterations in the protein conformation of tubulin subunits and subsequently in the ability of microtubules to transport receptors, cytoskeletal proteins, and mRNA to synapses. Because of their downstream activation by neurotransmitters, microtubules are in a position to compute current synaptic inputs in the context of previous synaptic activity, and then to increase transport of certain learning-related molecules to synapses. No synapse acting in isolation can bring about a mental state of consciousness: it is instead necessary to have co-activation of a large number of synapses for conscious activity to arise. En masse transport of essential synaptic proteins by microtubules is needed to sustain enhanced synaptic activity, and it is possible that quantum-level computations play a role in directing coherent transport both locally and non-locally. We have previously proposed that acetylcholine facilitates quantum computations in microtubules by phosphorylating MAP2 (Woolf NJ & Hameroff SR, Trends in Cognitive Science, 5:472-8, 2001). In this presentation, I propose that the pattern of MAP2 binding to the microtubule forms a gel-based contour which represents information stored by the learning mechanism and provides a physical basis for realizing that stored information (Woolf, NJ, Journal of Molecular Neuroscience, 30:219-22, 2006). When MAP2 is phosphorylated, this gel-based contour expands along a given microtubule and affects the propagation of information longitudinally down the microtubule; tangentially, the contour affects the state of neighboring microtubules.
In these two ways, physically activated microtubules transmit a particular pattern related to a barrage of current inputs, in the context of information stored in memory, resulting in a coherent response spanning multiple synapses.   PL 24  The Truth-Observable: A link between logic and the unconscious  Paola Zizzi <zizzi@math.unipd.it> (Mathematics, University of Padova, Padova, Italy)    In quantum mechanics, an external measurement of the physical state of a closed quantum system is described mathematically in terms of quantum operators, by which one defines physical observables satisfying the completeness relation: summing the observables yields the identity. The logical meaning of the completeness relation is that the logical truth splits into partial truths, each of them corresponding to an act of measurement from outside. This is due to the physical fact that any external measurement is an irreversible process, which destroys quantum superposition. An external observer can therefore grasp only fragments of an inner, global truth. Only an internal observer would be able to achieve the global truth at once, as a whole, by making an internal measurement [1], since inside the closed quantum system he can perform only reversible transformations, described by unitary operators U. The uniqueness and unitarity of such measurement operators allow one to define a unique quantum observable that is just the identity: the truth-observable [2]. Notice that in quantum computing [3], U is a quantum logic gate; in this case, then, an internal measurement corresponds to a quantum computational process. In the theory of a quantum-computing mind [4], we believe that there exists a deepest unconscious state that cannot be known directly from outside. We argue that it is the deep unconscious which can achieve the "truth" as a whole; the conscious mind can grasp only partial "truths". Quantum information is processed by the unconscious and then made available to our conscious mind as classical information. As a quantum computer is (due to quantum parallelism) much faster than its classical counterpart, the task done by the unconscious is fundamental in preparing our classical reasoning. The unconscious, endowed with global knowledge (the truth-observable), is rich enough to originate creativity. Global knowledge and creativity together are what enable us to use metalanguage, which makes us so different from (classical) computers, imprisoned in their object language. But the truth-observable might also be placed at the heart of the logical study of the most severe mental diseases (like schizophrenia), which are very hard to treat psychoanalytically. On the other hand, less deep unconscious states (the pre-conscious) are psychoanalytically interpretable from outside. For example, subjective experiences, which cannot be directly communicated (but only interpreted), should be included in the pre-conscious, not in consciousness. In fact, shared knowledge (Latin cum-scio, from which the English "consciousness" derives) is impossible without communication. References [1] P. Zizzi, "Qubits and Quantum Spaces", International Journal of Quantum Information 3(1) (2005): 287-291. [2] P. Zizzi, "Theoretical setting of inner reversible quantum measurements", Mod. Phys. Lett. A 21(36) (2006): 2717-2727. [3] M. A. Nielsen, I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press (2000). [4] S. Hameroff, R.
Penrose, “Orchestrated reduction of quantum coherence in brain microtubules: a model for consciousness”, in: Toward a Science of Consciousness: The First Tucson Discussions and Debates, eds. S. Hameroff, A. Kaszniak, and A. Scott. MIT Press, Cambridge, MA (1996).   PL 25  Moiré wave patterns as the brain’s own language   Alexey Alyushin <aturo@mail.ru> (Philosophical Faculty, Moscow Lomonosov State University, Moscow, Russia)    My hypothesis is that the brain’s own language is the dynamical geometry of bioelectrical wave patterns of moiré origin. The moiré effect is produced by superposing two or more periodic structures, such as solid or graphical lattices or oscillatory wave sets, setting them in motion relative to each other, and obtaining an emergent (alias) structure out of this moving superposition. There are a number of regular wave oscillations in the brain, comprising the whole set of wave bands. Brain oscillations correspond to sequences of frames, transient constellations of neurons synchronized in firing though spatially dispersed (F. Varela). Given the existence of several oscillatory wave structures and the corresponding flows of frames in the brain, it is natural to suggest that multiple overlays of rhythmical oscillations or frame flows should produce moiré patterns within their entire manifold. The question is what the function of these patterns might be. I suggest that moiré patterns are far from being the distortive noise within a system, as they are commonly treated in TV and photographic imaging; nor are they just empty by-products of some master process within the brain. They are themselves the driving gears of the brain’s working, the meaning-containing and meaning-processing units. The function of the lower-order brain oscillations is to bring about and keep active the higher-order moiré patterns. The most important thing about moiré patterns is that they are emergent structures with respect to the oscillatory patterns that underlie them. They are emergent in the sense that their structure is not contained in either of the underlying patterns; they are entities in themselves, although with the change or fading of the underlying oscillatory patterns the emergent pattern also changes or vanishes. I go further and suggest that the emergent moiré pattern might steer the underlying oscillations for the sake of its own self-sustenance. It may well be that at the early stages of brain evolution only the lower-order oscillations were present in primitive brains, providing basic perceptual data processing. But as the brain developed into a more complex unit and proceeded to generate and serve the higher mental functions, the formerly derivative and rudimentary moiré phenomena unveiled their abilities and acquired master control. Enduring and self-sustained wave formations of moiré origin in the brain are good candidates to be considered the neural correlates of cognitive and mental structures, including consciousness. If we compare the moiré model with the holographic model of the brain (K. Pribram and others), the former has the advantage of introducing dynamics. The holographic model is mostly static, dealing with the distribution of wave interferences in space, whereas the moiré model stresses the temporal aspect of the interaction of wave structures. As a matter of fact, it also deals with interferences, but in their temporal dynamics.
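The moiré construction itself is easy to reproduce. The minimal sketch below (an editorial illustration; grid size, grating period and rotation angle are arbitrary choices) superposes two binary gratings, one slightly rotated, and prints the emergent alias bands:

```python
import numpy as np

# Moiré from superposed periodic structures: two identical line gratings,
# one rotated by a small angle, yield coarse emergent bands ("alias"
# structure) present in neither grating alone.
n = 60
y, x = np.mgrid[0:n, 0:n].astype(float)
theta = np.deg2rad(10.0)                  # small relative rotation
xr = x * np.cos(theta) + y * np.sin(theta)

g1 = np.sin(2 * np.pi * x / 4.0) > 0      # grating with a 4-pixel period
g2 = np.sin(2 * np.pi * xr / 4.0) > 0     # the same grating, rotated
moire = g1 ^ g2                           # superposition of the two lattices

for row in moire[::2, ::2]:               # coarse ASCII rendering of the bands
    print("".join("#" if v else "." for v in row))
```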
Therefore, the holographic model and the moiré model could productively accompany each other. (Some visual moiré patterns will be generated and demonstrated during the presentation by means of computer simulation.)   C 26  What could possibly count as a physical explanation of consciousness? The view from the inside and the Bekenstein bound   Uzi Awret <uawret@cox.net> (Falls Church, VA)    In 1992, in the Times Literary Supplement, Jerry Fodor laments: “Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness.” Twenty years later, in an article destined for the Encyclopedia of Cognitive Science, Ned Block claims that: “There are two reasons for thinking that the Hard Problem has no solution. 1. Actual Failure. In fact, no one has been able to think of even a highly speculative answer. 2. Principled Failure. The materials we have available seem ill suited to providing an answer. As Nagel says, an answer to this question would seem to require an objective account that necessarily leaves out the subjectivity of what it is trying to explain. We don’t even know what would count as such an explanation.” The purpose of this paper is to respond to Fodor and Block’s challenge by producing a highly speculative physical theory that can count as a possible physical explanation of consciousness. The biggest problem in attempting to conceive of a physical explanation of consciousness is not the irreducible need to sweep certain difficult issues under the carpet. That is true to some degree of any physical explanation. The problem is to conceive of the carpet. The approach taken by this paper will be to: 1) Establish the possible existence of physical singularities in the brain, assumed to be created by informational self-interaction and informational self-collapse, by taking advantage of the shifting and vague line of demarcation separating physical interaction and information-theoretic communication. 2) Adopt John Wheeler and Bryce DeWitt’s ‘black hole bounce’, which allows for the possibility of a whole new universe in the singularity at the center of certain black holes. This provides us with a ‘view from the inside’ that is completely inaccessible from an ‘outside’ that has no room for it. 3) Subject questions about the nature of that space, especially the possibility of a phenomenal nature, to a radical suspension. A radical suspension is not a temporary suspension employed for tactical reasons but a more permanent suspension of the type that physicists or mathematicians adopt in the exploration of singularities. 4) Use our knowledge of neural architecture and the physics of brains to establish the conditions that would enable the emergence of such singularities, based on 1). For example, if some brain region with a volume of one cubic centimeter were made to contain more than 10^60 bits of information, it would have to be a singularity because of the Bekenstein bound. 5) Conceive of an experiment capable of verifying 4) in real brains, and establish the existence of such singularities as a minimal NCC (neural correlate of consciousness).
This paper claims that if 1) through 5) are satisfied then it is possible to furnish at least one possible physical explanation of consciousness, despite the radical suspension imposed by 3), precisely because singularities can be explored from the outside, in the same way that physics can determine the Chandrasekhar limit and the Schwarzschild radius of black holes from the outside. This approach is compatible with Kant’s transcendental epistemology, which seeks to determine the scope and limits of knowledge from the inside (see Janik and Toulmin’s Wittgenstein’s Vienna). A mature science is one which explores its own limitations. Instead of attempting to establish the general conditions of possibility that would have to be satisfied in order to produce a scientific explanation of consciousness, the paper will end with a putative token singularity-based physical theory of consciousness that is capable of satisfying 1) to 5).   C 27  Identifying the Interaction between the Quantum and Classical World as the Blueprint for Conscious Activity in Cognitive Vision Systems   Wolfgang Baer <baer@nps.edu> (Information Sciences, Naval Postgraduate School, Monterey, California)    I present a physically viable mind/body model based upon Whitehead’s assumption that events called “actual occasions” are conscious and fundamental building blocks of the universe. This building block is a process connecting first-person experience with its explanation and is independent of any belief system defining reality for an individual. I will select quantum theory as a physically viable reality belief and will show that in this case consciousness is identified within its measurement and state-preparation cycle. I generalize this result by identifying the architecture of the interaction between the quantum and classical worlds as the blueprint for conscious activity. According to this theory, consciousness itself can be modeled by the cycle of activity required to transform a description of experience into a description of the physical reality causing the experience, in whatever model of reality we choose to believe. It is not the specific model of physical reality but rather the activity of reading from and writing into the model that captures the essence of consciousness phenomena, and such activities can be found in all systems, from microscopic to cosmological scales. As a practical application, I will then identify the conscious process in cognitive vision systems being developed to support Unmanned Aerial Vehicle operations at the Naval Postgraduate School in Monterey, CA. By recognizing the conscious process executed by man-in-the-loop systems and identifying the cognitive algorithms being executed, we can automate the process by systematically transferring operations from human to machine. I will conclude by presenting the results of target mensuration and vision understanding experiments utilizing sensor-report-to-database explanation transforms that implement Whitehead’s actual occasions.   C 28  Characteristics of Consciousness in Collapse-Type Quantum Mind Theories  Imants Baruss <baruss@uwo.ca> (Psychology, King's University College, London, Ontario, Canada)    Whereas considerable effort has been expended to develop the technical aspects of quantum mind theories, little attention has been paid to what the nature of consciousness must be for such theories to be true.
The purpose of this paper is to rectify that imbalance by looking at some of the apparent characteristics of consciousness in some of the theories in which consciousness is said to collapse the state vector (see Baruss, in press, for a review of such theories), on the understanding that decoherence cannot entirely solve the measurement problem (Adler, 2003). Three characteristics become immediately apparent. The first is a volitional aspect of the mind that needs to be distinguished from awareness or observation (Baruss, 1986; Walker, 2000). Some insights about this notion of will can also be gleaned from evidence outside the quantum mind context that intention can affect physical systems (e.g., Jahn & Dunne, 2005). The second characteristic is the stratification of consciousness, such that the experiential stream that goes on privately for a given person needs to be distinguished from a universal deep consciousness, somewhat akin to David Bohm’s implicate order (Bohm & Hiley, 1993), that might underlie ordinary consciousness. Thus, the question arises for quantum mind theories of the relative contributions of deliberately intentional acts that occur within one’s experiential stream (cf. Stapp, 2004, 2005) and nonconscious coordinated intentions implicit in deep consciousness (cf. Goswami, 1993, 2003; Walker, 1970, 2000). Support for introducing such stratification also comes from models of anomalous human-machine interactions, such as the M5 theory of Robert Jahn and Brenda Dunne (2001), as well as from reports of apparently direct participation in such deep consciousness (e.g., Baruss, 2003; Merrell-Wolff, 1994, 1995). Third, in transferring the notion of the collapse of the state vector from the context of observation in experimental physics to the manifestation of everyday life, the temporally discrete nature of such collapse is usually retained, so that ordinary waking-state consciousness would actually be discontinuous. This suggests the possibility of a flickering universe (cf. Matthews, 2000) whereby physical reality, including its spatial features, arises from a pre-physical substrate, perhaps at the rate of once per Planck time. This idea is consistent with efforts to liberate quantum theory from classical restrictions (e.g., Durr, 2005; Aerts & Aerts, 2005; Mukhopadhyay, 2006) and with speculations about Planck-scale physics (cf. Ng, 2003; Ng & van Dam, 2005). Although these particularly need to be judged critically, there are also some reports of the direct apperception of the discontinuous arising of physical reality from a pre-physical substrate in altered states of consciousness (e.g., Wren-Lewis, 1988, 1994). A volitional aspect of mind, the stratification of consciousness, and the discontinuity of the ordinary waking state are some of the characteristics of consciousness implicit in some collapse-type quantum mind theories.  C 29  A four-dimensional hologram called consciousness  James Beichler <jebco1st@aol.com> (Physics, Division of Natural Science and Mathematics, West Virginia University at Parkersburg, Belpre, Ohio)    The reality of a fourth spatial dimension is now being established in science. The fourth dimension of space is magnetic in nature and thus offers a suitable medium for the storage of memories in mind and consciousness. Consciousness also emerges as a holographic magnetic potential pattern in the fourth dimension. When the passage of time is added to the picture, consciousness becomes a holomovement in five-dimensional space-time.
The magnetic potential pattern is induced in the higher dimension by the electrical activity of microtubules (MTs). Each MT is an individual quantum magnetic inductor. When successive MTs inside an axon ‘fire’ in sequence, they induce a unique and complex magnetic potential pattern in the higher-dimensional extension of the three-dimensional material brain. This pattern of magnetic potential in the higher-dimensional field constitutes holographically stored memories that can be retrieved by the brain through a reverse process. The vast complexity of the different stored memory patterns constitutes the consciousness of an individual. On the other hand, MTs within different neurons, neuron bundles and neural nets also act coherently to form individual thoughts and streams of thought within the brain. Coherence is established as the inductor-MTs in individual neurons act in concert with axon-wall capacitors to form a complex of microscopic LRC (tuning) circuits. Each MT-axon wall circuit resonates with similar MTs in a complex pattern of neurons, thus establishing and maintaining coherence within the brain.   C 30  Disambiguation in conscious cavities  James Beran <jimberan@earthlink.net> (Richmond, Virginia)    Using information-based causal principles to work back from our conscious experience, we can develop models of how consciousness might be produced. This paper discusses one such model that can be tied to features found in cerebral cortex and possibly also in other parts of the brain. In this model, neural signals carrying ambiguous sensory information are received at an input level of a multi-level structure and, in response, output neural signals, which can be thought of as disambiguated results, are provided at an output level of the structure; between or around the input and output levels is a region in which neural signals interact with conscious information to disambiguate the sensory information and obtain the results. This combination of features can be modeled as a cavity, by rough analogy to certain optical cavities. Disambiguation has mathematical similarities to the separation or collapse of an entangled system (referred to herein as "disentanglement") [1], and these similarities suggest that the disambiguating interactions could include disentanglement events that affect disambiguated results. This paper compares disentanglement effects with other mechanisms that could plausibly affect disambiguation in such a cavity, such as action potentials traveling along lateral axons or electromagnetic effects resulting from action potentials. One point of comparison is whether each type of interaction is consistent with known features of cerebral cortex and other parts of the brain. Another is whether evolution could and did produce neural structures in which conscious information could have each type of interaction; this paper therefore examines mutations that might have enabled DNA to produce such neural structures. Even though we may not find a sharp evolutionary divide between our non-conscious and conscious ancestors, the emergence of such neural structures would suggest when earlier forms of consciousness emerged. [1] Bohm, D. and Hiley, B.J., The Undivided Universe, 1993.
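C 30's central analogy, disambiguation behaving like the collapse of a superposed state, can at least be caricatured numerically. In the toy sketch below (entirely editorial; the amplitudes are arbitrary), an ambiguous percept is a two-component superposition and each "presentation" resolves it with Born-rule statistics:

```python
import numpy as np

# Toy "disambiguation as collapse": an ambiguous sensory state is a
# superposition of two interpretations; each collapse-like event selects one
# with Born-rule probabilities. A cartoon of the idea, not a brain model.
rng = np.random.default_rng(7)

amp = np.array([0.8, 0.6])            # amplitudes for interpretations A and B
amp = amp / np.linalg.norm(amp)       # normalise the superposition
p = amp**2                            # Born probabilities

counts = {"A": 0, "B": 0}
for _ in range(10_000):               # repeated presentations of the ambiguity
    outcome = rng.choice(["A", "B"], p=p)
    counts[outcome] += 1

print("predicted:", dict(zip("AB", np.round(p, 3))))
print("observed :", counts)
```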
C 31  A General Quantum-Gravitational Scaling Strategy Connecting Different-Dimensional Fluxes   Bernd Binder <binder@quanics.com> (Quanics.com, Salem, Germany)    The paper will present a unique view of the scaling of different-dimensional quantum fluxes and wave functions, which allows one to understand and predict the geometric structure and dynamics of (neuronal) networks able to interact via local and non-local quantum-gravitational processes. It is nowadays commonly agreed that the weakness of gravity can in general be assigned to extra dimensions (the holographic principle). Further, it can be argued that an extra-dimensional interface can provide the necessary coherence and stability (cooling) for lower-dimensional topologies and structures in a thermodynamic sense. To connect, adjust, or transform different-dimensional flux topologies, it will be shown that it is the intrinsic unit scale (and not the semi-classical Planck scale) that can build the reference bridge between the scaling laws of different fields. Defining the quantum-gravitational fields carrying this intrinsic unit-scale dynamics therefore ensures that any power-law scaling, with or without extra dimensions, will intersect at this scale (since any power of 1 is 1). In this manner it can be shown that different-dimensional interaction fluxes follow a general spatio-temporal scaling scheme on all scales, which can be found on the cosmic scale as Kepler’s third law and on the quantum scale as Compton’s law. The necessary transformations of the general spatio-temporal scaling scheme can be quantified on purely geometric grounds, where the relevant physical properties are the signal dynamics given by the spatio-temporal metric, adjusted to the proper number and mass scaling encoding a closed holographic system. Finally, it will be shown that living things, brains, cells, and molecular clusters in the mid-scale are well designed to focus, transform, and project weak extra-dimensional and non-local gravitational fluxes onto strong low-dimensional currents in (neuronal) network channels, pumping, driving, and triggering local electromagnetic processes.  C 32  Combining prototypes: quantal macrostates and entanglement  Reinhard Blutner <blutner@uva.nl> (ILLC, University of Amsterdam, Amsterdam, The Netherlands)    Classical truth-functional semantics, and almost all of its modifications, has a serious problem in treating prototypes and their combination. Though some modelling variants can fit many of the puzzling empirical observations, their explanatory value is seldom noteworthy. I will argue that this explanatory inadequacy is due to the Boolean character of the underlying semantics, which only allows the mixing of possible worlds but excludes the idea of superposition crucial for geometrical models of meaning. In the main part, I will present a quantal model of combining prototypes. The model elaborates a recent proposal by Aerts & Gabora (2005) and systematically explores an orthoalgebraic approach to propositions as subspaces of an underlying Hilbert space. The quantum model is a minimalist variant of a classical possible-worlds approach and rests on four general assumptions: (1) concepts are superpositions of linearly independent base states that correspond to possible worlds; (2) typicality is represented by quantum probabilities; (3) combinations of concepts are calculated as tensor products; (4) there is a diagonalization operation involved, which leads to states that entangle the prototypical properties of the involved concepts.
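Assumptions (1)-(4) can be given a toy numerical form. In the sketch below, everything is an editorial illustration: the three "worlds", the amplitudes, and in particular my reading of the diagonalization step as a projection onto matched-world components of the tensor product.

```python
import numpy as np

# Toy of assumptions (1)-(4): concepts as unit vectors over three possible
# worlds (aquarium, house, sea), typicality = squared amplitude, combination
# = tensor product, "diagonalization" read here as projection onto
# matched-world states. All numbers and that reading are mine, not Blutner's.
worlds = ["aquarium", "house", "sea"]
pet  = np.array([0.30, 0.954, 0.0])   # pets live mostly in houses
fish = np.array([0.30, 0.0, 0.954])   # fish live mostly in the sea
pet, fish = pet / np.linalg.norm(pet), fish / np.linalg.norm(fish)

combo = np.kron(pet, fish)                           # 9 amplitudes over world pairs
diag = np.array([combo[i * 3 + i] for i in range(3)])  # matched-world components
diag /= np.linalg.norm(diag)                         # renormalise after projection

for w, a, b, c in zip(worlds, pet**2, fish**2, diag**2):
    print(f"{w:8s}  PET {a:.2f}  FISH {b:.2f}  PET-FISH {c:.2f}")

# 'aquarium' is atypical of PET and of FISH separately, yet dominates the
# combination -- the goldfish/pet-fish conjunction effect in miniature.
```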
I demonstrate that the model can predict the basic findings on combined prototypes without further stipulations. Firstly, this concerns the existence of the “conjunction effect of typicality” (goldfish is a poorish example of a fish, and a poorish example of a pet, but it's quite a good example of a pet fish) and secondly the strength of this effect (in the case of “incompatible conjunctions” such as pet fish or brown apple, the conjunction effect is greater than in “compatible conjunctions” such as red apple). In the final part, I will reflect on the philosophical background and look for possible generalizations. In agreement with Aerts & Gabora (e.g. 2005), Chalmers (1995), and beim Graben & Atmanspacher (2006), I suppose that the emergence of quantal macrostates does not necessarily require reference to corresponding quantal microstates. Instead, complementary observables (traditionally restricted to quantum systems) can arise in classical systems as well. Crucial here is the concept of generating partitions in the theory of nonlinear dynamical systems: a partition is generating if it divides the state space into regions prescribed by the dynamics of the system, thus permitting the definition of states that are stable under the dynamics. Complementary observables can arise in classical systems whenever the partitioning of the corresponding state space is not generating (beim Graben & Atmanspacher, 2006). The composition of classical systems with generating partitions can lead to a complex system with quantal characteristics. That is true for conjoined prototypes, and it’s perhaps also true for semantic systems that combine the effects of contexts and possible worlds (see Kaplan’s (1979) two-dimensional semantics of demonstratives). Interestingly, diagonalization is admitted in this case too, whereas certain other operations (“monsters”) are forbidden. Quantum theory can explain the admissible operations and constraints through the unitary character of quantal evolution.   C 33  Toward a new subquantum integration approach to sentient reality   Robert Boyd, Dr. Adrian Klein, MDD <rnboyd@iqonline.net> (Princeton Biotechnologies, Inc., Knoxville, TN)    Recent experimental results have proved intractable to explanation by existing physics paradigms. This fact, along with certain fallacies inherent in mainstream physical-cognitive theories of mind, has encouraged the authors of this paper to transcend the currently operative limits of investigation and explore the abyssal depth of the still uncharted, but highly rewarding, SubQuantum regimes. The subquantum is herein assumed to co-existentially accommodate proto-units of matter, energy and Information, which are thereby brought onto an equal ontological footing in the subquantum domains. Devolving its argumentation and orientation from the Nobel Prize-winning Fractional Quantum Hall Effect, which opened the perspective toward a further divisibility of the Quantum domain, hitherto considered an irreducibly fundamental description of nature, the proposed inter-theoretic model claims to satisfy advanced scientific and philosophical requirements, as reformulated for a conceptually new working hypothesis.
Subquantum potentials evolving in the Prime Radiation Matrix result in organizing functions able to interfere with classical local determinacy chains, operating at the Quantum levels of randomness inherent in space-time-like matter configurations and leading to highly complex representational patterns, linked to their phenomenal correlates in macroscopically detectable systems. Our model is strongly rooted in overwhelming experimental evidence derived from multidisciplinary contexts. Our basic understanding identifies the Quantum Potential as a superluminal SubQuantum Information-carrying aether able to interact with matter and physical forces at well-defined space-time positions, injecting their Information content into our world of observables by modulating the event potential. This interaction is possible as soon as matter is defined by an n-degree entanglement state of SQ complexity. Absolute void refers to a lack of matter, which equates to a space-time sequence containing Information in its nascent, non-aggregative form (the SubQuantum plenum) as observed from our Space-Time perspective. It contains implicated layers of increasingly subtle pre-quantum domains, where each manifestation range may be organized into complete worlds, such as our own, each of them extending to its own "absolute void", the transition state to the next implication level of reality. Pre-quantum tenets rely upon experimentally testable assessments. Our proposal opens unprecedented explanatory options for anomalous output data distributions in non-conventional exploration fields, whose statistically significant results become logically integrated into epistemologically sustainable blueprints. Our views are perfectly consistent both with conventional empirical treatment of space-time-defying representational variables and with their causal primacy over Quantum implementation systems of their content, in the integral range of their polyvalent manifestation. Detailed descriptions of mind/matter entanglement patterns are supplied, as running in the holistic superimplicative sentient reality domains, under the overarching regulation of Cosmic Harmony, underpinning a continuous-creation cosmogenetic process. As our analysis addresses a pre-temporal range, the thus-defined endless time vector allows ab initio existing inherent resonance links in any SQ subtlety domain to turn into fluxes and organization effects leading to sequential entelechial self-contained worlds. These primeval harmonic SQ resonances are the very pattern of the overarching cosmic harmony just mentioned, the source of all conceivable manifestation and interconnectedness.  C 34  The Big Condensation, Not the Big Bang  R.W. Boyer <rw.boyer@yahoo.com> (Girne American University, Girne, Northern Cyprus; Fairfield, IA)    According to the consensus cosmological theory of the inflationary ‘Big Bang,’ the universe originated, presumably instantaneously from nothing, as an inherently dynamic, randomly fluctuating, quantum particle-force field that eventually congealed into stars, planets, and organisms such as humans, complex enough to generate consciousness. This fragmented, reductive materialistic view is associated with a bottom-up matter-mind-consciousness ontology, in which the whole is created from combining the parts. In this view, consciousness is an emergent property of random bits of energy/matter that somehow bind into unitary biological organisms mysteriously developing control over their parts.
On the other hand, the holistic perspective in Vedic science is a top-down consciousness-mind-matter ontology, in which the parts manifest from the whole. In that perspective, the origin of the universe is better characterized as the ‘Big Condensation’ rather than the ‘Big Bang.’ Phenomenal existence remains within the unified field and manifests, limits itself, or condenses into subjective mind and objective matter. The holistic perspective of ultimate unity and its sequential unfoldment is contained in the structure of Rik Veda [1]. Vedanta is from the experiential perspective of unity, and the sequential unfoldment of phenomenal levels of nature within unity is articulated, for example, in Sankhya and Ayurveda. The holistic perspective is more consistent with developing understanding in unified field theories, spontaneous symmetry breaking, quantum decoherence, the ‘arrow of time,’ and the 2nd law of thermodynamics, which imply the universe originated from a lowest-entropy, super-symmetric, even perfectly orderly, super-unified state. The holistic perspective in Vedic science provides means for resolving fundamental paradoxes in the reductive, materialistic, bottom-up ontology, including the ‘hard problem’ of consciousness, order emerging from fundamental random disorder, life emerging from non-life, free will, and everything emerging from nothing [2].   C 35  Examining the Effect of Physiological Temperature on the Dynamics of Microtubules  Travis Craddock, Jack A. Tuszynski <tcraddoc@phys.ualberta.ca> (Physics, University of Alberta, Edmonton, Alberta, Canada)    The leading objection against theories implicating quantum processes taking place within neuronal microtubules states that the interactions of a microtubule system with an environment at physiological temperature would cause any quantum states within the system to decohere, thus destroying quantum effects. Counter-arguments state that physiologically relevant temperatures may enhance quantum processes, and that isolation of microtubules by biological mechanisms, such as actin gel states or layers of ordered water, could protect fragile quantum states, but to date no conclusive studies have been performed. As such, working quantum-based models of microtubules are required. Two quantum-based models are suggested and used to investigate the effect of temperature on microtubule dynamics. First, to investigate the possibility of quantum processes in relation to information processing in microtubules, a computer microtubule model inspired by the cellular automata models of Smith, Hameroff and Watt, and of Hameroff, Rasmussen and Mansson is used. The model uses a typical microtubule configuration of 13 protofilaments with its constituent tubulin proteins packed into a seven-member neighbourhood in a tilted hexagon configuration known as an A-lattice. The interior of the tubulin protein is taken to contain two regions of positive charge separated by a barrier of negative charge, based on electrostatic maps of the protein interior. This interior arrangement constitutes a double-well potential structure within which a mobile electron is used to determine the state of an individual tubulin dimer. Dynamics of the system are determined by the minimization of the overall energy associated with electrostatic interactions between neighbouring electrons as well as thermal effects.
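A minimal Metropolis-style sketch of this "minimize energy, subject to thermal effects" dynamics; the ring lattice, four-neighbour coupling and parameter values below are simplifications invented for illustration (the actual model uses a seven-member A-lattice neighbourhood):

```python
import numpy as np

rng = np.random.default_rng(0)
ROWS, PF = 40, 13      # rows of dimers x 13 protofilaments (toy geometry)
J, kT = 1.0, 0.5       # illustrative coupling and thermal energy (a.u.)
s = rng.choice([-1, 1], size=(ROWS, PF))   # electron in left/right well

def local_energy(i, j):
    # electrostatic-style coupling to four nearest neighbours (ring in j)
    nb = (s[(i - 1) % ROWS, j] + s[(i + 1) % ROWS, j]
          + s[i, (j - 1) % PF] + s[i, (j + 1) % PF])
    return J * s[i, j] * nb

for _ in range(100_000):
    i, j = rng.integers(ROWS), rng.integers(PF)
    dE = -2 * local_energy(i, j)           # energy change if this dimer flips
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        s[i, j] *= -1                      # accept: lower energy, or thermal kick
```

Static, oscillating or propagating patterns in s then play the role the abstract assigns to information processing; the quantum variant described next additionally admits tunnelling transitions.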
Classically, the model allows transitions for electrons with sufficient energy to overcome the potential barrier when the new configuration lowers the system’s energy or, if the configuration raises the system’s energy, with a finite probability. Quantum mechanically, the model allows the electron to tunnel through the potential barrier, permitting transitions for which the system’s energy is lowered even if the electron does not possess the energy needed to overcome the potential barrier, or, for configurations that raise the system’s energy, with the same finite probability as in the classical scenario. The emergence of self-organizing patterns that are static, oscillating, or propagating in time is taken as the determining factor of the system’s capability to process information. Second, to further the investigation of quantum processes taking place in microtubules, an exciton model of the microtubule is used. Tubulin monomers are taken as quantum-well structures containing an electron that exists in its ground state or first excited state. Following previous work that models the mechanisms of exciton energy transfer in Scheibe aggregates, the issues of determining the strength of exciton and phonon interactions, and their effect on the formation and dynamics of coherent exciton domains within microtubules, are discussed. Estimates of energy and time scales for excitons, phonons, their interactions and thermal effects are also presented.   C 36  Consciousness As Access To Active Information: Progression, Rather Than Collapse, Of The Quantum Subject  Jonathan Edwards <jo.edwards@ucl.ac.uk> (Medicine, University College London, London, England)    The link between consciousness and quantum theory often draws on the views of von Neumann on wave function collapse. From a biological standpoint, several arguments favour a different approach. Any quantum mechanical process involved needs to link in to classical biophysics, and the most plausible route is through the correspondence principle (as Feynman’s QED life history of a photon scales up to classical diffraction by Young’s slits). In this scaling up, wave function collapse loses significance, the dynamics being dictated by the laws of linear progression (von Neumann type 2, rather than type 1). Moreover, wave function collapse is not required by all interpretations of QM, a widespread view being that it is neither useful nor meaningful to divide the quantum system into arbitrarily defined ‘sub-processes’. There are also severe difficulties in defining the boundaries of the ‘quantum system’ with wave function collapse or decoherence approaches. Linear progression through a physical environment (Young’s slits, brain) involves an interaction with the environment which entails access by the quantum system (e.g. photon) to what Bohm and Hiley usefully call ‘active information’ about its environment. Access to information is both an indivisible and a bounded phenomenon. Since consciousness appears to be a state of access to a rich, indivisible, yet bounded pattern of information, this makes access to active information at the quantum level an attractive explanation. In macroscopic structures, the life histories of quantum systems represented by particles with rest mass, such as electrons, with wavelengths close to the size of atoms, are both too 'fine-grained' and too biologically irrelevant to be plausible as ‘quantum-dynamic subjects’ accessing the active information that would be our experience of the world.
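A back-of-envelope check of the wavelength scales at stake; the phonon speed and frequency used below are assumed placeholder values, not figures from the abstract:

```python
import numpy as np

h, k = 6.626e-34, 1.381e-23      # Planck and Boltzmann constants (SI)
m_e, T = 9.109e-31, 310.0        # electron mass, body temperature

# Thermal wavelength of an electron, lambda = h / sqrt(3 m k T):
# a few nanometres, i.e. near the atomic/molecular 'fine-grained' scale.
print(f"thermal electron wavelength ~ {h / np.sqrt(3 * m_e * k * T):.1e} m")

# An acoustic phonon mode, lambda = v / f, with a water-like sound
# speed and an assumed GHz mode: micron scale, matching dendrites.
v, f = 1500.0, 1.5e9
print(f"acoustic phonon wavelength ~ {v / f:.1e} m")
```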
However, massless bosons such as photons and acoustic phonons, with much longer wavelengths, might be candidates. Fields or modes of large numbers of such bosons can mediate classical mechanical effects and lose nothing of their indivisibility of acquisition of information in doing so. No form of phase coherence is required for this aspect of QM to apply on a large scale. The implied identity of the ‘quantum-dynamic subject’ might upset philosophers, but that can happen with biology. Phononic modes in cell membranes may be attractive candidates for quantum-dynamic subjects because their functional wavelengths could match the micron scale at which electrical information is held in neuronal dendrites, and the known piezoelectric properties of the membrane would allow coupling of electrical information (and not irrelevant ‘cell housekeeping’ processes) to the phononic mode. Recent thermodynamic reassessment of the action potential suggests that electromechanical coupling may be integral to membrane excitability. Electromechanically coupled modes are documented in neurons in the inner ear. Whether such modes can, or should, involve groups of cells is uncertain. Relevant phononic modes in cortical neurons would be at or beyond the limit of current direct detection methods but might be probed indirectly with, e.g., anaesthetics or calcium levels. Standing wave modes based on local longitudinal ‘dendritic telescoping’, possibly linked to cytoskeletal microtubules, might be the most plausible.   C 37  Existence and consciousness  Peter Ells <peterells@hotmail.co.uk> (Oxford, UK)    Stephen Hawking (1988) wrote, “What is it that breathes fire into the equations and makes a universe for them to describe? The usual approach of science of constructing a mathematical model cannot answer the questions of why there should be a universe for the model to describe. Why does the universe go to the bother of existing?” This paper cannot answer these “What” or “Why” questions. Instead it asks, “What do we mean when we say that our universe actually exists, and how does this concept of actual existence take us beyond mere mathematical existence?” The paper considers various types of existence: experiential existence, of experiential beings possessing subjective, qualitative, perceptual states (which do not necessarily amount to thinking states); physical existence, of external objects that can be inferred by collating the percepts of experiential beings; material existence, of entities obeying physical laws without reference to experiential beings; and finally mathematical existence, which is merely formal description that is logically consistent. There might not be any life elsewhere in our universe, and it is quite conceivable that, had the history of our planet been slightly different, life might never have emerged here. In these circumstances, our universe would have completed its history lifeless, and thus (according to the dominant viewpoint) would only ever have contained entities with material existence. In such circumstances the problem arises that material existence (as will be shown) collapses into mere mathematical existence. We can be very confident that we and our universe have more than mere mathematical existence, and so something must be wrong. The solution I argue for here is that all material existence must in fact be experiential existence, and so all matter is subjective and experiential in its essence. From a study of what it means for a universe actually to exist, I thus arrive at panpsychism.
A dodecahedral universe is used as an example to show how conceptually simple experiential beings might be. Finally, I sketch in very general terms how the well-known, problematic characteristics of quantum theory are in harmony with panpsychism. Hawking, S. (1988), A Brief History of Time (London: Bantam Press).   C 38  Does microbial information processing by interconnected adaptive events reflect a pre-mental cognitive capacity?  Gernot Falkner, Kristjan Plaetzer, Renate Falkner <Gernot.Falkner@sbg.ac.at> (Organismic Biology, University of Salzburg, Salzburg, Austria)     We discuss possible cognitive capacities of bacteria, using a model of microbial information processing that is based on a generalized conception of experience, from which all traits characteristic of higher animals (such as consciousness and thought) have been removed. This conception allows relating the experience of an organism to the phenomenon of physiological adaptation, defined as a process in which energy-converting subsystems of a cell are conformed – in an interconnected sequence of adaptive events – to an environmental alteration, aimed at attainment of a state of least energy dissipation. In adaptive events the subsystems pass, via an adaptive operation mode, from one adapted state to the next. An adaptive operation mode occurs when a subsystem is disturbed by an environmental alteration. In this mode the environmental change is interpreted with respect to a reconstruction that appears to be useful in the light of previous experiences. Connectivity exists between adaptive events in that the adapted state resulting from an adaptive operation mode stimulates adaptive operation modes in other subsystems. When in these systems adapted states have been attained, the originally attained adapted states are no longer conformed and have to re-adapt, and so on. In this way adaptive events become elements of a communicating network in which, along a historic succession of alternating adapted states and adaptive operation modes, information pertaining to the self-preservation of the organism is transferred from one adaptive event to the next: the latter “interprets” environmental changes by means of distinct adaptive operation modes, aimed at preservation of the organism. The result of this interpretation again leads to a coherent state that is passed on to subsequent adaptive events. A generalization of this idea to the adaptive interplay of other energy-converting subsystems of the cell leads to a dynamic view of cellular information processing in which an organism constantly observes its environment and re-creates itself in every new experience. This model of cellular information processing is exemplified in the adaptive response of cyanobacteria to external phosphate fluctuations. It is shown that adaptive processes have a temporal vector character in that they connect former with future events. On the one hand they are influenced by antecedent adaptations, so that in this respect a cellular memory is revealed in adaptive processes. On the other hand they bear an anticipatory aspect, since adaptation to a new environmental situation occurs in a way that meets the future requirements of the cell. A computer model of the intracellular communication about experienced environmental influences allowed simulating the experimentally observed adaptive dynamics, when during the simulation the program altered the parameters of the model in response to the outcome of its own simulation.
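Purely as a toy of the feedback structure just described (interconnected adaptive events, memory of antecedent adaptations, an anticipatory target), with dynamics invented for illustration rather than taken from the cited model:

```python
import numpy as np

rng = np.random.default_rng(1)
p1, p2 = 1.0, 1.0        # adaptive parameters of two coupled subsystems
memory = 1.0             # "cellular memory" of antecedent adapted states

for step in range(200):
    env = 1.0 + 0.5 * np.sin(step / 10) + 0.1 * rng.standard_normal()
    target = 0.7 * env + 0.3 * memory    # anticipation blends signal and memory
    p1 += 0.3 * (target - p1)            # adaptive operation mode of subsystem 1
    p2 += 0.3 * (p1 - p2)                # its adapted state disturbs subsystem 2
    memory = 0.9 * memory + 0.1 * p2     # the network's history feeds back
print(round(p1, 2), round(p2, 2))
```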
Falkner R., Priewasser M., & Falkner G. (2006): Information processing by cyanobacteria during adaptation to environmental phosphate fluctuations. Plant Signaling and Behavior, 1, 212-220. Plaetzer K., Thomas S. R., Falkner R., & Falkner G. (2005): The microbial experience of environmental phosphate fluctuations. An essay on the possibility of putting intentions into cell biochemistry. J. Theor. Biol. 235, 540-554.   C 39  Mind backward paths: from axons to dendrites passing through quantum memories   Alberto Faro, Daniela Giordano <albfaro@gmail.com> (Ingegneria Informatica e Telecomunicazioni, Universita' di Catania, Catania, Italy)    Neural networks in the brain convey forward signals from dendrites to axons, whereas backward paths have not yet been identified. This makes it difficult to explain how the mind, an open system mutually dependent on the environment, reaches equilibrium states with the surrounding context. In a previous work the authors proposed five hypotheses envisaging a model (i.e., the Frame Model of the Quantum Brain) in which the adaptation between self and environment is regulated by a higher-order cybernetic loop without entailing any “entity” in the mind. This paper refines the five hypotheses, proposing that quantum memories have a role in implementing the backward paths from axons to dendrites, as follows: • Human activity is sustained by two quantum fields, i.e., the cortical and ordering fields produced by the vibrations of the myriad of dipoles existing at the neuronal and cytoplasm level, allowing the subjects to enact each action (coded by an Umezawa corticon) of a scene (coded by a Faro & Giordano orderon) depending on the performed actions and the planned ones. Awareness of the scene is only achieved a posteriori, when the scene has been concluded without contradicting the initial hypothesis. This extends the notion of “backward time referral”. • The orderons are classified according to their regularities by a Clustering Quantum Field (CQF) produced by the vibrations of dipoles at the dendrite level. This generates an ontological space whose axes are coded by CQF particles (i.e., Faro & Giordano clusterons). • The problem at hand and some external representation selectively activate the mRNAs on the dendrites, which in turn activate the axons of the related neuronal groups. The excitation of the postsynaptic potentials generates a global EEG profile together with the emission of photons specific to the given input. These photons activate a set of orderons (coded by vacuum states). This explains why the received inputs direct attention towards areas of the ontological space containing scenes having some analogy with the situation hypothesized by the subject. The collapse of the activated vacuum states towards the state representing the prevailing scene produces the emission of photons that inhibit or reinforce the synthesis of the proteins on the dendrites. This loop evolves until the stimuli received and the codification of the information perceived by the self in correspondence to these stimuli are each the mirror of the other in the DQBM (Dissipative Quantum Brain Model) sense. • If the subjects recognize that they lack the experience to deal with the current situation, a new scene and a related orderon are created consciously by cross-over and mutation of relevant existing scenes. The inputs of the new scene will, in future similar situations, reactivate the zones of the ontological space containing the scenes that originated the new one.
• The external representations mediate the communication of the scenes among people in order to create the conventions and rituals that are at the basis of social life. Empirical evidence underlying the model, and hypotheses to be tested, will be pointed out, thus identifying the lines of future work.   C 40  Differentials of Deep Consciousness: Deleuze, Bohm and Virtual Ontology  Shannon Foskett <foskett@uchicago.edu> (University of Chicago, London, Canada)    This paper will explore the relevance for the study of consciousness of the surprising relationship between David Bohm’s Implicate Order and the ontological thought of the late French philosopher Gilles Deleuze. The uncanny connection between Bohm’s thought and the oft-misrepresented work of various “postmodern” philosophers such as Derrida or Lacan has been addressed most notably by mathematician and cultural theorist Arkady Plotnitsky. Plotnitsky’s work, however, stops short of looking at Deleuze and does not consider the relationship to consciousness. I would like to suggest the mutual relevance of Deleuze and Bohm for scholars of their work, but also, and more importantly, the new flexibility that their combined vision might offer for theorizing consciousness in wider disciplinary contexts and in conjunction with existing notions of consciousness in the humanities. This ability to address more prevalent conceptions of consciousness in the academic community will be in increasing demand as empirical research on consciousness matures. Fortunately, there already exists an intuitive understanding on the part of some humanities scholars of an implicit relationship between quantum theory and ideas within what can be loosely considered “postmodern” thought. Bohm’s “holomovement” and “implicate order” express much the same ideas as the notion of intensive depth in Deleuze. Both sets of terminology describe being as a process of (en)folding and unfolding. Deleuze even uses the same descriptor, referring to intensive depth as “an implicated order of constitutive differences.” This depth corresponds to the infinite nature of the wave form of each potential particle. In a quantum field theory context, the situation is described in terms of an infinite overlapping of fields, where the field replaces the sub-atomic particle as the “ultimate, fundamental concept in physics, because quantum physics tells us that particles (material objects) are themselves manifestations of fields.” This set of all matter waves is nothing but Deleuze’s pure spatium, from which “emerge at once the extensio and the extensum, the qualitas and the quale.” Being, in its intensive depths, is drawn out, or explicated, through a motion of different/ciation that produces it as extensity. This causes intensity to appear “outside itself and hidden by quality.” For Bohm, the explicate order is likewise merely a limited case of the implicate order. I will argue that Deleuze’s unique concept of the Idea as a particular point of intensity within the Implicate may be a theoretical placeholder for phenomena in quantum-based models of consciousness. Finally I will discuss how Deleuze’s model contributes to Bohm’s with an understanding of what role chance processes might play within various levels of consciousness.
C 41  Intensity of awareness and duration of nowness   Georg Franck, Harald Atmanspacher <franck@iemar.tuwien.ac.at> (Digital Methods in Architecture and Planning, Vienna University of Technology, Vienna, Austria)    It has been proposed to translate the mind-matter distinction into terms of mental and physical time. In the spirit of this idea, we hypothesize a relation between the intensity of awareness in mental presence and a crucial time scale (some tens of milliseconds) relevant for information updates in mental systems. This time scale can be quantitatively related to another time scale (some seconds) often referred to as a measure for the duration of nowness. This duration is experimentally accessible and thus offers a suitable way to characterize the intensity of mental awareness. Interesting consequences with respect to the idea of a generalized notion of mental awareness, of which human consciousness is a special case, will be outlined.   C 42  Overcoming Discontinuity and Dualism in Modern Cosmology  Mary Fries <mfries@ciis.edu> (Philosophy, Cosmology, and Consciousness, California Institute of Integral Studies, Oakland, California)    Begun as an explanation for the stepwise emission and absorption of energy observed in physical systems, quantum mechanics, by its very name, asserts the discontinuity of matter, a modern atomism that influences the development of current attempts to unite quantum mechanics and general relativity. The ensuing schemata of superstring theory and loop quantum gravity reinforce our tendency to objectify the foundations of an evolving reality, and while, via these ideas, we have transcended the billiard-ball notion of point-like particles, we have in no way evaded reductive abstraction. The spatiotemporal limitations of human form justify this natural tendency toward generalization, yet this predisposition still recurrently hinders scientific progress. While formulaic abstractions do no harm insofar as we recognize them as limitations of our assumptions, in order to truly integrate quantum mechanics and relativity we will need to overcome our expectation that subatomic happenings mirror the behavior of macroscopic bodies. According to modern theory, spin nets or strings (depending on the model used), the supposed 'fundamental particles' of reality, form the very fabric of the universe. They do not embed themselves within space-time; they define space-time. Hence, a supposition of their discreteness implies discreteness of both time and space. Planck's contribution of a 'smallest size' and a 'smallest time', the Planck length and Planck time respectively, fortifies the discretization of reality, as does Heisenberg's uncertainty principle by placing a lower limit on our capability to conduct measurement. But do a handful of constants and a threshold to our investigations justify delimiting our work by a potentially premature quantification of the natural universe? History abounds with cases of simplifications of mind being finally overturned by less intuitive explanations. The redefinition of Bohr's atomic model, the discovery of cosmic inflation, and perhaps the most popularized realization, that of the earth as a round satellite of the sun, all required significant mental reorientation to the cosmos. Quantum mechanics continues to baffle those seeking to assimilate its implications into minds predisposed to entirely different logic and causal relationships.
As every abstraction is by definition a limitation, it may well be the case that, in much the same way, our attachment to quanta holds us back from an integration of the four forces. But would such a re-envisagement of the 'fundamental particles' necessarily imply a continuous universe instead? Perhaps. But while certain problems are more easily formulated from within the framework of such a dualism, it may well be the case that the much-anticipated union will occur to those who refuse to be bound: to those who come to view reality as organism, perhaps with a mixture of continuity and breaks such as black holes and the seeming origin of the universe, as a universe that favors its own direction over constructions of the human mind. Within a more accommodating model, the flexibility of the wave and the stability of the particle may be formulated in a higher-order abstraction with broader limitations and wider reconciliations, wherein mind can finally be integrated as a fundamental component of reality.   C 43  Modeling Consciousness in Complex Spacetime Using the Methodology of Quantum and Classical Physics   Anatoly Goldstein <a_goldshteyn@yahoo.com> (Voice Center, Massachusetts General Hospital, Boston, MA)    It is argued that even if the quantum mechanical formalism does not directly apply to consciousness mechanisms, the methodology used for the solution of the Schrödinger equation and its interpretation may be very useful for the modeling of consciousness. According to I. Thompson (2002), the Hamiltonian and wave function of the Schrödinger equation, resulting in probabilities of observation outcomes, correspond to conscious activities such as intentions and thoughts resulting in actions. R. Penrose & W. Rindler (1984) indicated that "space-time geometry, as well as quantum theory, may be governed by an underlying complex rather than real structure". A geometric model of consciousness (E. Rauscher & R. Targ, 2001) shows the importance of imaginary space and time coordinates in the interpretation of non-local consciousness phenomena such as remote viewing and precognition. The current author suggests modeling the information dynamics of consciousness with a complex function in complex spacetime. This automatically accounts for the ability of consciousness/awareness to access imaginary coordinates of complex spacetime. Max Born's formula shows how one can extract real-valued observable data from a complex-valued function, which might be applicable to the modeling of consciousness. Consciousness is commonly considered to be directly related to vibration processes such as brainwaves and electrical activity in neural membranes. It is suggested to model these processes with a linear combination of complex exponentials (CE), similar to the complex form of the Fourier expansion; see K. Pribram (2003). A single CE represents a solution of the classical harmonic oscillator problem in complex spacetime. If we assume that the focus of human intention can, in a zeroth approximation, be modeled by a virtual particle that we call the intenton, and describe the behavior of the intenton in the human brain/body with the well-known quantum mechanical model of a particle in a 3D box, we again arrive at a solution containing CEs. Group-theoretic aspects of modeling consciousness-related vibrations with CEs are considered. If we assume that human consciousness is supported in part by tachyons rotating around the human body, then precognition may be possible due to the ability of the superluminal tachyon to cross its own past light cone (move backwards in time).
This hypothesis is consistent with the results of M. Davidson's (2001) numerical simulation of tachyon circular (in space) and helical (in spacetime) movement based on Feynman-Wheeler electrodynamics, seemingly confirmed in J. Cramer's (1986) version by S. Afshar's (2004) experiment. The role of entropy, information, and symmetry in modeling moral aspects of consciousness is considered. The author suggests a mechanism of reverse psychology (reactance) based on Faraday's law of electromagnetic induction applied to the interaction of two or more minds. Following A. & A. Fingelkurts (2001), the minds in the suggested mechanism are represented by human brain biopotential fields. Based on K. Pribram's (1987) holonomic brain theory, the current author suggests that interference of neural oscillations may be responsible not only for the memory mechanisms of image storage/retrieval, but also, potentially, for the very essence of the active operational function of consciousness. Specifically, if we attempt to establish a correspondence between waves (characterized by frequency, amplitude and phase) and elementary ideas (e.g., the idea of a number), then we can conclude that interference of coherent waves in the brain may be responsible for, or at least closely related to, the ability of consciousness to add numbers, while interference of pi-phase-shifted brainwaves might support the conscious operation of subtraction. It remains to be seen whether the author's natural hypothesis holds: that brain math, logic, and information processing/thinking in general are based on the interference of neural oscillations and on K. Pribram's storage in, and retrieval from, memory of the resulting interference patterns.  C 44  Quantum Mechanics, Cosmology, Biology and the Seat of Consciousness   Maurice Goodman <maurice.goodman@dit.ie> (School of Physics, Dublin Institute of Technology, Dublin 8, Ireland)    All fundamental particles and structures obey the uncertainty principle. If we ignore particles and structures traveling at close to the speed of light c (i.e. >0.9c), the maximum uncertainty in momentum is of order mc, where m is the mass of the structure/particle. This implies there is a minimum region of space that such particles and structures can be confined to without violation of the uncertainty principle. Furthermore, the mass of key structures found in nature generally varies in proportion to R^2, where R is size, and not R^3 as might be expected. By assuming all fundamental particles also obey this relation, a sequence of “minimum” masses M can be calculated, one from another, using M(n+1) = h/(c·R_n) (n = 0, ±1, ±2, …), where h is Planck’s constant. These coincide with the fundamental particle/structure masses found in nature over 80 orders of magnitude of mass. This allowed a prediction for the neutrino mass, 20 years ago, that recent experimental results agree with. The above mass sequence points to a direct link between Biology and the cell on the one hand and the neutrino and the weak force on the other. No one can seriously buy into the notion that the millions of millions of complex molecules within a cell exchange information and organize themselves by nearest-neighbour interactions only. The “hand in glove” sine qua non of all molecular transfers of information in biology is simply not sufficient to explain overall co-ordination within and between cells. There must also be almost instantaneous, long-range communication to prevent chaos. Quantum coherence is an attractive candidate here.
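To make the claimed cell-neutrino link concrete, a small script applying the mass relation M(n+1) = h/(c·R_n) to an assumed typical cell size of 10 micrometres; the cell size is the only input, and the exercise is purely illustrative:

```python
h, c = 6.626e-34, 2.998e8      # Planck's constant and speed of light (SI)
eV_per_kg = 1 / 1.783e-36      # conversion factor: 1 eV/c^2 = 1.783e-36 kg

def next_minimum_mass(R_n):
    """M(n+1) = h / (c * R_n): the 'minimum' mass linked to size R_n."""
    return h / (c * R_n)

m = next_minimum_mass(1e-5)    # R_n = 10 micrometres, a typical cell size
print(f"{m:.2e} kg  ~  {m * eV_per_kg:.2f} eV/c^2")
# -> ~2e-37 kg, i.e. ~0.1 eV/c^2, the order of magnitude of current
#    neutrino mass bounds, which is the link the abstract asserts.
```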
The range r at which quantum coherence ceases is given by r = h/(3mkT)^0.5, where m is the mass of the particles involved, T is the absolute temperature and k is Boltzmann’s constant. The lightest particle associated with chemical processes is the electron, and this limits r to less than 10^-8 m for all electromagnetic processes at room temperature. This is too short for cellular and intercellular communication and information transfer. The equivalent range for neutrinos at room temperature is less than 10^-4 m, which is the scale on which neurological processes occur. Therefore, if quantum effects are at the root of consciousness in the mind, they are more likely to relate to the neutrino and the weak force than to the electron and the electromagnetic force. Neutrinos would also provide the two necessary characteristics of a substrate for quantum computation, i.e. insulation from the cell sap (electromagnetic processes) to allow for quantum entanglement, and the possibility of intercellular continuity to allow for multicellular quantum coherent states. While the input/output signals to/from the mind are clearly electromagnetic processes, the “processing” of these signals could conceivably be based on the half-spin “quantum bit” neutrino. The linchpin between the electromagnetic inputs/outputs and the processing in the mind would be spin. In short, the mind may exhibit consciousness as a result of the weak force and the neutrino, and not the electromagnetic force and the electron.   C 45  Time Reversal Effects in Visual Word Recognition  Anastasia A. Gorbunova, Samuel Levin <gorbunov@email.arizona.edu> (Psychology, University of Arizona, Tucson, AZ)    The present study investigated time-reversal effects in visual word recognition using a traditional technique called lexical decision with masked priming. In this paradigm the subject is presented with strings of letters of various durations on a computer screen. The first string is a forward mask (usually a sequence of non-linguistic symbols such as hash marks), which is followed by the target letter sequence. The subject's task is to decide whether the target letter sequence is a word or not. A prime, usually related (e.g. one letter different from the target) or unrelated (e.g. all letters different from the target), is presented briefly after the forward mask and before the target. The subject is usually unaware of the prime. In this type of experiment, it has been shown that presentation of a related prime facilitates the processing of the target, thereby producing faster reaction times when compared to trials where the target is preceded by an unrelated prime. The current study attempted to move beyond conventional applications of this paradigm by introducing a post-prime that followed the target, in addition to the common pre-prime that precedes the target. The latter addition was aimed at exploring some of the current ideas of time and retro-causation by comparing the amount of priming obtained in the following conditions: (i) a 50 ms either identical or unrelated pre-prime with a dummy post-prime (presented as a row of x's), (ii) a 30 ms identical pre-prime with either a 30 ms identical or a 30 ms unrelated post-prime, (iii) a 30 ms unrelated pre-prime with either a 30 ms identical or a 30 ms unrelated post-prime, and (iv) a 50 ms either identical or unrelated post-prime with a dummy pre-prime. Additionally, half of the words in this experiment were emotional (e.g.
murder) and the other half were neutral (e.g. garden). This was done to test whether emotional words would produce more priming than neutral ones in the pre-prime condition, the post-prime condition, or both. The results of this study are intended to shed light on the influence of emotional states on visual word recognition, as well as to provide evidence for small-scale temporal reversal effects in conscious and unconscious processes.  C 46  Integral Aspects Of The Action Principle In Biology And Psychology: The Ultimate Physical Roots Of Consciousness Beyond The Quantum Level  Attila Grandpierre <grandp@iif.hu> (Konkoly Observatory of the Hungarian Academy of Sciences, Budapest, Zebegeny, Hungary)    During the last centuries it became more and more clear that the highest achievement of modern physics is its most fundamental law, the action principle. Yet the action principle itself is not understood: its physical content is obscure, and its integral character is ignored. Here we consider the nature of action and find that it has a biological character. We point out that the action principle usually takes a minimum value in physical systems, while in biological organisms it usually takes its maximal value. Therefore, we can recognize in the most general form of the already established action principle the first principle of biology. We show that biological organisms first employ its maximum version to determine the biological endpoint; once the endpoint is determined on a biological basis, the realization of the physical trajectory occurs on the basis of the minimum version. We demonstrate that it is the hitherto ignored integral character of the action principle that serves as the ontological basis of the unity of living organisms, offering a wide variety of physical processes not yet considered because of their biological and teleological nature. We propose a new interpretation of the classic two-slit experiment of quantum mechanics, offering a new, causal interpretation of quantum physics that connects it in a fundamental way with biological processes. We show that the biological form of the action principle acts in the realm beyond quantum physics and represents a new frontier of science. It offers integral principles and quantitative methods to determine the biological equations of motion of living organisms, therefore making it possible to extend the range of modern science and develop a real theoretical biology. We present fundamental equations of biology, numerical methods and examples, propose new experiments, and present experimental predictions. We derive from the biological principle such fundamental life phenomena as self-initiated spontaneous macroscopic activity, regeneration, regulation, homeostasis, and metabolism. We present detailed evidence on the concrete physical aspects of the elementary consciousness of quanta, such as the instantaneous quantum orientation of quanta in their environment, behaving “as if” they “know” about the whole situation, possessing collective memory, and showing an ability to learn. By clarifying the concrete physical aspects of consciousness, science becomes able to approach consciousness and self-consciousness on a mathematical, physical and biological basis. In this way, it seems we can enter a new era of quantitative biology and psychology above the molecular level, based on biology meeting physics below the quantum level.
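As a concrete toy of the minimum-action selection invoked above: a discretised free-particle action evaluated over a one-parameter family of trial paths (natural units; the family of paths is an illustrative choice). The biological variant proposed in the abstract would replace the argmin with an argmax over an admissible family:

```python
import numpy as np

# Free particle from x=0 at t=0 to x=1 at t=1; trial paths are the
# straight line plus a sinusoidal bow of amplitude a (endpoints fixed).
m = 1.0
t = np.linspace(0.0, 1.0, 1000)
dt = t[1] - t[0]

def action(a):
    x = t + a * np.sin(np.pi * t)
    v = np.gradient(x, t)
    return np.sum(0.5 * m * v**2) * dt    # S = integral of (T - V), with V = 0

amps = np.linspace(-1.0, 1.0, 201)
S = np.array([action(a) for a in amps])
print("amplitude minimising S:", amps[np.argmin(S)])   # ~0: the straight path
```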
C 47  Neuro-quantum associative memory for letter-strings and faces   Tarik Hadzibeganovic, Chu Kiong Loo (Faculty of Engineering and Technology, Multimedia University, Melaka, Malaysia) <ta.hadzibeganovic@uni-graz.at> (Language Development & Cognitive Science, University of Graz, Graz, Austria)    We present an integrative, two-stage, complex-valued neuro-quantum hybrid model of face-specific and letter-string-specific neural activations, consistent with the recent report of Tarkiainen, Cornelissen, and Salmelin (2002). In the first stage, at about 100 ms after stimulus onset, the low-level visual feature analysis in the occipital cortex (V1) is represented by the natural production of Gabor-like receptive fields. This processing stage was, as shown by Tarkiainen et al. (2002), common to both the analysis of letter-strings (words) and of faces. In the second stage, about 150 ms after stimulus presentation, we show that the object-level analysis in the inferior occipito-temporal cortex is representable by the Hebbian-like multiple self-interference of the resulting, quantum-implemented Gabor wavelets (Perus, Bischof, & Loo, 2005). With some differences in hemispheric distribution, both letter-strings and faces activate largely overlapping areas in the inferior occipito-temporal cortex, with practically identical onset and peak latencies (Tarkiainen, 2003). We account for these equalities in activation and the corresponding processing similarities of words and faces with our quantum associative network model, by obtaining similar face and letter-string reconstruction (recognition) quality functions. Our modeling results argue in favor of a quantum-like nature of conscious visual information processing in the human brain.   C 48  A steady-state EEG phase synchrony model of consciousness: insights from Transcendental Meditation practice   Russell Hebert, Rachel Goodman, Fred Travis, Alarik Arenander, Gabriel Tan <tmeeg@aol.com> (Neuroscience, Maharishi University of Management, Houston, TX)     This presentation adopts the following perspectives: that a fully developed theory of consciousness is compatible with quantum field theory; that it must be holistic (non-reductionistic); that it must include a concept of the “self”; and that it must address the origin of consciousness and resolve the “binding” problem. In the presented research (Hebert et al., 2005) two approaches have been taken: subjective and objective. The subjective, theoretical approach is derived from Maharishi Vedic Science, an ancient model of consciousness with modern applications. The objective approach involves research utilizing EEG alpha phase synchrony analysis. Maharishi Vedic Science describes consciousness as inner and outer. The inner (transcendental) value explains consciousness as an unbounded field underlying and informing human experience. When the individual accesses this state, it is called self-referral consciousness, referred to below as “unified wholeness”. When the individual experiences the perception of thoughts and objects, this type of conscious awareness is termed object-referral consciousness (below, “unified diversity”). Both the “ground state” of the universe in quantum physics and the properties of the self-referral state of consciousness are described as unmanifest, de-excited, holistic, unified and field-like (see Hagelin, this volume). Hagelin states that the ground state of the universe is also composed of resonant vibrational modes, which can be referred to as standing waves.
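For the objective approach, a generic sketch of an alpha-band phase-locking value (PLV) computation of the kind used in such EEG synchrony work; this is a simplification for illustration, and the published pipeline of Hebert et al. (2005) may differ:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_plv(x, y, fs):
    """Phase-locking value of the alpha band (8-12 Hz) between two channels."""
    b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))
    phy = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phx - phy))))   # 1 = perfect synchrony

# Toy demo: two noisy 10 Hz signals with a constant phase lag still
# yield a PLV near 1, since only phase *stability* matters.
fs = 256
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * np.random.randn(t.size)
print(round(alpha_plv(x, y, fs), 2))
```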
From both the research conducted and the theoretical background, we conclude that alpha standing waves may connect individual consciousness to the quantum level of Nature’s functioning. In line with this idea, Chris King (in Tuszynski, ed., 2006) suggests a plausible link “between EEG phase coherence in global brain states and anticipatory boundary conditions in quantum systems…” (p. 407). New research has shown that the phase behavior of alpha controls global cortical excitability (Klimesch et al., 2007). Our study agrees with this hypothesis. We further suggest, however, that global and instantaneous shifts of excitability can only occur in stationary environments. The alpha standing waves found in our study are the epitome of the globally de-excited cortex, a “ground” state of consciousness corresponding to John’s (2001) field theory postulations. This, in relation to quantum physics, is a possible description of the origin of consciousness. Recent developments agree with our proposal that alpha phase synchrony may also provide the solution to the binding problem. Palva and Palva (2007) suggest that alpha-gamma cross-frequency phase synchrony (“unified diversity”) orchestrates the creation of each “snapshot” of discrete perception. The emerging picture is that changing modes of alpha regulate perceptual frames within the boundaries of time and space (the binding problem), and that alpha likewise frames the timeless infinity of self-referral consciousness described as “unified wholeness”. Palva, S. and Palva, J.M. (2007), “New vistas for alpha band oscillations”, Trends in Cognitive Neuroscience 34(4), 150-8. Hebert, R. et al. (2005), “Enhanced EEG alpha phase synchrony during Transcendental Meditation”, Signal Processing 85, 2213-2232. Klimesch, W. et al. (2007), “EEG oscillations: the inhibition-timing hypothesis”, Brain Research Reviews 53(1), 63-88. John, E.R. (2001), “A field theory of consciousness”, Consciousness and Cognition 10, 184-213. King, C., in The Emerging Physics of Consciousness (Tuszynski, J., ed.), Springer, Berlin, 2006.   C 49  The Role of Consciousness as Universal (Classical) and Contextual (Quantum) Meaning-Maker   Patrick Heelan <heelanp@georgetown.edu> (Philosophy, Georgetown University, Washington, DC)    Thesis: Human consciousness is the Governor of Mental Life (1) through its function of constituting the world of human experience by meaning-making or – to use Husserl’s term – intentional constitution. The forms of meaning-making are syntheses of experience through the formal modeling of individual perceptual objects under a categorial description. These formal models are extensional (space-like) symmetries based on a group-theoretic similarity of common qualitative (meaningful intensional) features that fulfill the same kind of cognitive model as characterizes quantum physics, namely Hilbert space. Individual perceptual objects are recognized interpretatively on the basis of common meaningful qualitative features organized in a group-theoretic synthesis of a manifold of profiles that are then accepted by the perceiver as having a common categorial description named in language. Having a common categorial description is for something to be recognized as belonging to a symmetry group of particular exemplars. Both individual and categorial descriptions involve group-theoretic ways of organizing the interpretation of the flowing inputs from the sensory field in a constructed synthesis that functions in sustaining and developing the quality of human life.
As such, both individual and categorial syntheses serve human life, and do so through the organization of human decision-making and activity, some under universal (classical) group-theoretic symmetries and others under contextual (quantum-like) group-theoretic symmetries. As in quantum theory, part of this process is unconscious and part is dialogical, social, deliberate, and linguistic (in the sense known as systemic functional linguistics; Tomasello, Halliday, Thibault, et al.). Karl Pribram’s notion of a windowed Fourier transformation within the dendritic fibers could well be the quantum neurological aspect of this process (2). Notes: (1) This term is used by Donald, Merlin, A Mind So Rare, Chap. 3 (New York: Norton, 2001); Pribram calls it the ‘central processing complement’, in Pribram, K., Brain and Perception (Hillsdale, NJ: Erlbaum, 1991), p. 96. (2) Pribram, K. (1991), Brain and Perception: Holonomy and Structure in Figural Processing (Hillsdale, NJ: Erlbaum), pp. 26-27.   C 50  Experimental Approach to Quantum Brain: Evidence of Nonlocal Neural, Chemical, Thermal and Gravitational Effects  Huping Hu, Maoxin Wu <hupinghu@quantumbrain.org> (Biophysics Consulting Group, Stony Brook, New York)    Many if not most scientists do not believe that quantum effects play any role in consciousness. Thus, to gain credibility and make real progress, any serious attempt at a quantum brain should also stress experimental work besides theoretical considerations. We have therefore recently carried out experiments, from the perspective of our spin-mediated consciousness theory, to test the possibility of quantum-entangling the quantum entities inside the brain with those of an external chemical substance. We found that applying magnetic pulses to the brain when an anesthetic was placed in between caused the brain to feel the effect of said anesthetic as if the test subject had actually inhaled it. Through additional experiments, we verified that the said brain effect was indeed the consequence of quantum entanglement. These results defy the common belief that quantum entanglement alone cannot be used to transmit information, and they support the possibility of a quantum brain. More recently, we have carried out experiments on simple physical systems and have found that: (1) the pH value of water in a detecting reservoir quantum-entangled with water in a remote reservoir changes in the same direction as that in the remote water when the latter is manipulated, under the condition that the water in the detecting reservoir is able to exchange energy with its local environment; (2) the temperature of water in a detecting reservoir quantum-entangled with water in a remote reservoir can change against the temperature of its local environment when the latter is manipulated, under the condition that the water in the detecting reservoir is able to exchange energy with its local environment; and (3) the gravity of water in a detecting reservoir quantum-entangled with water in a remote reservoir can change against the gravity of its local environment when the latter is remotely manipulated such that, it is hereby predicted, the gravitational energy/potential is globally conserved. These non-local effects are all reproducible and surprisingly robust, and they support a quantum brain theory such as our spin-mediated consciousness theory.
Perhaps most shocking is our experimental demonstration of Newton's instantaneous gravity, of Mach's instantaneous connection conjecture, and of the relationship between gravity and quantum entanglement. Our findings imply, first, that the properties of all matter can be affected non-locally through quantum entanglement mediated processes. Second, the second law of thermodynamics may not hold when two quantum-entangled systems, together with their respective local environments, are considered as two isolated systems and one of them is manipulated. Third, gravity has a non-local aspect associated with quantum entanglement and can thus be non-locally manipulated through quantum entanglement mediated processes. Fourth, in quantum-entangled systems such as biological systems, quantum information may drive such systems to a more ordered state against the disorderly effect of environmental heat. We urge all interested scientists and the like to do their own experiments to verify and extend our findings.  C 51  Consciousness, Coherence and Quantum Entanglement  James Hurtak, AFFS, Basel, Switzerland; Prof. Desiree Hurtak, SUNY-Purchase College, New York <affs@affs.org> (AFFS, Wasserburg, Germany)     Coherence, as a universal organizing principle that opposes the increase of entropy, is present throughout the basic field properties of our natural system. Coherence can be applied not only to local but also to nonlocal, atemporal interactions. Understanding a coherent system would help to examine the number of quantum entanglement measures that quantify the total state, as has been demonstrated by studies on photons, atoms and electrons (Chou, 2005; Bao, 2003). An explanation of the basic coherent properties can also be applied to the behavior of living systems and not only to the physics of matter. Here both the biological and the psychological experience are affected. For the biological experience, we see how there exists a high degree of coherence of a quantum state in the order of living systems, because otherwise any mass movement within the environment would instead create “increasing” random effects. Regarding the psychological experience, which includes cognition, memory, intention, intuition, perception and reasoning, we see coherence working as a “stream” of consciousness flow which manages and focuses life through linear adaptability and the organization of thoughts, events, and actions. However, to apply quantum entanglement to living coherent systems, we need to address both the “mind-body” problem and that of “bioentanglement”. The latter claims that quantum entanglement only becomes applicable to particles that have previously interacted; that is, for neurons to be entangled, there must be some prior physical interaction in the brain. No doubt, the structural world comprises various field and wave structures. The brain process as it is, with neurons, dendrites and molecules (Hameroff, 2006), merely plays an overlapping role, alongside the quantum entanglement which exists throughout nature. The brain exists in its own coherent-entangled field within the larger space-time. Because there is an interaction of structures by forces, in essence there is an exchange of virtual particles that works with the stream of consciousness playing out in our physical existence. This paper will examine recent research and models of entanglement as they apply to coherence (and decoherence) in the nature of biological and psychological systems. Chou, C.W., et al.
C 52
Quantum stochasticity and neuronal computations
Peter Jedlicka <jedlicka@em.uni-frankfurt.de> (Institute of Clinical Neuroanatomy, J.W. Goethe-University, Frankfurt, Germany)

The nervous system probably cannot display macroscopic quantum (i.e. classically impossible) behaviours such as quantum entanglement, superposition or tunnelling (Koch and Hepp, Nature 440:611, 2006). However, in contrast to this quantum 'mysticism', there is an alternative way in which quantum events might influence brain activity. The nervous system is a nonlinear system with many feedback loops at every level of its structural hierarchy. The conventional wisdom is that in macroscopic objects quantum fluctuations are self-averaging and thus unimportant. Nevertheless, this intuition might be misleading in the case of nonlinear complex systems. Because of their high sensitivity to initial conditions, chaotic systems may amplify microscopic fluctuations upward and thereby affect the system's output. In this way stochastic quantum dynamics might sometimes alter the outcome of neuronal computations, not by generating classically impossible solutions, but by influencing the selection among many possible solutions (Satinover, Quantum Brain, Wiley & Sons, 2001). I am going to discuss recent theoretical proposals and experimental findings in quantum mechanics, complexity theory and computational neuroscience suggesting that biological evolution is able to take advantage of quantum-computational speed-up. I predict that future research on quantum complex systems will provide us with novel and interesting insights that might also be relevant for neurobiology and neurophilosophy.
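The amplification Jedlicka appeals to is easy to demonstrate in any chaotic system; here is a minimal sketch (ours, not taken from the cited works) using the logistic map in its chaotic regime:

```python
# Two trajectories of the chaotic logistic map, initially differing by 1e-15,
# roughly the scale at which quantum-level noise could perturb a molecular variable.
r = 4.0
x, y = 0.4, 0.4 + 1e-15
for n in range(60):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if n % 10 == 9:
        print(n + 1, abs(x - y))
# The separation grows by about a factor e^0.69 per step (the map's Lyapunov
# exponent), so the microscopic difference dominates the output within ~50 steps.
```

No classically impossible computation occurs here; the chaos merely selects a different one of the classically allowed outcomes, which is exactly the mechanism the abstract describes.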
C 53
Consciousness as a quantum-like representation of classical unconsciousness
Andrei Khrennikov <Andrei.Khrennikov@vxu.se> (International Center for Mathematical Modeling in Physics, Economy and Cognitive Science, Vaxjo University, Vaxjo, Sweden)

We present a quantum-like (QL) model in which contexts (complexes of, e.g., mental, social, biological, economic or even political conditions) are represented by complex probability amplitudes. This approach makes it possible to apply the mathematical quantum formalism to probabilities induced in any domain of science. In our model quantum randomness appears not as irreducible randomness (as is commonly accepted in conventional quantum mechanics, e.g. by von Neumann and Dirac), but as a consequence of obtaining incomplete information about a system. We pay particular attention to the QL description of the processing of incomplete information. Our QL model can be useful in the cognitive, social and political sciences, as well as in economics and artificial intelligence. In this paper we consider in more detail one special application: QL modeling of the brain's functioning. The brain is modeled as a QL computer. Our model combines classical neural dynamics in the unconscious domain with QL dynamics in consciousness. The presence of an OBSERVER collecting information about systems is always assumed in our QL model. Such an observer can be of any kind: cognitive or not, biological or mechanical. Such an observer is able to obtain some information about a system under observation, and in general this information is not complete. An observer may collect incomplete information not only because complete information is genuinely impossible to obtain. (We note that, according to Freud's psychoanalysis, the human brain can even repress some ideas, the so-called hidden forbidden wishes and desires, and send them into the unconscious.) It may also be convenient for an observer, or a class of observers, to ignore part of the available information, e.g. about social or political processes. In the present QL model of the brain's functioning, the brain plays the role of such a (self-)observer.
[1] A.Yu. Khrennikov, Quantum-like brain: Interference of minds. BioSystems 84, 225-241 (2006).
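The mathematical core of the QL representation (stated here in our notation, following the framework of the cited BioSystems paper) is a contextual generalization of the law of total probability. For a dichotomous variable a with values a1, a2 measured in a given context, the probability of an outcome b acquires an interference term:

\[
p(b)=p(a_1)\,p(b|a_1)+p(a_2)\,p(b|a_2)
+2\cos\theta\,\sqrt{p(a_1)\,p(b|a_1)\,p(a_2)\,p(b|a_2)} .
\]

When cos θ = 0 the classical formula of total probability is recovered; a nonvanishing interference term is what licenses representing the context by a complex amplitude via Born's rule, with no claim of irreducible randomness attached.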
C 54
Process-Philosophy and Mental Quantum Events
Spyridon Koutroufinis <koutmsbg@mailbox.tu-berlin.de> (Philosophy, Technical University of Berlin (TU-Berlin), Berlin, Germany)

The paper investigates the usefulness of the ideas of Alfred North Whitehead for a natural philosophy of organismic processes in general and for the dynamics of the nervous system in particular. Taking the physics of nonlinear dynamic systems and basic considerations of the philosophy of consciousness as a starting point, we expound fundamental principles and concepts of Whitehead's process philosophy. Using these principles, the possibility of integrating modern system-theoretical methods and findings into a new theory of mental and neural events is elaborated in a way that avoids vitalism and reductionism.

C 55
Memory and Time: Spatial-Temporal Organization of Episodic Memory Analyzed from a Molecular Level Perspective
Michael Lipkind <lipkind@macam.ac.il> (Unit of Molecular Virology, Kimron Veterinary Institute, Bet Dagan, Israel)

Human episodic (biographical) memory, including remembrance, storage and retrieval, can be represented as a spatial-temporal arrangement of neural correlates of a current stream of perceived and memorized events accumulated in the brain during an individual's lifetime and constituting the bulk of an individual's "I". While the spatial part of the arrangement is in principle conceivable, any hypothetical mechanism of the temporal part is unimaginable; yet during recollection we know what occurred earlier and what occurred later. The existing theories of neural correlates of memorization are based on two analytical levels: the level of circuits of inter-neuronal connections and the level of the intracellular molecular substrate of the brain-cortex neuronal massifs. The former looks incompatible with the idea of a temporal arrangement of memorized events: any current temporal "assortment" of such events in principle cannot correlate with combinations of rigid anatomical inter-neuronal connections. As to the molecular level, the idea of both the spatial and temporal organization of episodic memory does not seem inconceivable. Hence, the temporal chain of currently memorized events, each one interconnecting with the previously memorized events and to be further connected with those to be memorized in the future, must relate to an integral continuum of the brain's intracellular molecular substrate. However, the mechanism of such a temporal arrangement remains obscure: what ("where"), on the intracellular level, is that "magic" time axis along which the multiple currently memorized events are "strung" (threaded, saved, stored)? Within the existing physical-chemical concepts, the problem seems to be unsolvable. The situation could lead to the assumption that the apprehended temporal succession of memorized events results merely from their mental confrontation and systematization, suggesting that any existence of a genuine temporal arrangement of the currently memorized events is an illusion. The suggested way out of the deadlock is based on the idea of an integral field as a carrier of the memorization. Since the concept of a field is compatible with the time parameter, it can be employed as a competent dynamic correlate of current temporal memorization. Accordingly, memorization of any particular event is correlated with a respective change of the field "configuration", expressed as a dynamic state determined by the values of the field parameters. However, if the postulated field is grounded in any known physical field, e.g. the electromagnetic field, it must originate from the physical-chemical properties of the brain's molecular substrate as its source. Since such a "circular", evidently tautological conclusion has no causal value, a concept of an autonomous field irreducible to the established physical fundamentals is suggested as a correlate of memorization. Published models of autonomous fields as carriers of consciousness (Libet, Searle, Sheldrake) were criticized as tautological, metaphoric, or esoteric (Lipkind, 2005). The suggested theory of memorization, based on the theory of the irreducible biological field by Gurwitsch (1944), was elaborated earlier (Lipkind, 2003, 2007), the present communication being its further development. Thus, episodic memory (biographical events) and semantic memory (an individual's store of knowledge) are represented by molecular "traces" left by afferent to-be-perceived stimuli projected upon the brain's autonomous field-determined intracellular molecular continuum.

C 56
Cortical Based Model of Object Recognition: Quantum Hebbian Processing with Neurally Shaped Gabor Wavelets
Chu Kiong Loo, Mitja Perus <ckloo@mmu.edu.my> (Faculty of Engineering and Technology, Multimedia University, Bukit Beruang, Melaka, Malaysia)

This paper presents a computationally implementable cortical-based model of object recognition using quantum associative memory. The neuro-quantum hybrid model incorporates neural processing up to V1 of the visual cortex, whose input arrives from the retina with the intermediation of the lateral geniculate nucleus. The initial image is lifted by the simple cells of V1 to a surface in the rototranslation group, followed by quantum associative processing in V1, together achieving an object-recognition result in V2 and ITC. Results of our simulation of the central quantum-like parts of the bio-model, receiving neurally pre-processed inputs, are presented. This part contains our original simulated storage, by multiple quantum interference, of image-encoding Gabor wavelets, done in a Hebbian way.
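To make the storage scheme concrete, here is a minimal sketch (ours, loosely following Perus's published quantum-like associative memory rather than the exact simulation reported above). Patterns are encoded as unit-modulus complex vectors, stored Hebbian-style as a superposition of outer products, and retrieved by applying the resulting propagator-like kernel to a noisy probe:

```python
import numpy as np
rng = np.random.default_rng(0)

N, P = 256, 5
# P patterns as unit-modulus complex vectors; the phases stand in for
# Gabor-wavelet coefficients in the bio-model (random here for brevity)
patterns = np.exp(1j * rng.uniform(0, 2 * np.pi, (P, N)))

# Hebbian / holographic storage: superposition of outer products, analogous
# to a quantum propagator G(x, x') = sum_k psi_k(x) psi_k*(x')
G = sum(np.outer(p, p.conj()) for p in patterns) / N

# retrieval: a phase-noisy version of pattern 0 is projected back onto memory
probe = patterns[0] * np.exp(1j * rng.normal(0, 0.3, N))
out = G @ probe
overlaps = np.abs(patterns.conj() @ out) / N
print(np.round(overlaps, 2))   # the overlap with pattern 0 dominates
```

Retrieval here is "collapse" onto the stored pattern with the largest overlap; in the full model the Hebbian kernel is built from neurally shaped Gabor wavelets rather than random phases.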
C 57
Why panpsychism falls into a dualistic metaphysical framework
Jaison A. Manjaly <jmanjaly@gmail.com> (Centre for Behavioral and Cognitive Sciences, University of Allahabad, Allahabad, UP, India)

Galen Strawson (2006) claims that real physicalism entails panpsychism. This paper aims to assess the ontological merits and demerits of this claim. I argue that although there are certain explanatory advantages for panpsychism over emergentism, it does not contribute anything novel to strengthen the physicalistic thesis. For the concept of panpsychism is rooted in a metaphysical misconception of 'experience'. I further show that, because of this misconception, panpsychism cannot be held without falling into a dualistic metaphysical framework. Moreover, Strawson's version of panpsychism brings back the burdens of causal interaction and non-Cartesian substance dualism.

C 58
The Subject of Physics
Donald Mender <solzitsky@aol.com> (Psychiatry, Yale University, Rhinebeck, NY)

Physicists today embrace theoretical parsimony and experimental accuracy as guides toward progress in the understanding of natural objects. Yet, beyond these criteria, it is also historically true that large paradigmatic leaps forward at the foundations of physics have repeatedly entailed reevaluations of the human subject's place within nature. In particular, revolutionaries have transformed the physical sciences by knocking the subjective center of orthodox perspectives off balance in some unexpected new way, rather than by merely altering the objects under scrutiny. Copernicus simplified astronomy by uprooting Ptolemaic astronomers from their geocentric ground; Einstein relativized the motion of a light source by democratizing the sensorium of the physical observer; Heisenberg captured the phenomenology of the subatomic microcosm by injecting jitter into an experimenter's act of measurement. Hence it may make sense to look for future foundational advances, for example in the quest to unify quantum mechanics and general relativity, via even more radically "decentered" shifts of the scientific subject's anchor within nature, rather than in more and more baroque revisions of yet undetected physical objects, such as transformations of particles into strings and branes, of classical space-time into a topological weave of "loops", of bosons and fermions into bosinos and sfermions, and of phase transitions into Higgs fields. Instead, a more productive route toward the next synthetic breakthrough in physics may be to decenter the very plurality of the physical observer, beyond the statistical influence of second quantization on connections merely among wavefunctional objects. Specifically, the structure of quantum gravitational operators may morph to include not only the linearly independent individual acts of measurement implied by the superpositional probabilities of path integration, but also fungibly collective and frangibly fragmented measuring agencies instantiated respectively through Bose-Einstein and Fermi-Dirac statistics embedded intrinsically within relationships among the operators themselves. Such a "decentered" perspective on quantum gravitational measurement could offer several potential advantages. First, its locus on the observer's side of the measurement "cut" could replace supersymmetrical partners in the objective domain, offering an explanation if bosinos and sfermions are not found in future high-energy accelerator experiments. Second, provision of differing statistically "inertial" (i.e. equilibrated) reference frames for a diverse multiplicity of observing subjects could obviate any need for spontaneous symmetry breaking as an explanation for departures from invariance, should Higgs particles fail to manifest themselves. Third, nonlinearizing effects on the probability sums of perturbative series could serve as a natural improvement upon renormalization procedures.
Fourth and finally, a "decentering" of pluralities applicable to the quantum-gravitational observer might offer new ways of understanding scientific subjectivity per se in terms of polysemy across a range of collective, individual, and component properties relevant to gravitonic processes in the measuring agent's brain. A hermeneutic expansion of the Penrose-Hameroff hypothesis might thus ensue. Empirical testing of such an enhanced theoretical perspective might follow from detailed predictions of emergent resonances among multiple acts of quantum gravitational measurement.

C 59
The origin of non-locality in consciousness
Ken Mogi <kenmogi@csl.sony.co.jp> (Fundamental Research Laboratory, Sony Computer Science Laboratories, Shinagawa-ku, Tokyo, Japan)

Quantum mechanics, being an inseparable element of reality, naturally enters into the consideration of every phenomenon that occurs in the physical universe. In so far as consciousness is an integral part of reality as we understand it, quantum mechanics needs to be ultimately involved, either directly or indirectly, in its origin. In particular, the apparent non-locality and integrity in the phenomenology of consciousness and its physical correlates is suggestive of a quantum involvement. Here I examine the nature of non-locality in the physical correlates of consciousness and its relation to quantum mechanics. The concept of the neural correlates of consciousness (Crick and Koch 2003), when pursued beyond its currently prevalent role as a practical framework in which to analyze neuropsychological data, logically necessitates a non-trivial emergence through the mutual relation between physical entities and events that constitute cognitive processes in the brain (Mach's principle in perception, Mogi 1999). Since from this standpoint the spatio-temporal histories sustaining the cognitive processes, including, but not necessarily restricted to, the action potentials of the neurons, are the essential correlates of consciousness, non-locality becomes a logical necessity among the ingredients of consciousness. Non-locality has been known to be an essential property of quantum mechanics since its early period (e.g. Einstein, Podolsky, & Rosen 1935). The combination of high temperature and the large number of degrees of freedom involved in brain activities is usually regarded as definitely precluding any possible quantum effects. However, there exist possible routes of quantum involvement in macroscopic and "warm" phenomena such as brain processes. The key is in the fact that macroscopic objects, although ostensibly obeying the equations of Newtonian dynamics, rely on quantum effects for the very stability that makes them classical objects in the first place. Analysis of an information-processing system usually starts from the assumption that its essence can be captured by following those parameters explicitly covarying with the information the system supposedly handles. Quantum mechanical effects hardly enter the picture when only explicitly varying parameters are considered. On the other hand, the implicitly sustaining structures that do not covary with the processed information can contribute to the phenomenal aspects of information, such as qualia and self-awareness. The ubiquitous role of metacognition, the origin of subjective time, and the way spatio-temporally distributed activities are "compressed" into percepts in conscious experience are discussed in the context of the implicit and the explicit in cortical information processing.
References: Einstein, A., Podolsky, B., and Rosen, N. (1935) Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47, 777-780. Mogi, K. (1999) Response selectivity, neuron doctrine, and Mach's principle. In Riegler, A. & Peschl, M. (eds.) Understanding Representation in the Cognitive Sciences. New York: Plenum Press, 127-134. Crick, F. and Koch, C. (2003) A framework for consciousness. Nat. Neurosci. 6, 119-126. Taya, F. and Mogi, K. (2004) The variant and invariant in perception. Forma 19, 25-37.

C 60
Teleological mechanism for the simulation argument
James Nystrom <jnystrom@shepherd.edu> (Computer Science, Math and Engineering, Shepherd University, Shepherdstown, WV)

I begin the talk by providing an overview of Bostrom's now seminal 2003 paper "Are You Living in a Computer Simulation?". Herein I summarize Bostrom's simulation argument (where one possibility is that we are living in a simulation, specifically as part of an ancestor simulation created by a posthuman society). I take issue with Bostrom's functionalist position on Mind and present a modified simulation disjunction (MSD) wherein I utilize a dualism close in concept to a funda-mentalism of the Penrose-Hameroff variety. Here I eschew Bostrom's ancestor simulations as a type of functionalist masquerade. However, I maintain the possibility that we are living in a (complete Universe) simulation created by posthuman simulators (PHS). I note that if we are in a simulation without a functionalist model of Mind, we need structures in the simulation that can support and/or capture Mind activities (e.g., a brain). Here Mind takes on a Gnostic characteristic, in that Mind itself would need to fall down (if you will) from some non-spatio-temporal habitation (a Richard Rorty term), as in the supposed doings of a Gnostic Demiurge. This model of Mind is similar to Plato's Divine Mind or Huxley's Mind-at-Large, and similar to Penrose's use of an underlying Platonic reality (a so-called basic level of Universe). In the third (and last) part of the talk I adopt the assumption that we are living in a complete Universe simulation. I posit a query concerning how our supposed PHS could implement algorithmic control of a Universe, and I need to provide several background asides before answering it. The first aside is (I) a discussion of Universe as a computation in terms of energy interactions, which takes the fundamental activity of Universe to be operating near Planck lengths and Planck times. I introduce the terms Negative Universe (a R. Buckminster Fuller term) and reality flux. Here Negative Universe is akin to Penrose's Platonic and Mental worlds, and reality flux describes the ensembles of virtual photons and anti-particles, some of which seemingly pass in and out of existence. Another aside (II) compares causal and teleological effects. I use physically based arguments and suggest that the typically arbitrary adoption of the causal viewpoint for most processes in Universe is in fact an observation selection effect resulting from an immersion in a forward progression of time. I also (III) review the classic dualism (of mind and matter) and compare this to Penrose-Hameroff funda-mentalism. As a result of this aside, I take Mind to be something that resides partially in Negative Universe. The last aside (IV) presents Gravity as an instantaneous most economical relationship of all energy events (as R. Buckminster Fuller did), and this then places the Gravity (calculation/update) in Negative Universe.
I can now answer the query and propose mechanisms by which PHS could computationally steer a Universe (such as ours). Since Gravity and Mind have both been surmised to contain a non-spatio-temporal essence (in Negative Universe), I suggest that PHS could in fact use both Gravity and Mind as teleological control mechanisms for a Universe simulation.

C 61
Entropy Reversal and Quantum-Like Coherence in the Brain
Alfredo Pereira Jr., Roberson S. Polli <apj@ibb.unesp.br> (State University of São Paulo (UNESP), Botucatu, São Paulo, Brasil)

Quantum-like macro-state coherence can be generated in the living brain by means of molecular mechanisms that induce local entropy reversal (at the cost of increasing environmental entropy). The idea that entropy reversal can locally increase (bio)physical organization derives from conjectures by Maxwell, Schrödinger and Monod. Contemporary models of the Ion-Trap Quantum Computer (ITQC) can be viewed as belonging to the "Maxwell Demon" family of systems, since: a) the movements of the ions are controlled to produce physical organization; b) external energy (the laser) is used to transfer information to the system; and c) the system's activity (phonon modes related to spin values of different electronic configurations) supports the performance of reversible operations. Analogously, in the living brain, biological mechanisms such as neuronal membrane channel gating control the movement of ions. Astroglial cells, being responsible for the distribution of free energy (in the form of glucose) from arterial blood to neurons, and actively participating in tripartite synapses, may also be involved in an entropy reversal process. We propose that calcium ion populations trapped in the astrocytic syncytium, while interacting with neuronal electric fields, operate as a large-scale ITQC, with an architecture similar to the model presented by Kielpinski, Monroe and Wineland (2002). On the one hand, contemporary schemes for ITQC with hot ions (Poyatos, Cirac and Zoller, 1998; Molmer and Sorensen, 1999; Milburn, Schneider and James, 2000; Kielpinski et al., 2000) reveal that multimodal phonon patterns compose complex coherent states. On the other hand, empirical results from brain science indicate that astrocytes participate in sustaining neuronal excitation (Haydon and Carmignoto, 2006) and in the onset of oscillatory synchrony (Fellin et al., 2004), both functions closely related to conscious processing. Calcium waves in the syncytium are also a medium for large-scale integration (Robertson, 2002). This integration possibly includes inter-hemispheric communication by means of the cerebrospinal fluid (a possibility based on the proposal made by Glassey, 2001). In conclusion, we suggest that the brain's hot, wet and noisy ITQC, composed of a calcium ion population trapped in astrocytes and interacting with neuronal electric fields, can embody complex patterns that compose the contents of consciousness. Fellin, T., et al. (2004) Neuronal synchrony mediated by astrocytic glutamate through activation of extrasynaptic NMDA receptors. Neuron 43(5): 729-43. Glassey, G. (2001) The neuroglial cell-neuropeptide highway. Published online: http://www.healtouch.com/csft/highway.html. Haydon, P.G., Carmignoto, G. (2006) Astrocyte control of synaptic transmission and neurovascular coupling. Physiol. Rev. 86(3): 1009-31. Kielpinski, D., et al. (2000) Sympathetic cooling of trapped ions for quantum logic. Physical Review A 61: 032310, 1-8.
Kielpinski, D., Monroe, C., Wineland, D.J. (2002) Architecture for a large-scale ion-trap quantum computer. Nature 417: 709-711. Milburn, G.J., Schneider, S., James, D.F.V. (2000) Ion trap quantum computing with warm ions. Fortschritte der Physik 48: 801-810. Molmer, K., Sorensen, A. (1999) Multiparticle entanglement of hot trapped ions. Physical Review Letters 82(9): 1835-1838. Poyatos, J.F., Cirac, J.I., Zoller, P. (1998) Quantum gates with "hot" trapped ions. Physical Review Letters 81: 1322-1325. Robertson, J.M. (2002) The astrocentric hypothesis: proposed role of astrocytes in consciousness and memory formation. Journal of Physiology-Paris 96: 251-255.

C 62
Neurons react to ultraweak electromagnetic fields
Rita Pizzi, D. Rossetti, G. Cino, A.L. Vescovi, W. Baer <pizzi@dti.unimi.it> (Department of Information Technologies, University of Milan, Crema, CR, Italy)

Since 2002 our group has been concerned with the direct acquisition of signals from cultured neurons. During the first experiments we noticed anomalies in the electrical signals coming from separate and isolated neural cultures, suggesting that either neurons were extremely sensitive to classical electromagnetic stimulation or some form of non-classical communication between isolated systems was occurring. We improved our experimental setup in order to further explore this phenomenon and eliminate possible experimental errors that might bias our results. Our latest experiment consisted of three MEA (microelectrode array) basins, one filled with human neurons and the others with control liquids. Each basin was in turn irradiated with a laser beam while the other basins were shielded by means of a double opaque Faraday cage. In all cases we found a sharp spike in the electrical activity coming from the neural basin, simultaneous with the laser emission, but no activity was present in the two control basins, with or without shielding. To eliminate the possibility of electromagnetic coupling, the hardware system was designed with special electronic devices and photo-couplers to avoid any kind of interference between circuits and MEAs. Several tests were performed by means of both an oscilloscope and a spectrum analyzer to ascertain the absence of cross-talk and induction phenomena. During one of the experiments we substituted the laser with a dummy load in order to simulate a current absorption equivalent to the one generated by the laser, and we found that the same peak was present. Upon further investigation we concluded that the phenomenon could be due to an electromagnetic field coming from the laser supply circuit that was too weak to be detectable with our measuring instruments. Neurons appear to receive and amplify an electromagnetic spike whose value through the air, before reaching the Faraday shielding, is less than 70 microgauss and under the sensitivity of our oscilloscope (2 mV). It must be stressed that in order to cause a neuron spike using direct electrical stimulation inside the cell, a 30 mV pulse is necessary. The value of the electric and magnetic field under the double Faraday cage is below the sensitivity of our instrumentation but is estimated to be at least one order of magnitude smaller. We believe the neurons are the active receiving element because the MEA control circuit and the activation circuit are completely separated, the MEA basins are connected to ground, their shape is not suitable to act as an antenna, and the spikes observed in the neural basin are never present in the other control basins. Though the exact mechanism for the observed neural response has not been identified, we can at the moment hypothesize that neurons act as antennas for extremely weak electromagnetic fields. The neural reactivity may be due to the presence of microtubules in their cellular structure. Microtubules are structurally similar to carbon nanotubes, whose tubular shape makes them natural cavity antennas. New analyses with more sensitive instruments, and a mu-metal cage to exclude magnetic fields, are underway to further investigate the nature of this extreme neural sensitivity.
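A back-of-envelope aside (ours, purely illustrative, with assumed dimensions): if a microtubule segment of length L did behave as a half-wave cavity antenna, its fundamental resonance would fall at f = c/(2L).

```python
# Assumed, illustrative segment lengths; microtubules in vivo range from
# fractions of a micrometre to many micrometres.
c = 2.998e8                      # speed of light in vacuum, m/s
for L in (1e-6, 10e-6):
    print(f"L = {L:.0e} m  ->  f = {c / (2 * L):.1e} Hz")
# ~1.5e14 Hz and ~1.5e13 Hz: optical-to-infrared frequencies, closer to the
# laser stimulus used in the experiment than to conventional neural signals.
```

These numbers only fix orders of magnitude; whether the in-vivo dielectric environment permits any such resonance is part of what the authors' planned follow-up measurements would have to probe.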
C 63
The Mind's Image of the World, the Classical Physics of Motion, and the Quantum Physics of the Brain
Arkady Plotnitsky <plotnits@purdue.edu> (Theory and Cultural Studies, Purdue University, W. Lafayette, Indiana)

This paper takes as its point of departure Alain Berthoz's argument for the significance of physical movement in our understanding of the brain's functioning. According to Berthoz, perception is not only an interpretation of sensory messages but also an internal simulation of action, thereby making perception and action irreducibly intertwined. The fact that every moving body must follow the laws of classical mechanics compels the brain to invent strategies for making complex mechanical calculations and, hence, to internalize the basic laws of geometry and kinematics. Indeed, the whole conceptual structure of, first, Euclidean geometry and then of classical physics (including kinematics), or our physical-mathematical image of the world, may be seen as arising from this classical-like phenomenal image (a thought image) created by the brain and its capacities for both remembering the past and predicting the future. Berthoz also links the brain's functioning, as grounded in motion, to the Bayesian theory of probability. The latter deals with predictions concerning the outcomes of individual events on the basis of the available information and, hence, conceptually on memory, rather than on statistical inferences based on the frequencies of repeated events. Berthoz speaks of "a memory for prediction." Thus, our interaction with the world is defined by taking chances, and our success in the world by taking our chances well. Berthoz argues that, by focusing primarily on the connectivities within the brain, current neurobiological and neurophysiological theories by and large fail to take into account these motion- and environment-oriented workings of the brain, which he believes to be primary and fundamental to its development and functioning, or evolutionary emergence. Our biological constitution appears to be especially suited for creating the classical image of the world, and we succeed in the world by working with this image. This, however, does not mean that either the world or the brain need themselves be seen as classical physical systems. The ultimate aim of this paper is to explore potential interconnections between Berthoz's theory and Umezawa's and Vitiello's quantum-theoretical approaches to the brain, based on the understanding of the brain as a dissipative quantum system, continuously interactive with its environment, the world. Although along somewhat different lines, both Berthoz and Vitiello argue that the brain creates a certain image of the world in our mind. By so doing, the brain enables the body to interact with and to live in the actual world, whose ultimate constitution appears to be quantum and may, ultimately, be beyond the brain's (classical) image of it and possibly beyond any conception our mind can form. The question broached by this paper is why the physical machinery of the brain that creates the classical physical image of the world, in order to interact with the actual world (most especially probabilistically, by taking our chances well), might need to be physically quantum. In other words, the question is why the physically quantum doubling of the world and the brain may be necessary to create the classical image of the world and of the mind itself.
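The Bayesian "memory for prediction" that Berthoz invokes can be reduced to a one-step estimator (our sketch, not Berthoz's own model): a prediction carried by memory is fused with a noisy sensation, each weighted by its precision.

```python
# Precision-weighted fusion of a remembered prediction with a new observation
# (a single Kalman-filter update; all numbers are made up for illustration).
mu_prior, var_prior = 10.0, 4.0   # predicted position from the internal model
obs, var_obs = 12.0, 1.0          # current noisy sensory measurement

k = var_prior / (var_prior + var_obs)       # gain: trust the sharper source more
mu_post = mu_prior + k * (obs - mu_prior)   # 11.6: pulled toward the sensation
var_post = (1 - k) * var_prior              # 0.8: uncertainty always shrinks
print(mu_post, var_post)
```

"Taking our chances well" then has a literal reading: the estimator that weights memory and sensation by their reliabilities is the one that minimizes expected error.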
C 64
Human Biocatalysis and Human Entanglement: How to Fill the Gap between Quantum and Social Sciences?
Massimo Pregnolato, Paola Zizzi <maxp@pbl.unipv.it> (Pharmaceutical Chemistry, University of Pavia, Pavia, Italy)

In complexity science, entanglement is what exists before order emerges. The role of quantum entanglement as the precursor to emergent order is much discussed in physics [1]. For instance, Gell-Mann [2] defines an entanglement field as a 'fine-grained structure of paired histories among quantum states'. The notion of the primordial pool which existed before the origin of life is also much discussed in biology [3]. According to Christopher Davia [4], the evolution of life is the evolution of catalysis. Indeed, the biosphere, taken as a whole, may be considered a macroscopic process of catalysis. From the evolution of catalysis, from specific to non-specific, Man has emerged, the most non-specific catalyst on Earth. McKelvey has found that an understanding of entanglement from quantum theory can throw useful light on the nature of ties among people [5,6] and their impact on emergent order in organisations. In terms of human behaviour, he explained that a high correlation between the paired histories of people would mean they think in similar ways; a low correlation would mean they go in different directions. We define a Human Biocatalyst (HB) as a human being able to catalyze human relationships in a selective way. An HB selects people with high relative affinity and catalyzes reactions between them through communication. The product of these interactions could be a tangible, entanglement-like human-human bond. Dean Radin has done extensive work on the idea of human entanglement. He describes experiments that show a non-local connection between human beings when they 'think' of each other [7]. Entanglement, when included in quantum games [8], makes (somehow) everybody win. Entangled quantum strategies are such that all players cooperate, and classical egoism (destructive) is replaced by quantum altruism (constructive). Entanglement might explain some forms of telepathy, actually quantum pseudo-telepathy [9], between 'quantum-minded' players who play a quantum game. We think that Basic logic [10] could be a good starting point towards a deeper understanding of the quantum world, also because it is the only logic which can accommodate the new logical connective @ = 'entanglement' [11]. One of our dearest hopes is that Basic logic, once applied to the study of the deepest levels of the unconscious, might be useful for the treatment of some mental diseases, like schizophrenia, which remain wayward with respect to usual psychotherapy. The Quantumbionet will be presented. The network will include well-known intellectuals, teachers and laboratories supporting the development of the sciences, and it is aimed at playing an active role on the international stage for human health and wellness enhancement. The network will be the bridge between science and human behaviour.
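The quantum-game claim has a precise, standard backing in the Eisert-Wilkens-Lewenstein (EWL) quantum Prisoner's Dilemma, where maximal entanglement makes mutual cooperation an equilibrium. Below is a self-contained sketch (ours; it follows the standard EWL construction and payoff matrix and is not drawn from the authors' reference [8]):

```python
import numpy as np

I2 = np.eye(2)
D = np.array([[0, 1], [-1, 0]])      # "defect" unitary
Q = np.array([[1j, 0], [0, -1j]])    # EWL's "miracle" quantum strategy
C = I2                               # "cooperate"

J = (np.kron(I2, I2) + 1j * np.kron(D, D)) / np.sqrt(2)  # maximal entangler
payoff = {0: (3, 3), 1: (0, 5), 2: (5, 0), 3: (1, 1)}    # PD payoffs (|00>..|11>)

def play(UA, UB):
    psi = J.conj().T @ np.kron(UA, UB) @ J @ np.array([1, 0, 0, 0])
    probs = np.abs(psi) ** 2
    pa = sum(p * payoff[k][0] for k, p in enumerate(probs))
    pb = sum(p * payoff[k][1] for k, p in enumerate(probs))
    return round(pa, 3), round(pb, 3)

print(play(C, C), play(D, D))   # (3,3) (1,1): the classical outcomes survive
print(play(Q, Q))               # (3,3): mutual "quantum altruism"
print(play(D, Q))               # (0,5): defecting against Q no longer pays
```

With the entangling gate J in place, Q against Q reproduces the cooperative payoff (3, 3), while a defector facing Q collects 0 rather than the classical temptation payoff of 5; that is the concrete sense in which "quantum altruism" replaces classical egoism.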
C 65
Whitehead's tri-modal theory of perception in the light of empirical research
Franz Riffert <Franz.Riffert@sbg.ac.at> (Education, University of Salzburg, Salzburg, Austria)

Whitehead developed a bold theory of perception based on the concepts of his process philosophy (Whitehead 1978). According to him, it is one of the shortcomings of modern philosophy not to shed any light on the sciences. In elaborating his theory of perception he showed how such a fertile interchange between the sciences (psychology) and philosophy (process metaphysics) might be possible and what new perspectives follow from it. Whitehead's theory of perception is tri-modal, i.e. there are three different modes of perception which are related "genetically". The most basic and most primitive of these three modes is 'causal efficacy', which is a form of immediate and rich, albeit vague, grasping of one's surroundings. It is best conceived in neuro-physiological and/or sensory-motor terms and connects the perceiver directly with his or her environment. Based on this primitive mode, and elaborated by abstraction and attention, the second mode of perception is developed: the mode of 'presentational immediacy'. In this more advanced mode of perception certain aspects of the rich content of the mode of 'causal efficacy' are abstracted and highlighted. These specific aspects are given in a clear and distinct way as sensa, such as exact spatial and temporal relations, distinct forms and colours. The most advanced mode of perception, the mode of our everyday perception, is generated by integrating the two more primitive perceptive modes; one of these acts as symbol while the other takes the role of the designate, and Whitehead therefore termed this mode "symbolic reference". In this mode the feature of consciousness is introduced since, according to Whitehead, it is the subjective feeling of the contrast between what might be (symbol) and what is in fact the case (designate). Some of the features of Whitehead's philosophical theory of perception can be tested empirically. First, one may look for evidence in the neurosciences as well as in psychology in favour of its tri-modal character. Second, the general tendency of perception from vague to distinct apprehension, which finally is accompanied by consciousness, can be tested against the body of research results in the psychology of perception. Finally, Whitehead's claim that a primitive mode of perception does exist can be examined because he has described the characteristics of this perceptive mode; they can be compared with psychological evidence. Microgenetic (Werner 1956; Bachmann 2001) and percept-genetic research (Smith 2000) deals with perception in much the same way as Whitehead. Results confirm Whitehead's position concerning a general tendency from vague to distinct information processing in perception. The tri-modal character of Whitehead's theory finds support in Anthony Marcel's well-known tachistoscope experiments, presented in his paper 'Conscious and Unconscious Perception: Experiments on Visual Masking and Word Recognition' (1983). Victor Rosenthal, in a microgenetic experiment on reading (2005), speculates about two distinct neuronal pathways in the brain: one processing available information quickly but crudely, the other processing information in detail but much more slowly. This also to some extent supports Whitehead's position.
C 66
Dynamic Geometry, Bayesian approach to Brain function and Computability
Sisir Roy <sisir@isical.ac.in> (Physics and Applied Mathematics, Indian Statistical Institute, Kolkata, W.B., India)

Recently, the present author, along with his collaborators, introduced the concept of dynamic geometry towards understanding brain function. This is based on the idea of functional geometry as proposed by Pellionisz and Llinas. This interpretation assumes that the relation between the brain and the external world is determined by the ability of the central nervous system (CNS) to construct an internal model of the external world using an interactive geometrical relationship between sensory and motor expression. This approach opened new vistas not only in brain research but also in understanding the foundations of geometry itself. The approach, named tensor network theory, is sufficiently rich to allow specific computational modelling, and it addressed the issue of prediction at the neuronal level, based on Taylor series expansion properties of the system, as a basic property of brain function. It was actually proposed that the evolutionary realm is the backbone for the development of an internal functional space that, while being purely representational, can interact successfully with the totally different world of the so-called "external reality". Now if the internal or functional space is endowed with stochastic metric tensor properties, then there will be a dynamic correspondence between events in the external world and their specification in the internal space. We shall call this dynamic geometry, since the minimal time resolution of the brain, associated with 40 Hz oscillations of neurons and their network dynamics, is considered to be responsible for recognizing external events and generating the concept of simultaneity. In this framework, mindness is considered as one of several global physiological computational states (functional states) that the brain can generate. Since computation and information processing are accepted terms in neuroscience, it is necessary to clarify the meaning of computation and of the information measure. The functional states are considered to be internal states related to the metric property associated with the CNS; in fact, they are generated by intrinsic properties of neurons. This indicates that Bayesian decision theory and Fisher information might play significant roles in understanding brain function. It is found that the CNS does not so much compute as optimize behaviours. This optimization of behaviours is similar to the "computation capacity" of a digital machine, as proposed by Toffoli. This perspective will shed new light on the issue of computability vs. non-computability of the brain.
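To indicate why Fisher information is a natural tool here, the sketch below (ours, purely illustrative) checks the Cramér-Rao logic numerically: the Fisher information of a set of noisy internal responses bounds the variance of any unbiased estimate of the external stimulus.

```python
import numpy as np
rng = np.random.default_rng(1)

# n noisy internal responses r_i ~ N(s, sigma^2) encode a stimulus s
s_true, sigma, n = 2.0, 0.5, 20
fisher = n / sigma**2                # Fisher information of the n responses
print("Cramer-Rao bound:", 1 / fisher)                      # 0.0125

trials = 20000
est = rng.normal(s_true, sigma, (trials, n)).mean(axis=1)   # optimal readout
print("empirical variance:", round(est.var(), 4))           # ~0.0125: bound met
```

An "optimizing" rather than "computing" CNS, in Roy's sense, would be one whose readout approaches this bound given its metric (noise) structure, rather than one executing a particular symbolic algorithm.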
C 67
Neural Correlates and Advanced Physics
David Scharf <dscharf108@gmail.com> (Physics, Maharishi University of Management, Fairfield, IA)

Although researchers are daily uncovering new information about the brain, from an increasingly exhaustive mapping of its neural pathways to a more thorough and detailed understanding of the correlations with conscious experience and cognitive faculties, neuroscience at its current stage of development is not yet in a position to provide a comprehensive analysis of the microphysical underpinnings of conscious experience. The program for the neural correlates of consciousness does not claim to provide such a comprehensive microanalysis; instead, it offers to outline a global view of both the broad features and the logical constraints of such a microanalysis. This program embodies two explicit assumptions: (1) that conscious experience supervenes on its neural basis, where supervenience implies that if the physical basis is present, then the corresponding conscious experience will occur, and (2) that conscious experience is dependent on the physical. This second assumption casts the neural correlates program in expressly physicalistic terms. Also, a third, usually unstated, assumption is not harmless: discussions of the neural correlates of consciousness take for granted that (3) these correlates are governed by classical physics, i.e. that any effects of advanced physics will be insignificant, will average out, or will otherwise not affect the brain's determination of conscious experience. Unfortunately for those who take this route, assumptions (2) and (3) lock the researcher in a pernicious dilemma. Let's suppose for a moment that these radical physicalists were right. Then a particular configuration of neurons firing (or other correlates) would determine any given conscious experience or mental activity. Naturally, this presents a burden of explanation: given the dependency on the physical, how is it that mental content is internally coherent and intelligible, and how is it that (ordinarily) our mental representations accurately reflect the external world? A pointed way to frame the dilemma is to note that the logical and scientific train of reasoning leading to the neural correlates program itself would be determined by the underlying neural correlates, thus calling into question its own justification. This is a bind similar to the one Hilary Putnam and others identified as arising from brain-in-a-vat scenarios, and which led to Putnam's wholesale rejection of the neural correlates program, with its mind-brain dependence relation. But, as we see things, there are better alternatives to be had than Putnam's conclusion. Successfully explaining, or at the very least allowing for, the internal coherence and external reliability of consciousness, in the context of a neural correlates program, fundamentally depends on the parameters of the specific type of physicalism we adopt. This is where advanced physics may come to the rescue. Indeed, certain aspects of consciousness that are incompatible with a physicalism based on classical physics may be not only consistent with, but explainable in terms of, a physicalism grounded in advanced physics.

C 68
Quantum Theory, the Dream Metaphor and the Meta-Brain Model
Thomas Schumann <tschuman@calpoly.edu> (Physics, California Polytechnic State University, San Luis Obispo, California)

We argue from the quantum double-slit experiment, from the evolution of emotions and from other issues that the mental world influences the physical just as the physical influences the mental. From an analogy with electromagnetism (a changing electric field produces a changing magnetic field and vice versa), we argue that the mental and physical worlds are really one entity. From this comes the dream metaphor, in which the mental and the physical are the same; this fits the quantum theory of measurement, in which an observable of a system becomes "real" only when it is observed (the system is no longer in a superposition of possible values for the observable).
With the associated model of the "meta-brain" we derive intuitively the disturbance of a system when it is observed and the non-commutation of observables, and, using the Einstein-Podolsky-Rosen situation, we derive the observer-dependent nature of the wave function. The wave function is mental and thus physical as well. We discuss, in the context of the dream metaphor, the "filling in of history by observation" associated with Wheeler's "delayed choice" thought experiment. We require a "recursion principle" by which the meta-brain produces the dreams or streams of consciousness which produce brains which produce the streams of consciousness. The meta-brain contains the non-local hidden variables which determine the content of the "dreams" or streams of consciousness. We discuss the anthropic principle within the "recursion principle" and eliminate from the multiverse all (dream) universes which cannot produce a brain. We also consider the concept of a wave function for an entire universe to be meaningless in this context, as an individual cannot observe the whole universe. That results, at least in part, from the limit on the speed of information transfer (the speed of light).

C 69
Overlap with the different QUA
Francis Schwanauer <franz@gw-in.usm.maine.edu> (Philosophy, USM, Portland, Maine)

ABSTRACT: Renewed efforts to gauge the informative aspect of quantum effects have finally identified the graviton and the photon as the lowest promulgative degree of about-ness in quantum interference. What makes the "built-in proof" of these rest-mass-less particles convincingly informative is the fact of their being shared by overlapping parent particles. This most recently detected shortcut between presentation and representation, quantum-inference and quantum-causation, or sameness between showing and telling, reduces the new grammar of quantum interaction to such elemental laws as acceptable proximity, limits to collapse and/or expansion between the sufficiently "different" qua the other, and the elegant sharing and seamless transference of energy between spatial and temporal neighborhoods respectively. This, however, turns inertial frames into the axiomatic monopoly of consciousness, which dominates not only what implies in quantum-inference but also what conditions in quantum-causation. If, therefore, conscious quantum-interference (qua quantum-information, transfer, etc.) holds, then the grip of consciousness becomes no less pervasive than that of a gravitational field on both the included and the neighboring phenomena. Though still proportional or restricted to its inertial frame as parent particle or self-inclusive superposition, it becomes the active agent behind the manipulation of its representational apparatus and the authentic origin of synchrony. This is shown both by its capacity to hurl never fewer than two such items as positive-mass particles, in the form of classical waves, in different directions within the two halves of its very brain at the speed of light (cp. the Yang-Mills theory), and by its ability to coordinate unheard-of extremes, notwithstanding contrary alternatives (cp. Feynman's quantum weirdness), for a final choice and decision procedure on the promulgation of matter and/or anti-matter to suit its long-run purposes. In short, if quantum-coherence between the sufficiently "different" by way of overlap holds, so will quantum-interference, together with its more or less distant echo, the synthetic nature of quantum effects.
C 70
Causality, Randomness, and Free Will
Richard Shoup <shoup@boundary.org> (Boundary Institute, Saratoga, CA)

The experience of free will has often been regarded as a hallmark of consciousness, yet its meaning and very existence have been debated for millennia. In this talk, we explore the complex relationship between free will, determinism, causality (both forward and backward), and quantum randomness. The latter, a deep and central assumption in quantum theory, is associated with measurement interactions. From an analysis based on quantum entropy, it is proposed that quantum measurement is properly understood as a unitary three-way interaction, with no collapse, no fundamental randomness, and no barrier to backward influence. Experiments with quantum-random devices suggest that retro-causal effects are seen frequently in various forms, and can be shown to explain some anomalous phenomena such as clairvoyance and precognition. It is argued that all interactions are indeed unitary, reversible, and thus deterministic, but that large-number effects give a persistent illusion nearly equivalent to free will.

C 71
Can a Computer Have a Mind? Non-computability of Consciousness
Daegene Song <dsong@kias.re.kr> (School of Computational Sciences, Korea Institute for Advanced Study, Seoul, Korea)

Penrose has suggested that there may be a non-computable aspect in consciousness at the fundamental level, as in Gödel's incompleteness theorem or Turing's halting problem. It is shown that, as in Penrose's suggestion, consciousness in the framework of quantum computation yields a physical example of the non-computable halting problem. The assumption of the existence of a quantum halting machine leads to a contradiction when a vector representing the observer's reference frame is also the system to be unitarily evolved, i.e. consciousness in quantum language, in both the Schrödinger and Heisenberg pictures.
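For readers who have not seen it, the classical diagonal argument that Song's quantum construction parallels fits in a few lines of Python (a background sketch, ours; `halts` is the hypothetical total decider whose impossibility is being demonstrated, not a real function):

```python
def halts(f, x):
    """Hypothetical total decider: would return True iff f(x) halts.
    The diagonal argument below shows no such function can exist."""
    raise NotImplementedError   # no correct implementation is possible

def paradox(f):
    if halts(f, f):   # ask the oracle whether f halts on itself...
        while True:   # ...and then do the opposite: loop forever
            pass
    return "halted"

# If halts were total and correct, halts(paradox, paradox) would have to be
# True exactly when it is False: a contradiction. Song's abstract lifts this
# diagonalization to a "quantum halting machine" acting on the observer's
# own reference-frame vector.
```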
C 72
Fundamental Biological Quantum Measurement Processes
Michael Steiner, Uzi Awret, R. W. Rendell, Sisir Roy <mjsasdf@yahoo.com> (Center for Quantum Studies, George Mason University, Fairfax, VA)

Wigner, von Neumann and others believed that consciousness and quantum state evolution are related. While this is a difficult open question, a simpler question is whether or not a process other than Schrödinger's equation is involved in basic biological processes. It is well known that use of Schrödinger's equation alone to treat interactions generally results in non-classical superpositions. Yet nature has managed to provide recognition processes, as well as to store information, in ways that appear completely classical, that is, without superposition. Hence it seems reasonable to examine whether or not certain biological processes are somehow associated with the measurement process. We explore the nature of the dynamic transition from Schrödinger-only (i.e. wave-only) evolution to the regime where one gets measurement, or collapse, and we suppose that the biological domain is where the collapse occurs. We examine biological macromolecules, which enable the creation of biological records and the finalizing of biological recognition processes, and we are especially interested in macromolecules and systems designed to function close to the border separating the two domains. We calculate the threshold for several basic biological processes and compare it to the lower bound TL obtained by canvassing current quantum experiments on mesoscopic systems. It is argued that most fundamental biological processes require recognition processes that must be inherently based on the measurement process. That is, nature has designed its systems taking into account the size or energy needed for measurement to occur. If this is the case, then we should be able to learn about the characteristics of measurement by examining biological systems. We examine whether there is biological evidence that a threshold exists in ΔEΔX > T. Several fundamental biological processes are considered. The first is the manner in which protein chains are recognized. One of the basic and ancient elements common to all three domains of life (Eukarya, Bacteria, and Archaea) is the signal recognition particle (SRP). The SRP has basic functionality that would be consistent with the measurement process: it recognizes and binds to a signal sequence carried by the ribosome and then guides it to the rough endoplasmic reticulum (ER). The binding energies usually have three types of contributions, i.e. electrostatic interactions, hydrogen bonds, and induced dipole-dipole (van der Waals) interactions. Other processes examined include high-affinity protein interactions and protein-RNA complexes that are crucial to biological recognition and record creation. Antibody-substrate and p-MHC-TCR complexes, hormones and their corresponding receptors, and interaction hotspots will also be examined. We also review the current status of mesoscopic physics and show where experiments that have verified Schrödinger evolution lie in terms of T. Most experiments that have been conducted actually have a small ΔEΔX; for example, superconducting SQUID systems typically have a large ΔX but a very small ΔE. Such experiments give us a lower bound TL on the threshold, and based on the most up-to-date experiments we provide an estimate of TL. We will see that a given threshold can describe quite well very different physical situations, such as ionization, the Rydberg atom, and nuclear processes.
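To give the product ΔEΔX some dimensional footing (our own illustrative numbers, not the authors' calculated threshold): it carries units of J·m, so ħc is one natural yardstick.

```python
# Illustrative orders of magnitude only; the energy and displacement below are
# assumed, generic values for a molecular recognition event, not figures
# taken from the abstract.
hbar, c, eV = 1.054571817e-34, 2.99792458e8, 1.602176634e-19
print(f"hbar*c   = {hbar * c:.2e} J*m")   # ~3.16e-26 J*m

# a hydrogen-bond-scale energy (~0.1 eV) over an atomic displacement (~1 angstrom)
dE, dX = 0.1 * eV, 1e-10
print(f"dE * dX  = {dE * dX:.2e} J*m")    # ~1.6e-30 J*m
```

Whether such biological values sit above or below the experimentally surveyed bound TL is precisely the comparison the abstract proposes; the numbers here only fix the orders of magnitude involved.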
C 73
Why meaning is the harder matter: a Boh(e)mian anthropology
Koen Stroeken <koen.stroeken@ant.kuleuven.be> (Anthropology, University of Leuven, Huldenberg, Belgium)

Mainstream anthropology has kept itself outside the mind/matter debate, just as most neuroscientists have, albeit for the opposite reason. Students of culture feel hopelessly dualistic when confronted with the dominant materialism that recasts the debate as a mechanistic challenge, that of neurocomputation, which attributes to the brain a sort of 'immaculate conception' of consciousness. If a hundred years of research on cultures has taught us anything, it is that the principle of natural selection can describe the function and survival of ideas (Atran, Sperber) but not their content and origin, that is, the semantic stuff selected. Meanings appear to be universally shared despite our brains being unique individual constellations of absolutely separate matter. That is why, in practice, ethnographers treat human minds as selections from a common consciousness. Defying both materialism and Cartesian dualism, the implication is that subjective experience arises not from 'mother nature' alone, but from interacting with another source of causation, 'father culture' so to speak. This is another way of saying, with Bohm, that matter does not equal consciousness and that we need meaning, a second, moulding (hence harder) type of matter, to bridge both. From an anthropologist's perspective, the best candidate for an interdisciplinary paradigm of thought indeed seems to be Bohm's solution to the quantum riddle: our classical spacetime, the explicate order, selects from an implicate order of potentialities. A cultural selection from the quantum multiverse constitutes the particular spacetime that is our universe, and thus consistently determines what humans can be conscious of and measure. This measured content of consciousness has been experimentally shown to be non-local and quantum entangled (Aspect, Wheeler). What does this mean in a cultural reading of the experiments? The fact of our conscious perception knowing the future betrays our physical belonging to a more encompassing reality, the multiverse, of which our (Einsteinian) spacetime is a selection, entirely completed as selections are. Our mind stands, as it were, at the edge of spacetime, itself unfortunately (as Bohm remarked) the only world we can think. Humans are bohemians in their world. I conclude more concretely with data on spirit possession which illustrate the exceptional parasympathetic nervous system of the human species. Naturally selected to suspend homeostatic reactions and to withstand emotions, our body (not just the brain) managed to use the binary principle of meaning systems (inclusion/exclusion) to further control homeostasis (intrusion/synchrony) and become conscious of more. In biological terms consciousness would thus be the by-product arising during this suspension and control, for which I tentatively consider a number of macro-neural correlates.

C 74
Consciousness and the measurement problem: A possible objective resolution
Fred Thaheld <fthaheld@directcon.net> (Folsom, Calif.)

A recent mathematical analysis of the measurement problem by Adler (1), from the standpoint of Ghirardi's (2,3) Continuous Spontaneous Localization (CSL) theory, reveals that collapse of the wave function takes place in the rod cells of the retina in an objective fashion following amplification of the signal, rather than in a subjective fashion (as had been proposed by Ghirardi et al.) in the brain, mind or consciousness. This analysis is in agreement with the positions taken by Shimony (4) and Thaheld (5) that this event takes place in the rod cells of the retina, but at an earlier stage, prior to amplification, involving the conformational change of the rhodopsin molecule. It is of historical interest to note here that both Wigner (6) (later in life) and Dirac (7) also espoused an objective process. Additional supporting evidence for an objective approach can be found by perusing rhodopsin-molecule and retinal-rod-cell schematics (8), which graphically illustrate why collapse has to take place in this fashion. This can also be subjected to two different empirical approaches: one involving excised retinal tissue mounted on a microelectrode array together with superposed photon states (9), the other molecular interferometry (10,11) involving matter-wave diffraction, where a "collapsing" wave packet will lead to a suppression of interference. This proposed solution to the seven-decades-old dilemma of the measurement problem, calling for an actual collapse mechanism, requires a modification of the Schrödinger equation to include nonlinear discontinuous changes.
This will then allow one to address one or more related issues, such as the Heisenberg 'cut' between the quantum and classical worlds, the validity of Everett's 'many worlds' theory (12), the possibility of controllable superluminal communication (13), the prospect that any living system, with or without eyes, might possess this same collapse ability, and the maintenance of entanglement after repeated measurements, with interesting implications for Schrödinger's 'cat' concept, finally leading to a new approach to the SETI issue via astrobiological nonlocality at the cosmological level (14). References: 1. Adler, S., 2006. quant-ph/0605072. 2. Aicardi, F., Borsellino, J., Ghirardi, G.C., Grassi, R., 1991. Found. Phys. Lett. 4, 109. 3. Ghirardi, G.C., 1999. quant-ph/9810028. 4. Shimony, A., 1998. Comments on Leggett's "Macroscopic Realism", in: Quantum measurement: Beyond paradox. R.A. Healey, G. Hellman, eds. Univ. Minnesota, Minneapolis. 5. Thaheld, F.H., 2005. quant-ph/0509042. 6. Wigner, E., 1999. in: Essay Review: Wigner's view of physical reality. M. Esfeld. Stud. Hist. Philos. Mod. Phys. 30B, 145. 7. Dirac, P.A.M., 1930. The principles of quantum mechanics. Clarendon, Oxford. 8. Kandel, E.R., Schwartz, J.H., Jessell, T.M., 2000. Principles of neural science. 4th ed. McGraw-Hill, New York. (See especially p. 511, Fig. 26-3 and p. 515, Fig. 26-6.) 9. Thaheld, F.H., 2003. BioSystems 71, 305. 10. Carlip, S., Salzman, P., 2006. gr-qc/0606120. 11. Zeilinger, A., 2005. Probing the limits of the quantum world. Physics World, March. 12. Everett, H., 1957. Rev. Mod. Phys. 29, 454. 13. Thaheld, F.H., 2006. physics/0607124. 14. Thaheld, F.H., 2006. physics/0608285.   C 75  A New Theory About Time   Jeff Tollaksen, Yakir Aharonov and Sandu Popescu <jtollaks@gmu.edu> (Dept of Physics & Dept of Computational Sciences, GMU, Fairfax, VA, USA)    We present a fundamentally new approach to time evolution within Quantum Theory. Several advantages of this new picture over the standard formulation of Quantum Theory are: 1) it can represent multi-time correlations which are similar to Einstein-Podolsky-Rosen/Bohm entanglement, but instead of being between two particles in space, they are correlations for a single particle between two different times; 2) dynamics and kinematics can be unified within the same language; 3) it introduces a new, more fundamental form of complementarity (namely between dynamics and kinematics); and 4) it suggests a new approach to time-transience or subjective becoming, one of the most fundamental aspects of conscious experience. The last item is significant given Einstein's reflection that becoming or the subjective now does not and cannot occur within physics. As a consequence, to date, physics does not incorporate time-transience, i.e. space-time does not evolve or have dynamics. As an analogy, in a geographic map nothing indicates that one mountain vanishes and another appears; they all co-exist. Similarly, the passage of time has no fundamental or dynamical importance; it is merely an illusion. The new approach to time evolution incorporates becoming by utilizing new Hilbert spaces introduced for each instant of time. (In contrast, traditionally one Hilbert space is used to represent the entire universe.) We then define a Super-Hamiltonian, which has as its ground state one entire history for the universe.
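[The abstract does not spell out the Super-Hamiltonian. As a hedged illustration of how a ground state can encode an entire history, one may borrow the well-known Feynman-Kitaev 'clock' construction (an analogy of the editor's choosing, not necessarily the authors' operator): with one Hilbert space per instant, labelled by \left|n\right\rangle, and unitaries U_n connecting successive instants, the operator

\mathcal{S} = \frac{1}{2}\sum_{n=0}^{N-1} \left( \left|n\right\rangle\left\langle n\right| + \left|n+1\right\rangle\left\langle n+1\right| \right) \otimes \mathbb{1} - \frac{1}{2}\sum_{n=0}^{N-1} \left( \left|n+1\right\rangle\left\langle n\right| \otimes U_n + \left|n\right\rangle\left\langle n+1\right| \otimes U_n^{\dagger} \right)

is positive semidefinite and has as a zero-energy ground state the history state \sum_{n=0}^{N} \left|n\right\rangle \otimes U_{n-1} \cdots U_0 \left|\psi_0\right\rangle, i.e. one entire dynamical history of the system.]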
Using another fundamental discovery we call internal and external reality, we associate the time of this Super-Hamiltonian with both awareness variables and processes related to wavefunction collapse. The evolution of awareness or consciousness is then associated with an adiabatic evolution of the Super-Hamiltonian. Because a single Now requires integration over all of the Super-Hamiltonian time, this new approach also illuminates the common phrase (e.g. by Bohm): now is the intersection of eternity and time.   C 76  Gravity minds? Parallels between the basic characters of consciousness and gravity.   Imre András Török, Gábor Vincze <torokia@freemail.hu> (Department of Psychology, University of Szeged, Szentes, Hungary)    Our discourse consists of two parts. First we draw an epistemological and phenomenological parallel between two seemingly remote and most encompassing phenomena of the world. With this our aim is to help people understand the mind more deeply. At present, neither gravity (the missing link in the Grand Unified Theory) nor conscious experience is explained at its origin. The extreme manifestations of gravity produce phenomena that correspond to the criteria of consciousness set out by Husserl. In the case of black holes we can observe, at the level of the phenomenon, a closedness analogous to that which is obvious in the case of the subject. That is, just as the subjective experience of the individual is not accessible at the level of experience, so the interior of a black hole is inaccessible among physical phenomena; only its effects can be shown. Beside the phenomenological similarity of the features of these two basic phenomena, the attempts to explain them are also similar in mainstream natural science. On the one hand, subjective experiences are considered to be consequences of other basic phenomena, while gravity itself seems to be an independent physical phenomenon. In the second part of the discourse we give, provocatively and tentatively, a contesting explanation of gravity and subjectivity: in the first case we make the origin of gravity derivable on a mathematical and physical basis (as the consequence of a complex phenomenon), and in the second case we give contesting explanations of the materialistic reduction of consciousness, relying on biological evidence. The biological foundation of the reasoning will support the claim that the phenomenon of ipseity cannot yet be reduced to a materialist level, though it can be placed within scientific psychology.   C 77  Quantum information theory and the human brain: The special role for human unconscious information processing  Maurits Van den Noort, Peggy Bosch; Kenneth Hugdahl <Maurits.Noort@psybp.uib.no> (Dept. of Biological and Medical Psychology, Division of Cognitive Neuroscience, University of Bergen, Bergen, Hordaland, Norway)    Concepts like entanglement, randomness, and complementarity have become the core principles of newly emerging quantum information technologies: quantum teleportation, quantum computation and quantum cryptography (Zeilinger, 2005). Although quantum computation promises to be a dominant form of information technology (e.g. Childress et al., 2006; Duan, Cirac, & Zoller, 2001), we do not yet know very much about the interaction between humans and quantum computers or about the relation between quantum mechanics and (higher) brain functions (e.g. Koch & Hepp, 2006; Van den Noort & Bosch, 2006).
In this presentation, behavioral studies and studies that focus on the peripheral and cortical levels will be discussed that suggest a special role for unconscious (emotional) information processing in human-computer interaction (Van den Noort, Hugdahl, & Bosch, 2005). The implications of these results for human interaction with both conventional and quantum computers will be discussed. References: Childress, L., Gurudev Dutt, M. V., Taylor, J. M., Zibrov, A. S., Jelezko, F., Wrachtrup, J., Hemmer, P. R., & Lukin, M. D. (2006). Coherent Dynamics of Coupled Electron and Nuclear Spin Qubits in Diamond. Science, 314, 281-285. Duan, L. M., Cirac, J. I., & Zoller, P. (2001). Geometric Manipulation of Trapped Ions for Quantum Computation. Science, 292, 1695-1697. Koch, C., & Hepp, K. (2006). Quantum mechanics in the brain. Nature, 440, 611. Van den Noort, M. W. M. L., Hugdahl, K., & Bosch, M. P. C. (2005). Human Machine Interaction: The Special Role for Human Unconscious Emotional Information Processing. Lecture Notes in Computer Science, 3784, 598-605. Van den Noort, M. W. M. L., & Bosch, M. P. C. (2006). Brain Cell Chatter. Scientific American Mind, 17(5), 4-5. Zeilinger, A. (2005). The message of the quantum. Nature, 438, 743.   C 78  Mental causation, common sense and quantum mechanics  Vadim Vasilyev <edm@rol.ru> (Philosophy, Moscow State University, Moscow, Russia)    Many authors who try to comprehend the nature of the connection of consciousness with quantum processes believe that the presence of consciousness in measurement procedures leads to the collapse of the wave function. In other words, they admit the causal efficacy of consciousness or qualia. It is quite obvious, however, that quantum events, taken as such, don't reveal the causal efficacy of consciousness, and some well-known interpretations of quantum mechanics have no need for any assumption as regards the role of consciousness in quantum phenomena. Hence the importance of the quest for independent arguments in favor of the reality of mental causation and the refutation of epiphenomenalism. In the recent past there have been many interesting attempts to destroy epiphenomenalism - Elitzur (1989), Hasker (1999), Kirk (2005), among others. Their arguments are very sophisticated but, as a rule, such arguments can be blocked with no less sophisticated counter-arguments. The simplest refutation of epiphenomenalism would be a contradiction between this doctrine and the intuitions of common sense. Most philosophers, however, believe this is not the case. Indeed, while common sense assures us that, for example, our desires, considered as qualia, have an influence on our behavior, in fact it only assures us of a kind of correlation between desires and behavior, a correlation that might be an epiphenomenon of some basic neuronal processes. Nevertheless - and this is my main point - it is possible to show that common sense convictions presuppose the causal efficacy of consciousness after all. That's because without such an assumption I simply couldn't believe that other people have conscious states. I believe they have these states or qualia like mine because of their physical and behavioral similarity to myself. My conclusion is based on simplicity considerations. But if I consider conscious states as epiphenomena, a world in which only I am conscious (perhaps due to some peculiar property of my brain) is much simpler than a world where others are encumbered with qualia as well.
Indeed, in the first world there is no multiplying of entities which are truly unnecessary and useless for the explanation of the reality given in my experience (Jackson (1982), Chalmers (1996) and Robinson (2007) missed this point). Thus, if I assume that consciousness is epiphenomenal, I would hardly believe other people have consciousness at all. But common sense dictates that I believe they have conscious minds. Hence, my common sense comprises an implicit denial of the epiphenomenality of conscious states. So we see that in some cases our common sense may even favor quantum mechanics or, to be more exact, may support one of its most radical interpretations. References: Chalmers, D. 1996. The Conscious Mind. New York: Oxford University Press. Elitzur, A. 1989. Consciousness and the incompleteness of the physical explanation of behavior. Journal of Mind and Behavior 10: 1-20. Hasker, W. 1999. The Emergent Self. Ithaca, NY: Cornell University Press. Jackson, F. 1982. Epiphenomenal qualia. Philosophical Quarterly 32: 127-136. Kirk, R. 2005. Zombies and Consciousness. New York: Oxford University Press. Robinson, W. 2007. Epiphenomenalism. Entry in the Stanford Encyclopedia of Philosophy.  C 79  Spinoza, Leibniz and Quantum Cosmology  Laura Weed <weedl@strose.edu> (Philosophy, The College of St. Rose, Albany, NY)     During the Scientific Revolution, the mechanism of Isaac Newton and Rene Descartes triumphed over the more complex epistemological and metaphysical systems of Baruch Spinoza and G.W. Leibniz because the Spinozistic and Leibnizian systems seemed to speculate about unnecessary entities and forces, violating Ockham's simplicity rule for scientific theories. In light of contemporary quantum mechanics, however, it may now be time to revisit some of the metaphysical and epistemological proposals of these two authors. I will propose three general metaphysical and epistemological positions espoused by one or both of these authors that may appear less speculative and extraneous to present-day scientists than they did to their counterparts of the past. The general positions are: 1) that parts and wholes interrelate, forming an organic cosmos rather than a congeries of compounded components; 2) that the totality of what exists exceeds human faculties and methodologies for acquiring knowledge; and 3) that the relationships among the varieties of temporal scales in the universe preclude a meaningful conception of universal mechanical causation. First, Leibniz, Spinoza and quantum mechanics agree that the world is not a computational result of adding parts. Rather, the cosmos is an organic system in which parts and wholes are mutually determining of one another. The paper will explore ways in which Leibnizian monads, Spinozistic modes and the electrons in the Bell experiment reflect a holistic and inter-relational cosmos, rather than a compositional world. Second, while Newton and Descartes were both optimistic about the capacity of human knowledge to comprehend all there is, and to ultimately result in a grand unification of science, Spinoza and Leibniz both proposed perspectival and methodological limits on the human potential for knowledge. These limits are reflected, I shall argue, in the role of the observer in quantum theory, and in the Everett many-worlds hypothesis. Third, the concept of global mechanical causation proposed by Newton and Descartes presupposes a uniform global space-time across which these causes might unfold.
Both Spinoza and Leibniz understood time as a multi-layered phenomenon, distinguishing among multiple local, regional and eternal conceptions of time. I will suggest that their paradigms might be more useful for interpreting Feynman's photon and electron graphs metaphysically. Clearly, much of what Spinoza and Leibniz wrote is simply out of date and insufficiently prescient to be of any help with contemporary quantum understandings of reality. But I would like to propose that at least the three ideas articulated in this paper would be helpful in constructing a metaphysics and epistemology for the weirdness of the quantum world. Popular scientific conceptions of knowledge and reality have been wedded to Newtonian mechanistic materialism in ways that have become unhelpful for science. This new, although recycled, direction might be more productive.   C 80  Towards a Quantum Paradigm: An Integrated View of Matter and Mind  George Weissmann <georgeweis@aol.com> (Berkeley, CA)     A fundamental paradigm is the set of conditioned structuring tendencies that shape our experience existentially, conceptually and perceptually. It is based on a set of embodied assumptions or presuppositions. We call the specific fundamental paradigm which grounds our culture's common sense and scientific views, and which structures our existential reality, the Classical Paradigm (CP). A critical examination and analysis of relativistic and quantum phenomena reveals that the assumptions which define the CP break down in large parts of the total phenomenal domain. Remarkably, a century after the relativity and quantum revolutions, we have not yet succeeded in developing a new fundamental paradigm, a Quantum Paradigm, that could naturally ground relativity and quantum physics ontologically. The mainstream Copenhagen Interpretation of QT is instrumentalist and yields the procedures we so successfully use to calculate the probabilities of the various possible outcomes of an experiment, given its preparation. But it does not provide an account of what is actually occurring in an experiment. In fact, when one tries to interpret it ontologically, it suffers from inner inconsistencies (the measurement problem). The Copenhagen interpretation suggests that the topic of QT is not the world itself, but our knowledge of the world, the structure of experience. Various alternative interpretations have been proposed over the years in an attempt to remedy QT's lack of an ontology. Most of them remained attached to core CP assumptions, including objective realism, which imply banishing consideration of consciousness. Some of these attempts were shown to be incompatible with the predictions and the structure of QT itself, while others survived but suffer from significant shortcomings. As a result, we are still navigating science, our own lives and society on the basis of a fundamentally flawed world view. Our claim is: we cannot ground quantum theory in the CP. In particular, we can no longer banish experience/consciousness from the picture and still hope to understand what QT is telling us about the nature of the world. We report on some promising progress towards the development of a Quantum Paradigm which provides an ontology for QT and inextricably integrates matter and mind.
Henry Stapp, building on foundations offered by Whitehead and Heisenberg, has proposed an ontological model which builds on the Copenhagen interpretation and describes an unfolding world process, consisting of events that are, in human terms, moments of our experience. The probabilistic dynamics (tendencies) of this process are described by quantum theory. We propose integrating into this framework the relational postulate of Carlo Rovelli, which states that there are no facts or occurrences in an absolute sense; these are always relative to a measuring or perceiving system. We further take into account insights gained by consideration of experimentally observed anomalies which suggest that quantum events are not fundamentally random but more like "decisions". Proceeding thus, we arrive at a rudimentary and preliminary but heuristically useful version of a QP which could ground QT as well as human experience, including its observed "anomalies", and which encounters no "hard problem of consciousness".   C 81  A Model of Human Consciousness (Global Cultural Evolution)  Marcus Abundis <marcus@cruzio.com> (unaffiliated, Santa Cruz, CA)    Evolutionary efficaciousness is measured by how well a given species adapts itself to its environment. In applying this premise to humanity, a model of global human cultural evolution is hypothesized. This exploration of Human Creativity focuses on: the emergence of humanity's direct conscious sense (personal ego), the field of reasoning from which this conscious sense arises (imagination), the field of reasoning that follows (knowledge), and the system in which all is bound together (evolution). All else is derivative - a litany of subsequent emergent events (worship, war, work) endlessly folding back upon themselves, revealed as "civilization." This study begins with the organism that originally births humanity, Earth. Earth's geologic record shows at least five episodes of mass extinction followed by recovery. From these episodic cycles of Earthly death and rebirth, five evolutionary dynamics are named. The millennia-long interplay of these five dynamics brings greater diversity and complexity of life, until we arrive at the species of our epoch, including humankind with its challenges of consciousness. Earth's overarching evolutionary dynamics set the stage upon which human consciousness awakens. These dynamics organically stress (test) all organisms for viability, and trigger within humanity's adaptive psychology an "adverse relationship" with the environment. A central focus of evolutionary fitness (rivalry with Nature's adversity) mars humanity's psyche with a sacred wound, as it appears "Mother wants to kill us?!" This sense of adversity provides an evolutionary catalyst (bootstraps consciousness) and draws us to move expansively from discomfort to comfort. We are thus physically and psychologically charged to create adaptive responses, cultivating our "experience of consciousness." The sacred wound presents a paradox central to humanity's continued expansion of consciousness. It lives in all intellectual and spiritual questions of unity vs. diversity (Earth-Mother vs. humanity) as the mythologizing of Natural adversity. Resolution of paradox begins in primal innocence at The Great Leap Forward (a state of unconscious unity) and evolves towards fully-manifest awareness (god-self, unity consciousness), prompting many states of consciousness along the way.
But it is adversity that awakens humanity's unique creative spirit-dynamo to birth successive states of consciousness as a principal adaptive response. Our struggle with paradox fluoresces human consciousness towards diversity and complexity, following Earth's own metabolic trend. Humanity's mirroring of Earth's evolutionary tendency (diversity and complexity) suggests functional means for human expressiveness. This expressiveness is mapped to Earth's five evolutionary dynamics, using five gender-paired archetypes. Our mirroring of Earth's evolutionary dynamics via these five archetypes (bio-culturalism) propels human consciousness across time. Humanity's bio-culturalism is amplified in these gender-paired archetypes and the mythic devices they enable. At a first level, "high/middle/low dreaming" archetypes reflect the hopes of humanity (creativity) set against Nature's adversity, also seen in humanity's triune psyche (id, ego, superego) and other important triads. Deepening interoperation of this triune psyche completes two more of the five archetypes to create actualized archetypes. Actualized archetypes latently emerge as diverse but interdependent "realities" for individuals, communities, social enterprises, nation-states, etc. (civilization).   P 82  Quantum spaces of human thinking  Valentin Ageyev <ageyev@mail.kz> (psychology, Kazakh National University, Almaty, Almaty, Kazakhstan)    Thinking is the ability to transform objective relations of nature into the purposes of human actions. Objective relations are quantized relations and are divided into four types: random, regular, system and genesis relations. Human thinking has a quantized character too, as it is determined by quantized objective relations. Random relations are displayed by the magic (sensual) type of thinking. Regular relations are displayed by the mythological (intuitive) type of thinking. System relations are displayed by the rational (logic) type of thinking. Relations of genesis are displayed by the creative (historical) type of thinking. Magic (sensual) thinking is the way of transformation of objective random relations into the sensory purposes of spontaneous actions. The man operating in the spontaneous way recreates the probable space of nature. Spontaneous action is determined by the sensory purpose, which is a product of magic (sensual) thinking. Mythological (intuitive) thinking is the way of transformation of objective regular relations into the perception purposes of regular actions. The man operating in the regular way recreates the ordered space of nature. Ordering action is determined by the perception purpose, which is a product of mythological (intuitive) thinking. Rational (logic) thinking is the way of transformation of objective system relations into the symbolical purposes of system actions. The man operating in the system way recreates the holistic type of nature. System action is determined by the symbolical purpose, which is a product of rational (logic) thinking. Historical (creative) thinking is the way of transformation of objective relations of genesis into the sign purposes of creative actions. The man operating in the creative way recreates the historical space of nature's development. Creative action is determined by the sign purpose, which is a product of historical (creative) thinking. Magic (sensual) thinking is the way of "cutting" in nature its first quantum space - the probable. Products of magic (sensual) thinking are the "states" ("probability") which present themselves as the purposes of spontaneous actions.
As the result of spontaneous actions their purposes turn into the "magic" (sensual) knowledge expressing the random character of nature. Mythological (intuitive) thinking is the way of "cutting" in nature its second quantum space - the regular. Products of mythological (intuitive) thinking are "object structures" ("orders") which present themselves as the purposes of regular actions. As the result of regular actions their purposes turn into the "mythological" (intuitive) knowledge expressing the ordered character of nature. Rational (logic) thinking is the way of "cutting" in nature its third quantum space - the holistic. Products of rational (logic) thinking are "object forms" ("formal logic"), presenting themselves as the purposes of system actions. As the result of system actions their purposes turn into the rational (conscious) knowledge expressing the holistic character of nature. Historical (creative) thinking is the way of "cutting" in nature its fourth quantum space - the historical. Products of historical (creative) thinking are "genesis forms" ("genesis logic"), presenting themselves as the purposes of creative actions. As the result of creative actions their purposes turn into the historical (sensible) knowledge expressing the historical character of nature.   P 83  Concurrency, Quantum and Consciousness  Francisco Assis <fmarassis@gmail.com> (Electrical Engineering, Universidade Federal de Campina Grande, Brasil, Campina Grande, Brasil)    In this paper we review work on the theory of consciousness due to three authors: Tononi, Sun and Petri. In [1] Tononi proposes that the consciousness level of a system can be measured by its capacity to integrate information, and that the quality of consciousness is given basically by the topology of the system. The "system" in Tononi's theory is modeled by a graph G = (V, A, P), where V = \{1, 2, \ldots, n\} is the set of vertices, A \subseteq V \times V is the set of edges, and P is a probability distribution on the vertices V. In Tononi's approach, A stands for causal relations between connected vertices, i.e. an edge means the existence of a causal relation between its vertices. Following this setup, the "amount of consciousness" of the system is associated with the minimum information bipartition (a toy version is sketched below). The first contribution of this paper is repositioning the measure proposed by Tononi in the framework of concurrency theory due to Petri [2]. One very remarkable feature of Petri's theory is its physical motivation: it sought to determine fundamental concepts of causality, concurrency, etc. in a language-independent fashion. Also, for insiders it is easy to see that the concepts of lines, cuts and process unfolding of a marked net correspond respectively to the physical concepts of time-like causal flow, space-like regions and solution trajectories of a differential equation. For example, in the paradigm of concurrency theory and its developments, e.g. Savari [3], the graph proposed by Tononi is a noncommutation graph. The new point of view we develop is consistent with Sun's [4] application of the idea that the success of physical theories rests on a hierarchy of descriptions similar to the modular hierarchy found in computer and electronic systems. For example, it is well known that unconscious processes cannot generate a complex verbal report while conscious activation can do so. Access consciousness and phenomenal consciousness are taken into consideration and related to more detailed levels of perception and memory.
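[Editorial aside: the minimum information bipartition is easy to illustrate in code. The sketch below is a toy version, in which plain mutual information between the two halves stands in for Tononi's effective information (which properly requires causal perturbation of the network); the function names and the sample data are invented for illustration.]

```python
# Toy minimum-information bipartition in the spirit of Tononi [1]:
# find the split of the node set across which the two parts share
# the least information, estimated from joint samples.
import itertools
import math
import random
from collections import Counter

def mutual_information(samples, part_a, part_b):
    """Estimate I(A;B) in bits from joint samples of binary node states."""
    n = len(samples)
    pa = Counter(tuple(s[i] for i in part_a) for s in samples)
    pb = Counter(tuple(s[i] for i in part_b) for s in samples)
    pab = Counter((tuple(s[i] for i in part_a),
                   tuple(s[i] for i in part_b)) for s in samples)
    return sum((c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

def min_information_bipartition(samples, nodes):
    """Return (MI, part_a, part_b) for the least-integrated bipartition."""
    best = None
    for r in range(1, len(nodes) // 2 + 1):
        for part_a in itertools.combinations(nodes, r):
            part_b = tuple(i for i in nodes if i not in part_a)
            mi = mutual_information(samples, part_a, part_b)
            if best is None or mi < best[0]:
                best = (mi, part_a, part_b)
    return best

# Toy data: node 2 copies node 0; nodes 1 and 3 are independent noise.
random.seed(1)
samples = [(x, random.randint(0, 1), x, random.randint(0, 1))
           for x in (random.randint(0, 1) for _ in range(2000))]
print(min_information_bipartition(samples, nodes=(0, 1, 2, 3)))
# The minimum cut isolates an independent node (MI close to 0), while
# any cut separating the correlated nodes 0 and 2 carries about 1 bit.
```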
However, Sun is clearly interested in constructing computational machinery able to behave like a conscious being. At this point, we change gear to treat more fundamental ontological aspects of conscious experience itself and its relationship with quantum physics. The main remark is that concurrency theory, given ontological status, can offer a consistent starting point for a theory of consciousness. [1] Giulio Tononi, "An Information Integration Theory of Consciousness", BMC Neuroscience, 5:42:1-22, 2004. [2] Carl Adam Petri, "Concurrency Theory", in Lecture Notes in Computer Science, pages 2-4, 1987. [3] S. A. Savari, "Compression of Words Over a Partially Commutative Alphabet", IEEE Trans. on Information Theory, 50(7):1425-1441, July 2005. [4] L. Andrew Coward and Ron Sun, "Criteria for an Effective Theory of Consciousness and Some Preliminary Attempts", Consciousness and Cognition, 13:268-301, 2004.  P 84  Consciously 'chosen' Quantum Design   Gerard Blommestijn <gblomm@gmail.com> (Amstelveen, Netherlands)     This presentation is based on the view that the self as 'I' experiences the outcome of the quantum mechanical (QM) reduction process related to the ultimate step of perception in the brain, and this is the subjective perception. In the same way the self chooses the outcome of a QM reduction process that forms the initial step of a motor activity in the brain, and this is the subjective choice. This thesis proposes that these QM reduction processes connect consciousness to perception and choice not only in humans, but also in all other life-forms (with or without brains) and even in the most primordial (bio)chemical compounds leading to the evolution of life. Compared to the standard scientific way of understanding nature, an essence of consciousness is added, this being the totally subjective, experiencing and choosing 'I'. So, the 'subjectiveness' of a molecule 'chooses' the outcomes of reduction processes that determine the actions of this molecule (all according to the quantum mechanical probabilities). For instance, at the start of the evolution of life, a molecule 'chooses' outcomes that move it towards being an essential part of the beginning of the first 'proto-cell'. Here the same principle may be at work as we see when light passes through a succession of many slightly tilted polarizing filters: repeated quantum measurements of the polarization of the photons 'guide' it in a more and more tilted direction. In the same way the continuous conscious perception and 'choice' of biomolecules may quantum mechanically 'guide' (beginning) living systems through their 'design' steps. This principle of consciously 'chosen' Quantum Design will be explained, as well as its application to the processes shaping life and evolution, largely according to the ideas of Johnjoe McFadden documented in the book 'Quantum Evolution' (Flamingo, 2000).   P 85  Two Gedankens, One Answer; Cloudy Weather on the Mind/Body Front  Michael Cloud, Sisir Roy, Jim Olds <mcloud1@gmu.edu> (Krasnow Institute, George Mason University, Centreville, Virginia)     We consider approaches whose purpose is to investigate the relationship between consciousness/mind and matter/brain hardware in the context of testable theories. If consciousness is to be resolved as strictly arising from matter in a testable manner, it would follow that one of two strategies should be pursued: importing objective data into consciousness, or exporting subjective conscious experience out to the objective world.
We therefore investigate two gedankenexperiments. One involves feeding objective brain state information (e.g. MRI-like data) to the subject of that data in real time, and subsequently asking the same subject to make experimental observations of that data. The second experiment is to consider the issues arising from a calculation (or testable Prediction Engine) attempting to predict its own future behavior. We suggest that both questions involve significant practical difficulties, and raise the question of whether they can be completed in the general case. We conclude with the question of whether, under very basic requirements on hardware, the issue of subjective vs. objective can be testably resolved.  P 86  Reassessing the Relationship between Time and Consciousness  Erik Douglas <erik@temporality.org> (Philosophy (Science, Physics, Time...), Independent Scholar, Portland, OR)    I begin with a review of the key empirical results and ideas forwarded concerning the relationship between time and consciousness over the past twelve years. Time is, of course, a fundamental variable and background notion in most theories, and this is no less the case with explanations of the origin of consciousness. However, our understanding of time is itself heavily dependent on our interpretation of mind and human experience, and herein we find the kind of circular semantic relationship between key notions that suggests itself as a potentially fruitful approach to disclosing elements of the Hard Problem of consciousness to genuine scientific investigation. Following an overview of the general problem space as it stands at present, I will turn to my own research into making one very important facet of time, perhaps its essential feature, explicable: the so-called passage of time. Making temporal transience explicit means finding a way to articulate its properties so as to avail them to scientific and physical inquiry. I undertake this through the construction of models which distinguish the qualities ascribed to time in its many applications and contexts, with special attention given to two classes of temporal models: Rhealogical and Chronological. I will use Smythies' (2003) JCS article as a point of departure, but significant parts of this talk will draw from my recently published work (cf. Douglas, 2006) and will incorporate material from a forthcoming article to be submitted to the JCS. As a philosopher, my intent is less to answer ill-conceived questions than to re-pose them in the first place so that they may be properly subject to empirical study. As such, it is my hope to engender a new direction to pursue in how we engage the study of consciousness.   P 87  The Affect is all at once cognition, motivation and behaviour  Veronique Elefant-Yanni, Maria-Pia Victoria Feser, Susanne Kaiser <veronique.elefant-yanni@pse.unige.ch> (Affective sciences, University of Geneva, Geneva, Geneva, Switzerland)    We commonly perceive semantic terms which characterize the affect a person feels on a bipolar continuum, going from merry to sad for example. However, in the affective sciences there is a persistent controversy about the number, the nature and the definition of the dimensions of affect structure. We consider the affect to be the momentary feeling a person has at any time, induced by the situation as a whole, including internal and external stimuli.
Responding to the methodological criticisms addressed to the preceding studies, we reconciled the principal theories regarding affect structure within the same experimental setting. In particular, using semantic items from all around the circumplex we found three bipolar independent dimensions, and using only the PANAS semantic items we found two unipolar dimensions. Finally, we propose a heuristic theorization of affect based on a current firmly established in the social sciences, coherent from semantics to sociology but largely ignored by researchers in the affective sciences, that allows us to postulate that affect is all at once cognition, motivation and behaviour. The affect is an ever-present unconscious process monitoring our environment, but it is also, as a summation, the first conscious source of knowledge that disposes us, mind and body, to respond to the situation. As the affect aggregates and sums the many pieces of information about our situation in no time, we should consider its relation to the quantum consciousness hypothesis.  P 88  Imagine consciousness as a single internal analog language formed of ordered water forged during respiration in concert with experience.  Ralph Frost <refrost@isp.com> (Model Development, Frost Low Energy Physics, Brookston, IN)    Common sense tells us that all of the abstract math symbols and expressions are secondary, and thus arise from some primary, internal "analog math". That is, the abstract stuff is wildly secondary and only the analog-energetic stuff is primary. Cutting our layered cake in this new manner lets us focus on the stuff that's not in the streetlight's intense glare. Pawing around out beyond the paradigmatic shadows, fumbling through the debris, searching for the right analog math then becomes some sort of quest for a new imagery that's somehow related to our baseline energetics. Keeping things simple, that means that we're looking first at the respiration reaction: organics + oxygen -> carbon dioxide + water + new parts + some energy flow. This reaction, recycling carbon back from the flip-side of photosynthesis, powers the down-gradient neurology and everything else. Thus, that entire nervous segment must also be sort of secondary, or just more involved in output/communications functions. Plus, this view says that wherever there is high oxygen consumption there ought to be a high, stoichiometric formation and flow of newly formed/forming water molecules -- a.k.a. a highly rational, wildly repeatable internal analog math process, influenced by the "vibrations" passing through each site where the reaction is taking place. Since a water molecule generally is a tetrahedral-shaped unit with two plus and two minus vertices, within any enfolding field there are at least six ways each molecule can form or emerge. Considering n units forming in a sequence, this leads directly to a highly rational 6^n internal analog math. Setting n=12, 6^12 gives us 2,176,782,336 different ways to scribble these 12 units together; n=8, or n=13, or n=16, gives us different sorts and sets of associative/logical patternings -- more variations on the same theme.
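[The counting claim is easy to verify; an illustrative two-line check:]

```python
# Check the abstract's counting claim: a tetrahedral water unit with six
# possible orientations gives 6**n distinguishable sequences of n units.
for n in (8, 12, 13, 16):
    print(f"n = {n:2d}: 6^{n} = {6**n:,}")
# n = 12 reproduces the 2,176,782,336 quoted in the abstract.
```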
Allowing that the repeating patterns of vibrations in the surroundings play THE big role in determining which patterns keep repeating in the sequences of water molecules that keep emerging, we arrive rather quickly at a moderately logical feel for the common internal analog math "language" that runs in the unconscious, subconscious and conscious regions, plus the senses, memory storage (short-term, and, when water patterns are bound with organics, longer-term), plus imagination-creativity, "feelings and impressions", and provides one way to hook fight-flight impulse-momentum directly to motility. That is, we get a quick and dirty introductory view of our common "wave mechanics". Is this THE internal analog math? You tell me. Put it to the experimental test. Stop breathing and find out what happens to your consciousness.   P 89  The sum over history interpretation of neural signals applied to orientation sensitive cortical maps.   Roman Fuchs, Gustav Bernroider <Roman.Fuchs@sbg.ac.at> (Organismic Biology, Neurosignaling Unit, Salzburg, Austria)    Higher-level brain functions correlate with the spatio-temporal signal dynamics behind ensembles of nerve cells. The overall situation can be figured as a mapping of the history of membrane currents to the absence or presence of a nerve impulse at a given time and location. This general frame includes all possible signal amplitudes, including the quantum scale, that causally precede the stimulus-sensitive activity of the engaged nerve cells. Neural activities, on this view, can be considered as complex projection amplitudes that do not have to follow a single unique path, but can comprise a large set of alternatives in coherent superposition. The physics behind this concept goes back to the sum over histories interpretation, originally proposed in the diagrammatic perturbation theory of R. Feynman. In a previous paper we applied Feynman's perturbation theory to phase-dependent coding mechanisms in the brain (Bernroider et al. 1996). Here we demonstrate its applicability in the analysis of layer 2 iso-orientation sensitive cortical activity maps (*). The theoretical background and, in particular, the relation to studies of neural correlates of consciousness (NCC) will be given in a separate paper (Roy and Bernroider, this issue). Bernroider G., F. Ritt and E.W.N. Bernroider (1996), Forma, 11, 141-159. Roy S. and Bernroider G., this issue. (*) Images of cortical activity maps were generously supplied by T. Bonhoeffer, MPI Munich.   P 90  Consciousness as a black hole: perceptory cell and dissociated quantum   Johann Ge Moll <Johanngmoll@gmail.com> (Department of Psychiatry, Hospital Karlucovo, Medical Academy Sofia, Bulgaria, Sofia, Bulgaria)    1) Unlike the traditional opinion, Consciousness does not participate in the Reduction of the Wave Function, but is responsible for the reverse procedure of the "Restoration of the Wave Function", retransforming the Perception Function back into the Wave Function. 1.1) Similar to a Black Hole, the Consciousness swallows matter and energy, and radiates back Information. 2) The Consciousness is an ontological mechanism for de-materialization and de-temporalization: Consciousness dematerializes the body. Here it plays the role of a cosmological machine for re-transformation of the Macroscopic Present into Quantum Future. This re-transformation of the Present into the Future occurs as a transformation of the Present into Memory.
3) The transformation of the Actualistic Energy into Possibilistic Information occurs as a transformation of the Forgetting Fantasizing Energy into Remembering Form, or, briefly, a transformation of Time-Oblivion into Memory. 4) The transformation of the Macroscopic Present into Quantum Future has the following consequence: transformation of the Actualistic Universe into a Possibilistic Universe. 5) The transformation of Actualistic ontology into Possibilistic ontology is equal to the Transformation of Asymmetry into Symmetry. 6) Symmetry is a logical equivalent of Objective Memory = Omni-Order = Omni-Arrangement = Chaos = Pseudo-Entropy = Quantum Future = Kingdom of Possibility = Objective Knowledge = Information. 7) As a Black Hole, Consciousness curves time in a perpendicular direction and forms Perpendicular Simultaneous Instantaneous Time. 8) By gathering together all Past, Present and Future, Consciousness performs a Contraction of Time. 9) As "Time Contraction," Consciousness verticalizes the Epochs. 2. We described the human organism as a system of two contrary, simultaneous ontological movements: the movement of "Materialization" and the movement of "De-materialization." The act of transformation of Possibilistic Objective Knowledge into Actualistic Subjective Matter, which takes place as a transformation of the Possibilistic Quantum Future into the Actualistic Macroscopic Present (insofar as the Possibilistic Quantum Future is the kingdom of Knowledge and the Actualistic Macroscopic Present is the kingdom of Matter), is responsible for the movement of "Materialization." That transformation of Quantum Future into Macro-present occurs as the notorious act of reduction of the Wave Function. It is precisely that reduction of the Wave Function which transforms the Wave Functions of Information into the Perception Functions of matter and the body, and these Perception Functions, in turn, build the Perception organs and the personal perception cell structures and organs of the body. The Force and the Impulse standing behind the above-mentioned movement of transformation of the Possibilistic future into the Actualistic present, and performing the act of reduction of the Wave Function (and actually streaming from the Spirit - Matter), is the World Asymmetric Anti-gravity Force, which is realized subjectively as an act of Fantasy and the analytically working Consecutive Temporal Intellect, and is objectively presented as an act of "Objective Chance-Fantasy." The reverse process of Re-transformation of Actualistic Subjective Matter into Possibilistic Objective Knowledge is responsible for the reverse movement of de-materialization, which occurs as a reverse transformation of the Actualistic Macroscopic Present into the Possibilistic Quantum Future. This reverse re-transformation of the macroscopic Present into Quantum Future occurs as an act of "Restoration of the Wave Function." The Restoration of the Wave Function is realized as the re-transformation of the Perception function of matter and body back into a Wave Function of Information. Consciousness is the organ which performs this reverse process of "Dematerialization" of the body and Matter.   P 91  The enhanced perceptual state  Catarina Geoghan <cgeoghan@ntlworld.com> (Brighton, England)    In the early stages of psychosis, the prepsychotic phase, and also during meditation, individuals frequently experience enhanced perceptual sensitivity, whereby sights and sounds appear brighter and louder than usual.
It will be argued that this is due to increased facilitation of a coherent reference frequency. This is based on a holographic model of perception according to which increased coherence results in an increased response to perceptual stimuli.   P 92  Reveals the core secret of mind and its mechanism   Sanjay Ghosh, Papia Ghosh <yogainstruments@yahoo.co.in> (NA, Spectrum Consultants, Howrah, West Bengal, India)     Our world needs a singular answer which can satisfy entirely the quest about mind and its mechanism. Now, the question is, can we expect to get such an answer by following the conventional process of observation? Certainly not. Then what should we do? We need to follow a completely new method of observation. What would be the necessary features of such an observation technique? It must be a process based on a new nature of instruments, and the act of observation will be threefold in nature: a) first, we have to learn the art of extracting energy or apparent consciousness from all sorts of instruments; b) second, we have to enter into the network of our dormant nervous system, the other name of which is the finer part of mind; c) finally, we need to know the technique of contemplation on natural objects, like huge celestial and various earthly bodies. The accumulated power and the quantum of consciousness to be earned by the said succession will boost one to enter into the causal start of manifestation, and so of mind. There the number of active elements to be seen is reduced to one, and that itself will pronounce the answer to 'what mind is'! By that time, the mechanism of the working of mind will be fully known, because one will have crossed the entire track, starting from super-gross artificial instruments to bio-physiological instruments and lastly the natural instruments. In fact, our urge towards manifesting ourselves in the name of nature creates tremendous resistance within ourselves, and therefore we become complex or opaque in nature. So, on the other side, if by the adoption of some method we are able to reduce our resistance, we will start becoming simple, and so almost transparent. The said transparency is actually the universal nervous body with an unlimited quantum of power. The whole purpose of the human being is to realize that condition by uniting with real consciousness. Our new package consisting of 236 instruments will lead you to attain such a condition in the quickest possible time. At Quantum Mind 2007, we propose to give a live demonstration of a set of 3 instruments for immediate understanding. These instruments are: 1) Near Vision Instrument: this will unveil the secret of conversion from a transparent to an opaque object and vice versa without using any chemical reagent or applied electricity. 2) Net Metallic Lens Instrument: how largely the metallic ingredients of our body affect our vision and create tremendous illusion is to be seen physically with this instrument. 3) Eye Electricity Instrument: how the most sensitive as well as vital organ, the eye, produces a variety of powers of unknown nature, one will be able to experience with this instrument. Finally, this paper is, in actual terms, a live demonstration of the mechanism of our Mental Syndrome.   P 93  A soul mind body medicine - a complete soul healing system using the power of soul   Peter Hudoba, Zhi Gang Sha, MD (China) <sharesearchfoundation@yahoo.ca> (Sha Research Foundation, Burnaby, British Columbia, Canada)    In recent decades, there has been an upsurge of new concepts of treatment.
Words like "integrative," "complementary," "alternative" and "holistic" now permeate not only the healthcare field, but also everyday discussion. Various forms of mind-body medicine have become more and more popular, to the point of being widely accepted. These modalities emphasize the mind-body connection, which encompasses the effect of our psychological and emotional states on our physical well-being, and the power of conscious intent, relaxation, belief, expectation and emotions to affect health. The authors of this paper discuss Soul Mind Body Medicine as an adjunct healing modality to conventional standard medical treatment. Mind over matter is powerful, but it is not enough; soul over matter is the ultimate power. The healing power of the mind and soul can be used in conjunction with any and all other treatment modalities. Dr. Hudoba and Dr. Sha present techniques utilizing mind and soul power with special body postures that are very simple, powerful and effective. Positive results can be achieved relatively quickly. These simple healing practices can be easily taught to patients to support and enhance their healing process. The authors support their presentation with examples of their clinical research using the power of mind and soul in the healing of cancer and in the development of the human being.   P 94  Unified Theory of Bivacuum, the Matter, Fields & Time. New Fundamental Bivacuum-Mediated Interaction and Paranormal Phenomena.  Alex Kaivarainen <H2o@karelia.ru> (Dept. of Physics, University of Turku, Turku, Finland)     A coherent physical theory of Psi phenomena, like remote vision, telepathy, telekinesis, remote healing and clairvoyance, has been absent till now due to their high complexity and multilateral character. A mechanism of Bivacuum-mediated Psi phenomena is proposed in this work. It is based on a number of stages of long-term effort, including the creation of several new theories: 1) a Unified theory of Bivacuum, rest mass and charge origination, the fusion of elementary particles (electrons, protons, neutrons, photons, etc.) from a certain number of sub-elementary fermions, and the dynamic mechanism of their corpuscle-wave [C-W] duality (http://arxiv.org/abs/physics/0207027); 2) a Quantitative Hierarchic theory of liquids and solids, verified on the examples of water and ice by a special, theory-based computer program (http://arxiv.org/abs/physics/0102086); 3) a Hierarchic model of consciousness: from mesoscopic Bose condensation (mBC) to synaptic reorganization, including the distant and nonlocal interaction between water clusters in microtubules (http://arxiv.org/abs/physics/0003045); 4) a theory of the primary Virtual Replica (VR) of any object and its multiplication. The Virtual Replica of an object, multiplying in space and evolving in time, VRM(r,t), can be subdivided into a surface VR and a volume VR. It represents a three-dimensional (3D) superposition of Bivacuum standing virtual pressure waves (VPWm) and virtual spin waves (VirSWm), modulated by the [C-W] pulsation of elementary particles and the translational and librational de Broglie waves of the molecules of the macroscopic object (http://arxiv.org/abs/physics/0207027). The infinite multiplication of the primary VR in space in the form of 3D packets of virtual standing waves, VRM(r), is a result of the interference of all-pervading external coherent basic reference waves - Bivacuum Virtual Pressure Waves (VPW+/-) and Virtual Spin Waves (VirSW) - with similar waves forming the primary VR. This phenomenon may account for the remote vision of psychics.
The ability of a sufficiently complex system of VRM(r,t) to self-organize under nonequilibrium conditions makes possible the multiplication of VR not only in space but also in time, in both time directions - positive (evolution) and negative (devolution). The feedback reaction between the most probable/stable VRM(t) and the nervous system of a psychic, including the visual centers of the brain, can be responsible for clairvoyance; 5) a theory of nonlocal Virtual Guides (VirG) of spin, momentum and energy, representing virtual microtubules with the properties of a quasi-one-dimensional virtual Bose condensate, constructed from 'head-to-tail' polymerized Bivacuum bosons (BVB) or Cooper pairs of Bivacuum fermions (BVF+BVF) with opposite spins. The bundles of VirG connecting coherent nuclei of atoms of a Sender (S) and Receiver (R) in a state of mesoscopic Bose condensation, as well as the nonlocal component of VRM(r,t), determined by the interference pattern of Virtual Spin Waves (VirSW), are responsible for nonlocal interactions like telekinesis, telepathy and remote healing; 6) a theory of Bivacuum-Mediated Interaction (BMI) as a new fundamental interaction due to the superposition of the Virtual Replicas of Sender and Receiver, via the VRM(r,t) mechanism, and the connection of remote coherent nucleons with opposite spins via VirG bundles. For example, VirG may connect the nucleons of water molecules composing coherent clusters in remote microtubules of the same or different 'tuned' organisms. It is precisely BMI that is responsible for macroscopic nonlocal interaction and different psi phenomena. The system [S + R] should be in a nonequilibrium state for interaction. The correctness of our approach follows from its ability to explain a lot of unconventional experimental data, like Kozyrev's, remote genetic transmutation, remote vision, mind-matter interaction, etc., without contradiction of the fundamental laws of nature. For details see: http://arxiv.org/abs/physics/0103031.  P 95  Sequences of combinations of energy levels that describe instances of self and invoke a current instance of self  Iwama Kenzo <iwama@whatisthis.co.jp> (z_a corp., Hirakata, Osaka, Japan)    This paper summarizes a robotic program and puts forth a hypothesis about brain structure by taking hints from the robotic program as well as from psychophysical results. The robotic program has the following functions: 1) forming sequences of assemblies of components in such a way that the sequences match inputs from its outside world, 2) keeping and retrieving sequences of assemblies of components in and out of its memory, 3) generalization, and 4) specialization. The generalization process finds common features and relations among various cases of the sequences, and the specialization process makes generalized sequences match a new instance of inputs. The paper explains that the robotic program acquires concepts about its world; the program describes the concepts in sequences of assemblies of components. Our hypothesis about brain structure is the following: the brain forms sequences of combinations of energy levels. Combinations of energy levels are like E_1 + E_2 = E = E_3 + E_4 + E_5. When a brain receives inputs from its outside, including motor activities, the energy generated by the inputs changes molecular fine structures and their energy levels. Combinations of changed energy levels make quantum entanglements occur and energy flow.
Molecular (and biological) changes of a somewhat larger scale (the Hebbian learning level) are invoked when the energy flow does not go further. The molecular changes of this larger scale make the energy flow further and do not occur again when the brain receives the same inputs the next time, since the changed molecular structure becomes a path for the energy flow invoked by the same inputs. Thus the molecular changes of the larger scale encapsulate the changes in the molecular fine structures. The quantum entanglements with molecular structural changes form paths of energy flow, and this explains the memory function of the brain. After a large number of combinations of energy levels are encapsulated, Combinations of Energy Levels that are Common to various cases (CELC) are invoked when entanglements occur upon receiving inputs. Energy kept in the combinations of energy levels (CELC) generates molecular changes of the larger scale and encapsulates the combinations of energy levels (CELC) in the same way as described above. The time sequence in the inputs is also represented in time-dependent quantum entanglements among combinations of energy levels encapsulated by molecular changes of the larger scale. Sequences of combinations of energy levels match sequences of energy levels invoked by sequences of inputs, but their time scales are different from those of the inputs. Combinations of energy levels represent roughly two types of properties: one type represents those specific to certain inputs (including motor activities), and the other type (or CELC) represents generalized properties. Given a set of new inputs at time T, quantum entanglements occur among the encapsulated energy levels (both specific and generalized) as well as the energy levels of the working area of the brain. Temporary entanglements among combinations of energy levels in the working area match the new inputs (specialization), and the next sequence describes inputs that the brain will probably receive at time T + \Delta T. Entanglements that describe the probable next inputs generate motor activities if no inputs are given from the outside world at time T + \Delta T. Since quantum entanglements among encapsulated combinations of energy levels represent past and generalized activities, the past and generalized activities make current motors active. Then one can claim that consciousness occurs because past and very general activities described in the combinations of energy levels invoke activities in a working area that generate a current motor activity. In other words, a described self invokes an instance of self at the next moment.   P 96  Why I'm not an "Orch OR"ian  Mohammadreza (Shahram) Khoshbin-e-Khoshnazar <khoshbin@talif.sch.ir> (Tehran, Iran)     In my opinion, the "Orch OR" model 1) violates conservation of energy and 2) does not match experience. 1. Let us look at the following problem: just after childbirth, a mammal can recognize her young. However, a human mom cannot; actually, she accepts any infant as her child! If a mom looks at her "false" infant, then she'll feel a "false" subjective experience. Please note that this situation is more complex than previously assumed! "Orch OR" can solve one part of this problem. There are zillions of universes for humans, and the number of possible space-time configurations is enormous, so the number of combinations of states is quite large. These choices for a human can be thought of as consciousness.
Notice, however, that there is only one real universe, and all other possible universes are false universes. A false (virtual) universe is allowed by the uncertainty principle, and therefore, like virtual particles, it exists for only a very short time. But a mom can create a virtual universe. This violates the law of conservation of energy. For the mammal, for which that consciousness is meaningless, there is no conservation-of-energy problem, since all of the parallel universes are the same (and actual). 2) The "Orch OR" model faces at least two important obstacles: first, quantum computation requires isolation (from decoherence), and second, it is unclear how a quantum state isolated within individual neurons could extend across membranes. To overcome the first problem, the model assumes that acetylcholine binding to muscarinic receptors acts through second messengers to phosphorylate MAP-2, thereby decoupling microtubules from the outside environment; to overcome the second problem, it assumes that the quantum state or field could extend across membranes by quantum tunneling across gap junctions. Therefore, if we block muscarinic receptors (with atropine), or impair gap junctions, we would expect abnormalities in cognitive behavior. I have not checked the first idea, but in 2001, Guldengel et al. produced a mouse with no gap junctions but apparently normal behavior. In addition, in the X chromosome-linked form of Charcot-Marie-Tooth disease, mutations in one of the connexin genes (connexin 32) prevent this connexin from forming functional gap junction channels. However, apparently, there is no reported abnormality in cognitive behavior.   P 97  An Operational Treatment of Mind as Physical Information: Conceptual Analogies with Quantum Mechanics  Sean Lee <seanlee@bu.edu> (Office of Technology Development, Boston University, Boston, MA)    A novel approach to consciousness as an operationally definable natural phenomenon within the framework of physical information is explored. Any meaningful connection of consciousness to the physical requires an unambiguous mapping of a space of subjective states onto information-bearing elements of a physical theory, independently of the former's final ontological, causal and semantic status. At the same time, any such operational definition requires, by the definition of the phenomenon in question, that the mapping be performed by the experiencing subject. I argue that such a 'self-measuring' act leads unavoidably to an 'uncertainty principle' that is analogous in some intriguing ways to Heisenberg's principle for quantum mechanics. If we choose to ignore this uncertainty, then with the help of a thought experiment we can define what I call the 'r-equivalence' classes and 'E theory' of consciousness, essentially addressing what Chalmers refers to as the Easy problem. If we instead address this uncertainty and seek an 'H theory' of the Hard problem, we are led to an account of subjectivity that exhibits two features strongly reminiscent of quantum theory: incomputability (randomness) and what we may think of as violations of local reality. While no direct connection between consciousness and quantum theory is postulated, the conceptual analogy may be made quite deep, perhaps with utility towards a future theory of consciousness.   
P 98  How Quantum Entanglement Provides Evidence for the Existence of Phenomenal Consciousness  Reza Maleeh, Afshin Shafiee, Mariano Bianca <smaleeh@uos.de> (Cognitive Science, University of Osnabrueck, Osnabrueck, Niedersachsen, Germany)    We believe that the rise of consciousness has to do with the concept of “information.” We therefore discuss a new concept of information, called “pragmatic information,” along the lines put forward by Roederer (2005), according to which information and information processing are exclusive attributes of living systems, related to the very definition of life. Thus, on this view, information plays no role in the abiotic world; physical interactions just happen; they are driven by direct energy exchange between the interacting parts and do not require any operations of information processing. Informational systems are open; that is, the energy needed for information processing must be provided by a source other than the sender or recipient. We show that this characteristic has to do with a specific interpretation of “intentionality” which, again, is an exclusive attribute of living systems. We use the concept of pragmatic information to explain, hypothetically, many phenomena such as perception, long- and short-term memory, thinking, imagination and anticipation, as well as what happens in living cells. But there is more to this. We argue that when the complexity of a system exceeds a certain minimum degree, in certain conditions to be discussed in detail, the mechanical and non-mechanical aspects of information are realized. The former involves matter and energy exchange, while the latter does not. The existence of the latter, to be considered the prototype giving rise to phenomenal consciousness, can be characterized by preparing entangled states of quantum particles. The idea is that the correlation between two entangled particles shows the intention of a living being who prepares an entangled state with an informational content that cannot be reduced to separated fragments. In this sense, we say that two entangled particles have an information-based relation without energy-matter signaling. This is a non-mechanical relation between the remote components of a composite system, due to a non-reducible information content prepared by a purposeful setup provider. So, planned systems (to be called derived informational systems, versus original ones) will be categorized as informational systems (mechanically or non-mechanically) just insofar as they show the intention of a living system. To sum up, the phenomenon of “entanglement” can be viewed from two different aspects. Firstly, the aspect which deals with the causal part of entanglement; from such a perspective, entanglement is, at least in principle, causally explainable in a contextual manner. Secondly, the aspect which has to do with the intentionality of the setup organizer: the purpose of the one who prepares an entangled state makes the phenomenon informational. Such a phenomenon will not happen in nature, because it needs an intentional living agent to separate the particles in a space-like manner. So, if we accept that there exists a non-mechanical informational relation between two entangled particles, it is just because of the intention of a setup organizer; otherwise it could also have happened in nature. The existence of such a non-mechanical informational relation can be considered evidence for the existence of phenomenal consciousness.   
P 99  Model of Mind & Matter: The Second Person  Marty Monteiro <j.monteiro1@chello.nl> (Fnd.Int'l.Inst.Interdisc.Integr., Amsterdam, Netherlands)    A general social model of the human being is launched, focusing on the relation between mind and body. In constructing the human being’s mental and bodily architecture, the other human being is incorporated. From the point of view of the 1st person “I” and the 2nd person “You”, the model pertains to the physical, mental and social process levels. From a growth-dynamic or evolutionary point of view, physical reality is taken as axiomatic in order to deduce the mental and social process levels. “Interaction” is the key concept modelling all process levels of human functioning. The model is built up in the reference frame of two thinking tools, namely ‘finality’ as well as ‘causality’. The design of the mind-matter model centres on the phenomenon of 'interaction' between object systems and between subject systems. Interaction is a simultaneous occurrence between events on the physical, mental and social levels. Applying a rule to deduce the mental and social process levels from the physical level departs from the question of ‘how’ the processes emerge and how their relationships to each other are established. In the reference frame of finality and causality, the process architecture on all levels provides a general basic social model of the human being. From an integrated point of view of the relation between the 1st and 2nd person, an attempt is made to unveil the mechanism of mind and matter. Recording of, and acting upon, environmental events of objects/subjects operates on the physical, mental and social levels. The physical level of stimuli is basic to the mental level. Through stimuli-interaction, mental cognition emerges. Cognition is primarily socially directed, to get feedback by perception extracting information from objects and subjects. The social level of recorded norms, in particular, is prerequisite for the formation and development of personality (long-term memory) and for the emergence of new values from personality for building up a culture. Attitude (short-term memory) mediates the attuning of communication and the matching of values to create culture. Personality and culture are the end results of human functioning. Modelling the architecture on the physical, mental and social levels, and the formation of personality and attitude-mediated culture, gives an answer to the question of ‘how’ these processes and systems emerge. It says nothing concerning the question of ‘why’ the human being performs his behaviour in that specific way. This issue refers to the household of energy flow in the reference frame of relative 'shortage-surplus'. From an imbalanced state, 'energy transaction' originates within a person, in order to bring about an energy balance in the framework of other objects/subjects. Through the exchange of psychophysical matter/energy of 'cost-benefit', the subjective experience of ‘pain-pleasure’ takes place through 'energy transformation', an operation of ‘fusion-fission’ between mind and matter. The hierarchical build-up of personality and attitude-mediated culture is, respectively, a contra-evolutionary and an evolutionary development. This development of personality towards mentalization on the one hand, and the materialization of common culture on the other, is not a linear event, but a discontinuous state transition.
The human being is aware afterwards of the results of these transformational operations, but he is not able to know what happens within the ‘gap’, the discontinuous transitional evolution of mind as well as matter. Therefore, personality development and natural/cultural evolution raise the ultimate question of whether or not a universal force exists as a "unifying-creating force".   P 100  Spreading culture on quantum entanglement and consciousness   Gloria Nobili, Teodorani Massimo <gloria.nobili@fastwebnet.it> (Physics, University of Bologna, Castel San Pietro Terme, Italy)    The subject of “quantum entanglement” does not, in general, seem to receive particular attention in Europe in the form of popularizing books or educational physics projects. These authors have started to spread this kind of scientific culture in both forms, including popularizing seminars. Concerning the entanglement phenomenon, new thought experiments have recently been outlined, new laboratory results have come out in the form of real discoveries in quantum optics, new studies on “bio-entanglement” and “global consciousness effects” have been carried out, and very sophisticated new ideas have been developed in the fields of quantum physics, biophysics, cosmology and epistemology. These authors intend to present their effort to diffuse this growing scientific knowledge widely. Beyond all this there is a long-term strategy aimed at inculcating new concepts in physics in order to trigger the interest of scholars at all levels, in what is probably the most innovative and interdisciplinary subject of human knowledge in this new millennium. In order to accomplish this difficult task, these authors are acting in the following ways: A) explaining, using intuitive examples, the basic physical mechanism (1, 2, 3, 4) of entanglement at the particle level; B) explaining all the possible ways in which entanglement may involve quantum or “quantum-like” non-local effects occurring also at the macro scale (2, 3, 4), represented by biological (DNA bio-computing, microtubules), psychophysical (consciousness, synchronicity and psi effects), astrobiological (neural spin entanglement), and cosmological (Bit Bang) environments; C) studying and spreading the scientific knowledge concerning alternative ways for the Search for Extraterrestrial Intelligence (5) and, specifically, preparing research projects regarding possible non-local aspects of SETI (NLSETI) and their applicability (4) on the basis of our physics knowledge and technology; D) preparing extensive plans for post-graduate courses in physics (6) with a special emphasis on “anomalistic physics”, brain biophysics and mathematics; E) training persons and students to reach optimal concentration states, by using well-tested techniques, in order to permit them to exploit their intellectual and consciousness potential to the maximum level. All of these educational and promotional actions are aimed at training people to understand the strict link existing between physics and consciousness in all of its aspects, in the light of a probable general phenomenon that occurs at all scales by involving (micro and macro) matter, mind and consciousness. A strategy plan containing all of these aspects in a self-consistent way will be schematically illustrated. REFERENCES. BOOKS ( http://www.macrolibrarsi.it/autore.php?aid=4428 ) 1) Teodorani, M. (2006) “Bohm – La Fisica dell’Infinito”. MACRO Edizioni. 2) Teodorani, M. (2006) “Sincronicità”. MACRO Edizioni.
3) Teodorani, M. (2007) “Teletrasporto”. MACRO Edizioni. 4) Teodorani, M. (2007) “Entanglement”. MACRO Edizioni. ARTICLES 5) Teodorani, M. (2006) “An Alternative Method for the Scientific Search for Extraterrestrial Intelligent Life: ‘The Local SETI’”. In: J. Seckbach (ed.) “Life as We Know It”, Springer, COLE Books, Vol. 10, pp. 487-503. 6) Teodorani, M. & Nobili, G. (2006) “Project for the Institution of an Advanced Course in Physics” (in Italian). E-School of Physics and Mathematics by Dr. Arrigo Amadori. http://www.arrigoamadori.com/lezioni/CorsiEConferenze/MasterFisica/Master_Fisica_MTGN_e-school.pdf   P 101  The Golden Section: Nature's Greatest Secret  Scott Olsen <olsens@cf.edu> (Philosophy & Comparative Religion, Central Florida Community College, Ocala, Florida)    "Resonance and Consciousness: buddhas, shamans and microtubules" -- Consciousness is one of the great mysteries of humanity. Like life itself, it may result from a resonance between the Divine (whole) and nature (the parts), exquisitely tuned by the amazing fractal properties of the golden ratio, allowing for more inclusive states of awareness. Penrose and Hameroff provocatively suggest that consciousness emerges through the quantum mechanics of microtubules. It is therefore a real possibility that consciousness may reside in the geometry itself, in the golden ratios of DNA, microtubules, and clathrins. Microtubules are composed of 13 tubulin protofilaments, and exhibit 8:5 phyllotaxis. Clathrins, located at the tips of microtubules, are truncated icosahedra, abuzz with golden ratios. Perhaps they are the geometric jewels seen near the mouths of serpents by shamans in deep sacramental states of consciousness. Even DNA exhibits a PHI (golden ratio) resonance, in its 34:21 angstrom Fibonacci ratio, and the cross-section through the molecule is decagonal (a double pentagon with associated golden ratios). Buddha said, "The body is an eye." In a state of PHI-induced quantum coherence, one may experience samadhi, cosmic conscious identification with the awareness of the Universe Itself.  P 102  Data reserve and recreating the memory in brain and the experimental witnesses suggesting it   Mojtaba Omid <mjtb_omid@yahoo.com> (Tabriz, Iran)    In this hypothesis, the digital system is first introduced, and the concept of zero and one (existence or non-existence) as the digital base is then generalized to two kinds of electromagnetic wave spectra emitted from the body, especially from the brain and its cortex. It is argued that the radio waves of the brain can be taken as zero, and the absence of radio waves, replaced by infrared waves related to the metabolism and high temperature of the brain, as one. According to the rules of special relativity and the difference in the speed of light (EM waves) in different media, in a certain spectrum of radio waves the passage of time equals zero, but it is not zero for the infrared waves. The data brought from the sensory organs to the brain and cortex are thus encoded and stored as zeros and, with the increased speed of time passing as a result of the reciprocity of the two radio and infrared spectra, the recreation of these codes is accomplished according to the processes described in the paper. Finally, in the second part of the article some evidence is presented through images produced by PET and fMRI equipment from the brains of patients with different mental and functional problems.
In these patients the normal metabolism of the brain is disrupted, which disorders the 0-and-1 system that forms the codes and their storage and recreation; these experimental observations are taken to support the hypothesis.   P 103  Embryological embodiment of protopsychism and Wave Function  Jean Ratte <jean.ratte@holoener.com> (Centre Holoénergétique, Montreal, Quebec, Canada)    According to Goethe, the human body is the most sensitive tool for detecting subtle processes that technological devices cannot. It is still true 200 years later. Despite all the interesting data, neurobiological imagery is invasive and alters the subtle aspects of the Mind process. For the last 20 years we have used a clinical, in vivo, non-invasive procedure that bridges this incommensurability between physical correlates and consciousness. This Vascular Semantic Resonance (VSR), a 3D spectrometer, brings the quantum microtubular level to macroscopic clinical detection, and shows symmetry or entangled resonance between map and territory, between a molecule and the entangled memory or name of the molecule, between syntax and semantics. The cardiovascular network is a harmonic oscillator manifold bringing the quantum microtubular level to macroscopic clinical detection. The vascular system and the microtubule are coupled harmonic oscillators (Abstract #330, Tucson 2, and #977, Tucson 3). There is amplification of microtubule and receptor-channel vibrations by the cardiovascular system, which works as a resonant cavity, an interferometer, a multiplexing waveguide, a manifold (see Roger Penrose, Road to Reality). Resonance between micro-oscillators such as pericorporeal pigments and cellular pigments is the physical basis of VSR. The vibratory equivalence of micro-oscillator pigments and ideograms, of phonemes and morphemes, is the biophysical basis of VSR, a complex biological spectrogram, accessing directly the meaning of signs, the qualia of quanta, resonating not only to the molecule but also to the entangled memory or functional signature of the molecule, detecting symmetry between Implicate and Explicate order. This method shows the human body to be a radar, an interferometer, not only for EM waves but for the 4 fundamental interactions in their matter and antimatter aspects. VSR shows a vibratory parallelism between embryological stages and the 4 fundamental interactions. Operative or vibratory identity is not ontological identity. The first undifferentiated stage, or Morula, resonates to Gravitation. The second stage, Blastula, differentiates into Ectoderm and Endoderm, with polarization of space into inside and outside. Ectoderm resonates to the EM field. Endoderm, polarization of anterior and posterior, resonates to the Weak Nuclear field. The third stage, or Gastrula, gives rise to the multiplexing mesoderm manifold, polarization of time with bilateral symmetry, which resonates to the Strong Nuclear field. This clinical tool gives new insights into the puzzle of consciousness by showing a multilevel vibratory commonality between Protopsychism, Wave function, Non-Locality, Curvature Tensor and Gravitational field, and Degeneracy. These concepts resonate like the undifferentiated or prelogical stage, or Morula. There is a vibratory scale resonance between the quantum level and the molecular, cellular and organism levels. DNA shares a vibratory identity with the Wave Function (W.F.) or Protopsychism. Transcriptase implements a W.F. collapse on RNA. Reverse Transcriptase brings back the DNA wave function, or Degeneracy. Gamete proliferation vibrates like the W.F.
and Fecundation like Collapse of the W.F. Potentialization, or the Non-Local, vibrates like the W.F., and Actualization, or the Local, vibrates like Collapse of the W.F. These clinical results indicate that the W.F. is not only a mathematical device but a true, subtle biophysical process like Protopsychism. The understanding of the vibratory commonality between quanta and qualia, between cosmogenesis and ontogenesis, between matter and antimatter fields such as Vitiello's double, requires a quantum leap from «neuroectodermic» geometry to «mesodermic» Riemann hypergeometry (see Roger Penrose). Morula is Prelogic. Ectoderm is the Logic of non-contradiction: wave or particle. Endoderm is the Logic of contradiction: wave and particle. Mesoderm is the Logic of crossed double contradiction: hypersymmetry matter-antimatter.   P 104  Life and Consciousness  Michael Shatnev <mshatnev@yahoo.com> (Akhiezer Institute for Theoretical Physics, NSC KIPT, Kharkov, Ukraine)    We first consider the observational problem in quantum mechanics and the notion of complementarity. Then, following Niels Bohr, we discuss the complementary approach to problems of quantum mechanics, biology, sociology, and psychology in more detail. In a general philosophical perspective, it is very important that, as regards analysis and synthesis in these fields of knowledge, we are confronted with situations reminiscent of the situation in quantum physics. Although, in the present case, we can be concerned only with more or less fitting analogies, we can hardly escape the conviction that in the facts revealed to us by quantum theory, which lie outside the domain of our ordinary forms of perception, we have acquired a means of elucidating general philosophical problems. Next we briefly argue that quantum mechanics is not complete and therefore may be completed. For this purpose a new mathematical framework for physics is needed, and we try to show how to find it. Finally, using these approaches, together with Deutsch's, Dyson's and Penrose's attitudes, we show how the notions of life and consciousness are connected.   P 105  Visions as a special form of an altered state of consciousness   Josiah Shindi <josiahshindi@yahoo.co.uk> (Psychology, Benue State University, Makurdi, Benue, Nigeria)    The paper reviews Biblical accounts of visions. Several persons who claimed to have seen visions in the last five years were interviewed using structured questions. Results indicate that there are some similarities between the visions reported in the Bible and those of the participants in the study. Precipitating and exacerbating factors in visions are discussed together with the visions' content. The evidence points to the notion of a special altered state of consciousness during the period of visions, specifically during the hypnagogic and hypnopompic states.  P 106  Metacognitive awareness: adopting new tasks for the remediation program for dyslexics  Malini Shukla, Jaison A. Manjaly <malini.shukla1@gmail.com> (Centre for Behavioral and Cognitive Sciences, University of Allahabad, Allahabad, Uttar Pradesh, India)    In this paper we aim to evaluate the role of metacognitive awareness in remediation programs adopted for dyslexics. The remediation program PREP (PASS Reading Enhancement Program) focuses on cognitive remediation of reading problems by improving the information-processing strategies that underlie reading, while at the same time avoiding the direct teaching of word-reading skills.
It also includes a self-comparison by children with dyslexia between their training-course experience and the new strategies they employed after becoming consciously aware of their deficits. It was observed that the dyslexics were using self-learning strategies which motivated them towards independent learning. These new self-learning techniques were adopted in step with the metacognitive awareness of their disability, and also initiated a comparative assessment of their disabilities with the peer group. This shows that PREP has helped them enhance their metacognitive skills by enabling them to control and manipulate cognitive processes, giving them knowledge about the regulatory skills, and showing them how to utilize these skills on the basis of being consciously aware of their deficit in reading. Thus their ability to monitor their own performance has given an impetus to their overall performance. Research (Tuner and Chapman, 1996) has shown that metacognitive regulation improves performance in a number of ways: better use of attentional resources, better use of existing strategies, and a greater awareness of comprehension breakdown. The remediation program showed that the formation of a sense of self helps dyslexics perform better. The awareness "I am disabled" encourages them to employ better learning techniques. In the light of these observations, we propose that the current structure of the remediation program PREP can be improved by including more tasks to enhance metacognitive awareness, together with tasks based on this newly evolved metacognitive awareness. We argue that the addition of these new tasks can improve the remediation program PREP.   P 107  A New Approach to the Problems of Consciousness & Mind  Avtar Singh <avsingh@alum.mit.edu> (Center for Horizons Research, Cupertino, CA)    Consciousness issues are addressed within the context of modern neuroscience and related problems in contemporary physics. Current theories of consciousness look towards information theory, information integration theory, complexity theory, neural Darwinism, reentrant neural networks, quantum holism, etc. to provide hints. These theories fall short of the rigor and quantitative measures that are normally required of a scientific theory. The most perplexing philosophical conundrums of the "hard problem" and "qualia" that afflict modern neuroscience can, it is argued, be resolved by a deeper understanding of the physics of the very small (below the Planck scale) and the very large (at the boundaries of the universe). The modern philosophy of mind proposes that consciousness is a higher-order mental state that monitors the first or base state possibly generated by the brain. This paper builds upon early approaches to consciousness wherein it was proposed that the state of self-consciousness is not a separate, higher-order consciousness of a conscious experience, but represents a continuum of the lower-order states generated by the brain experience. In such a larger context, many of the mysteries of physics and neuroscience can be explained with an integrated model. This paper proposes such an integrated model, providing a direct relationship between the physics concepts of space, time, mass, and energy, and the consciousness concepts of spontaneity and awareness. The observed spontaneity in natural phenomena, which include the human mind, is modeled as the higher-order or universal consciousness.
The integrated model explains recent observations of the universe and demonstrates that the higher-order consciousness is a universal rather than a biologically induced phenomenon. The neurobiological mind is shown to represent a subset of the complementary states of the prevailing higher-order universal consciousness in the form of the continuum of space-time-mass-energy. The proposed approach integrates spontaneity, or consciousness, into the existing and widely accepted theories of science to provide a cohesive model of the universe as one wholesome continuum. The model represents the essential reality of different levels and dimensions of experience, both implicit and explicit, consciousness and matter, to be seen as equivalent and complementary states of the same mass-energy known as the zero-point energy. The universal consciousness is shown to represent spontaneous kinetic energy of the extreme kind, which is the ultimate complementary state wherein everything in the universe is experienced as the zero-point energy field in a fully dilated space and time continuum.  P 108  Does attention mediate the apparent continuity of Consciousness? A change detection perspective   Meera Mary Sunny, Jaison A. Manjaly <meeramary1@gmail.com> (Center for Behavioural And Cognitive Sciences, Allahabad University, Allahabad, Uttar Pradesh, India)    Dennett (1991) argues that most theories of mind, irrespective of their ontological commitment, presuppose a Cartesian theater and continuity of consciousness. He claims there is no such theater where everything is re-presented. In other words, there is no boundary line which decides the onset of consciousness. He proposes the multiple drafts model of consciousness as an alternative to the Cartesian theater and to minimize the problem of continuity of consciousness. The multiple drafts model claims to show that the apparent continuity of consciousness results from the brain's ignoring of irrelevant or unavailable information, and not from 'filling in' as suggested by other theorists. This paper shows the inherent problems with this claim. We argue that if one can convincingly claim that attention is a continuous process, it can also be shown that the apparent continuity of consciousness results from this feature of attention. Dennett apparently downplays the role of attention in this unification. He looks at consciousness as a continuously edited draft, without a final published version, and attributes this to the dynamicity of consciousness. This paper shows the dynamicity of attention, and eventually the possibility that the apparent continuity of consciousness is a feature of the underlying attentional mechanisms, using results from an experiment that makes use of the change detection paradigm with hierarchical stimuli.   P 109  Implicit activities in auditory magnetoencephalography (MEG)  Yoshi Tamori, Noriyuki Tomita <yo@his.kanazawa-it.ac.jp> (Human Information System Laboratory, Kanazawa Institute of Technology, Hakusan-shi, Ishikawa, Japan)    It is well known that unimodal peak responses exist only at the beginning of MEG waves. Despite the existence of a continuous base tone (CBT: A4 = 440 Hz pure tone), the magnetoencephalographic (MEG) responses appear to sink into the background noise after the onset peaks (N1m and P2m). Even if introspective perception continues after the onset responses, the amplitude of the MEG response to secondary tone stimuli generally decreases.
It is unknown what kind of neural processing exists in such a silent activity period, in the sense that the MEG activity has vanished. Although such a silent activity period might reflect perceptual acclimatization, a corresponding representation or activity should exist in the brain as long as introspective perception persists. In order to investigate the features of this silent activity period, we added to the continuous base tone (or switched from the continuous base tone to) an extra pure tone of another frequency, five hundred milliseconds after the onset of the continuous base tone. All the sound stimuli in the present experiment were presented to the left ear. In the present study, a unimodal response appeared in the MEG waves one hundred milliseconds after the onset of the secondarily applied tone. This peak is considered to be the N100m counterpart of the secondarily applied tone. Current dipoles were estimated by an algorithm based on Savas law. The values of GOF (goodness of fit) for the adopted dipoles were larger than 95%. All the estimated current dipoles (ECDs) for N100m are located in the right primary auditory cortex (A1). The amplitude of the secondary peaks during the silent activity period appeared to depend on the distance, in terms of wave frequency, of the secondary tone from the continuous base tone. In the present study, the cosine component of the fixed A1 dipole for the overlapped secondary tone decreased as its frequency approached the CBT's. We chose several frequencies (e.g. A4# = 466 Hz; D4# = 311 Hz; G4# = 415 Hz) for the secondarily applied tones. The latency for the secondarily applied A4# sound was always larger than that for the other secondary sounds. Our whole-head MEG system consists of 160 axial gradiometers with SQUID sensors. The gradiometers have a 15 mm diameter and a 50 mm baseline, and are arranged in a radial manner around the helmet. All the subjects are right-handed and can discriminate the frequency difference of the presented sounds. The loudness of the presented sounds was calibrated/flattened at 70 dB(A), taking into account the perceptual loudness curve in ISO 226:2003. Noise contained in the measured magnetic fields was reduced by averaging over several sessions. The results suggest that some kind of implicit activity exists in the silent activity period of MEG responses. This frequency-dependent depression could be qualitatively explained by a tonotopically aligned inhibitory neural network model. The underlying mechanisms of the implicit activity, however, remain unknown. The relation of the frequency-dependent depression to consonance will be discussed in the presentation.   P 110  Anomalous light phenomena vs bioelectric brain activity   Massimo Teodorani, Gloria Nobili <mlteodorani@alice.it> (Cesena (FC), Italy)    We present a research proposal concerning the instrumented investigation of anomalous light phenomena that are apparently correlated with particular mind states, such as prayer, meditation or psi. Previous research by these authors (1, 2) demonstrates that such light phenomena can be monitored and measured quite efficiently in areas of the world where they are reported recurrently. Instruments such as optical equipment for photography and spectroscopy, VLF spectrometers, magnetometers, radar and IR viewers were deployed and used extensively in several areas of the world.
Results allowed us to develop physical models concerning the structural and time-variable behaviour of the light phenomena, and their kinematics. Recent insights and witness reports have suggested to us that a sort of “synchronous connection” seems to exist between plasma-like phenomena and particular mind states of experiencers, who seem to trigger a light manifestation very similar to the one previously investigated. The main goal of these authors is now the search for a concrete “entanglement effect” between the experiencer’s mind and the light phenomena, such that both aspects are monitored and measured using appropriate instrumentation. To this end, the test subject would be checked using portable neurophysiologic instruments such as an interactive brain visual analyzer (IBVA), a brain holotester and a skin-conductance detector. These measurements are intended to be carried out in optimal simultaneity with a high-resolution digital camera for photometry and photography, a videocamera, a low-resolution holographic grating for spectroscopy, a VLF-ELF computer-assisted spectrometer, a triaxial magnetometer, and a computer-controlled random event generator (REG). At the same time, high-precision timing is intended to be set up in order to check the level of simultaneity of the brain phenomenon and the light phenomenon, and the temporal and morphologic evolution of both. In both cases redundant calibrations and background-noise extraction procedures are planned. This is a quantitative research project in the fields of both photonic/plasma physics and biophysics, and it is based on well-tested previous experimental research on plasma light phenomena and on a theoretical biophysics background concerning the brain's electric activity. Most importantly, a correlation between VLF-ELF data and bioelectric activity is sought. This group already has at its disposal at least one “test subject” who is very willing to participate in this kind of experiment. The goal of this research project is twofold: a) to verify quantitatively the existence of one very particular kind of mind-matter interaction and to study in real time its physical and biophysical manifestations; b) to repeat the same kind of experiment using the same test subject in different locations and under various conditions of geomagnetic activity. REFERENCES. 1. Nobili G. (2002) “Possible bio-physical interference of the electromagnetic field produced by Hessdalen-like lights with human beings”. Workshop on Future Research on the Hessdalen Project, Hessdalen, August 10, 2002: http://hessdalen.hiof.no/reports/Workshop2002_Gloria_ABSTRACT.pdf 2. Teodorani M. (2004) “A Long-Term Scientific Survey of the Hessdalen Phenomenon”. Journal of Scientific Exploration, 18, 2, 217-251. http://www.scientificexploration.org/jse/articles/pdf/18.2_teodorani.pdf   P 111  Proto-experiences and Subjective Experiences  Ram Lakhan Pandey Vimal <rlpvimal@yahoo.co.in> (Neuroscience, Vision Research Institute, Acton, MA)    We present an argument for proto-experiences without extending physics. We define elemental proto-experiences (PEs) as the properties of elemental interactions. For example, a negative charge experiences attraction towards a positive charge; this "experience" is defined to be the PE of opposite charges during interaction. Similarly, PEs related to the four fundamental interactions (gravitation, electromagnetism, weak, and strong) can be defined.
Thus we introduce experiential entities in elements in terms of the characteristics of elemental interactions, which are already present in physics. We are simply interpreting these properties of interaction as PEs. One could argue that there is no shred of evidence for "what it's like" to be an electron being "attracted" to (say) a single proton. However, it is unclear what else an electron would "feel" towards a proton other than a force of attraction, and we define this as the PE of an electron for a proton. The experience (such as attraction/repulsion) of the ions that rush across the neural membrane during spike generation is called a neural PE. Neural PEs interact in a neural net, and neural-net PEs somehow emerge and get embedded in the neural net during development and sensorimotor tuning with external stimuli. A specific subjective experience (SE), for example redness, is selected out of the embedded neural-net color PEs in the visual V4/V8 red-green neural net when long-wavelength light is presented to our visual system. Similarly, when signals related to neural PEs travel along the auditory pathway and interact in the auditory neural net, auditory SEs emerge. Thus, the emergence of a specific SE depends on the context, the stimuli, and the specific neural net. In what way is our hypothesis different from the straightforward physicalist view (SEs are entities that emerge in neural nets from the interaction of non-experiential physical entities, such as neural signals)? [That view has led to the explanatory gap or 'hard problem'.] The difference is that we acknowledge the existence of experiential entities in physics, where the emergence of an SE from an experiential entity such as a PE is less 'brute' than its emergence from non-experiential matter. In what way is our hypothesis different from panpsychism [1]? Panpsychism requires extending physics by adding an experiential property to elements, which lacks evidence. Our hypothesis does not require extending physics; it simply interprets the existing and well-accepted properties of elemental interactions, which have a significant amount of evidence behind them and are the building blocks of the physical universe. Our hypothesis implies that non-experiential matter (mass, charge, and space-time) and the related elemental proto-experiences (PEs) co-evolved and co-developed, leading to neural nets and the related PEs, respectively. Furthermore, there could be three types of explanatory gaps, namely the gaps between (i) an SE and the object of the SE, (ii) an SE and the subject of the SE, and (iii) subject and object, where 'object' means an internal representation. The hypothesis is that an SE, its subject and its object are the same neural activity in a neural net, where a neural activity is an experiential entity in our framework. These gaps are closed if the above hypothesis is not rejected; the trio appears distinct in our daily lives, but this is a sort of illusion because internally they are the same neural activity; when information related to the ‘subject experiencing objects’ is projected outside, objects appear in 3D with respect to the reference subject. Moreover, an SE cannot be objectively measured; it requires subjective research; however, the relative effect of SEs, such as that in color discrimination, can be measured objectively. Our hypothesis (a) contributes to bridging the explanatory gaps because experiential entities are introduced, (b) minimizes the problem of causation because our framework is within the scope of physicalism, and (c) does not require extending physics.   
P 112  Disorders of consciousness in schizophrenia: a reverse look at the nature of consciousness   Serge Volovyk <sv3@duke.edu> (Department of Medicine, Duke University Medical Center, Durham, North Carolina)    Schizophrenia and consciousness represent the most challenging, mysterious and enigmatic of interwoven phenomena. Although the clinical picture of schizophrenia since Kraepelin has traditionally been assumed to spare consciousness, recent neurocognitive research has shown that certain functions of consciousness (sense of agency, self, memory, executive functions, insight, monitoring) can be impaired in schizophrenia, and that this may account for symptoms such as depersonalisation, hallucinations, self-fragmentation, disorders of memory, delusions of control, etc. Cognitive deficits are considered specific symptom domains of schizophrenia. These traits, seemingly unrelated at first glance, have a common molecular dynamic nature and common mechanisms. Cognitive impairments specific to schizophrenia, in the generalized sense of information processing, reflect a continuum of subtle dynamic molecular pathways, ranging from perturbations in the spatiotemporal homeodynamics of free-radical redox processes, with changing hemispheric biochemical dominance/accentuation, including alteration of nitric oxide-superoxide complementarity, responsive redox signaling networks, concomitant alterations in gene expression and transcription, and redox control of neurotransmission patterns, synaptic circuitry and plasticity, to changes in neurogenesis and functional hemispheric asymmetry. Free radicals, the primordial “sea” for life's origin, evolution and existence, induced by cosmic and terrestrial background radiation, are evolutionarily archetypal, ubiquitous, and omnipotent in the physiological-pathophysiological dichotomy of the brain/CNS. The dual immanent nature and functions of free radicals in the brain are based on their quantum-chemical dynamic charge-transfer/redox ambivalence (a spectrum of interactional nucleo-, electro-, and ambiphilicity); a corresponding spectrum of reactivity and selectivity; a subtle borderline norm-pathology dichotomy with a discontinuity threshold; physiological functional ambivalence and complementarity; and dynamic free-radical homeostasis. In this generalized framework, the globally stable average incidence rates of “core” schizophrenia (and of consciousness disorders as the opposite side of the phenomenon) may be considered at the molecular level as a quantum-chemical stochastic phenomenon, originally based on perturbation of free-radical redox brain signaling networks and a disorder of information processing, connected with the effects of the natural radiation background, seasonal variation in the geomagnetic field, and the terrestrial activity of solar cycles, superimposed upon the individual's immanent developmental trajectory.   P 113  New quantum approach to qualia, consciousness and the brain.  John Yates <uv@busi8.freeserve.co.uk> (London, United Kingdom)    In this paper I do not rule out the possibility of considering results such as those of Jahn, Walach, Radin and others, in the spirit that, unlike much early American scientific and technical opinion, I would not have effectively ruled out the possibility that the Wright Brothers had discovered aviation. At the same time, such results are certainly not paramount in the present considerations. My approach uses category theory and a McTaggart A series, as well as the conventional B series effectively used by Deutsch, Bohm and Penrose.
This sounds philosophically and physically more realistic, but at the present state of the art it may be required that the A series be a proper class. My theory will link relatively easily with any physically meaningful and reproducible NDE results which may be provided, for example, by NDE experiments like those of Fenwick and Grayson, and it has many other advantages. Dream precognition results and ESP are very much denied by sceptics and, on the whole, by physicists. On dreams, I certainly have not obtained precognition as such, but I have noted apparently peculiar effects not dissimilar in superficial appearance. In psychology it is necessary to remember that many conclusions have been drawn, and are repeatable, from work like that of Strogatz. I favour dynamical-systems psychology somewhat along the lines of Lange, but requiring an A-series philosophy. By adding some ideas due to Stickgold and Hobson, I have already obtained preliminary surprising results. Presently I am proceeding to look at a structure somewhat along the lines of Sprott's work on psychology. I believe that by ignoring the McTaggart A series, or by trying to subsume the A series within the B series, important opportunities are being lost, and that early calls on quantum theory may be being made when complex-systems theory could be more directly appropriate. http://ttjohn.blogspot.com/ presents the entire blog to date, including more work than that required here. The simplest appreciation of the situation may be that the present approach contains a past, a present and a future without further ad hoc additions, and so in a sense exhibits qualities generally recognised as certainly existing in human consciousness, but which are not obvious in theories that do not. It also allows the existence of a God or Gods and free will (or indeed hypothetical gods or free will) within its bounds, though it does not insist on their existence a priori; in this sense it is more appropriate to consciousness theory than a conventional physics theory would be, which almost excludes these factors, or a theological theory, which insists on them a priori. The absence of the possibility of free will in a physical theory suggests solipsism or incompleteness rather than some disproof of free will, and this is carefully avoided with the present approach, which nevertheless contains much mathematics, including all of quantum theory and high-energy physics, together with chaos and catastrophe theories where relevant.
Section 7.9: Exploring Eigenvalue Equations

The time-independent Schrödinger equation is an energy-eigenvalue equation. What does this mean?  Symbolically (this symbolic notation is called Dirac notation), the act of an operator acting on a state can be expressed by

A |α> = β |β>,   (7.17)

where A is an operator, β is typically a number, and |α> and |β> are two different state vectors. In general, therefore, the action of an operator on a state produces a different state. For special states, however, the final state is the original state and we have

A |a> = a |a>,   (7.18)

where |a> is an eigenvector or eigenstate of the operator A with eigenvalue a.  This equation is called an eigenvalue equation. To find energy eigenfunctions of the time-independent Schrödinger equation, we solve Hψ = Eψ and apply the boundary conditions. We can demonstrate the idea of operators, eigenvalues, and eigenvectors by using a 2 × 2 matrix to represent the operator,

A = | A11  A12 |
    | A21  A22 |   (7.19)

and a two-element column vector,

|a> = | a1 |
      | a2 |   (7.20)

to represent the state vector. An eigenstate of the operator A has the property that the application of A to the eigenvector results in a new vector that has the same direction as the original eigenvector and a magnitude that is a constant times the eigenvector's original magnitude. In other words: eigenvectors of A are stretched, not rotated, when the operator A is applied to them. Select a matrix to begin.  Restart. Drag the red vector around in the animation. The result of the matrix (operator) acting on the column vector (the state vector) is shown as the green vector. To make things easier, you may set the red vector's length equal to 1 (check the box to force the initial vector to be of unit size; a second box shows the paths of the vectors, for unit vectors only). What are the two eigenvectors and eigenvalues of each operator?
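The same check the animation performs graphically can be done numerically. Below is a minimal sketch in Python with NumPy; it is not part of the Physlet page itself, and the sample matrix is an arbitrary stand-in for whichever operator the animation offers.

import numpy as np

# A sample 2x2 operator; the entries are illustrative placeholders,
# not necessarily one of the matrices offered in the animation.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding (unit-length) eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for a, v in zip(eigenvalues, eigenvectors.T):
    # Check the eigenvalue equation A|a> = a|a>: the operator
    # stretches the eigenvector without rotating it.
    print("eigenvalue", round(float(a), 6), "eigenvector", v,
          "A@v == a*v:", np.allclose(A @ v, a * v))

For this sample matrix the eigenvectors point along (1, 1)/√2 and (1, −1)/√2 and are stretched by eigenvalues 3 and 1; dragging the red vector onto either of those directions in the animation is the graphical version of the same check.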
The Many Interpretations of Quantum Mechanics What is the ultimate nature of reality?  Are quantum effects constantly carving us into innumerable copies, each copy inhabiting a different version of the universe? Or do all those other worlds pop out of existence as mere might-have-beens? Do our particles surf on quantum waves? Or are we ultimately made of the quantum waves alone? Or do the waves merely represent how much information we could possess about the state of the world? And if the waves are just a kind of information, information about what? Or is the information all that there is—and all that we are?  Those are the kinds of questions in play when a physicist tackles the dry-sounding issue of “what is the correct interpretation of quantum mechanics?” About 80 years after the original flowering of quantum theory, physicists still don’t agree on an answer.  And although quantum mechanics is primarily the physics of the very small—of atoms, electrons, photons and other such particles—the world is made up of those particles. If their individual reality is radically different from what we imagine, then surely so too is the reality of the pebbles, people and planets that they make up.  As recounted by our December article, The Many Worlds of Hugh Everett by journalist Peter Byrne, 50 years ago the iconoclastic physics student Hugh Everett introduced the idea that quantum physics is incessantly splitting the universe into alternate branches. Byrne’s article talks about Everett’s life (did you know his son is the lead singer of the rock band Eels?) as well as about his theory and the “Copenhagen Interpretation” he aimed to supplant. But many other interpretations of quantum mechanics exist, and today Copenhagenists have more subtle variants to choose from than the one that Everett once called “a philosophic monstrosity.” Here is an all-too-short run-down on some of them.  The basic scenario an interpretation must address is when a quantum system is prepared in a combination of states known as a superposition. For example, a particle can be at both location A and location B, or, in the infamous thought experiment, Schrödinger’s quantum cat can be alive and dead at the same time. The problem is that when we observe or measure a superposition, we get but one result: our detector reports either “A” or “B,” not both; the cat would appear either very alive or very dead.  Copenhagen Interpretation  This interpretation (or variants of it) has long been the party line for quantum physicists. The Schrödinger equation describes how a wave function evolves smoothly and continuously over time, up until the point when our big, clunky measuring apparatus intervenes. The wave function enables us to predict, say, there’s a 60% probability we’ll detect the particle at location A. After we detect it at A or B, we have to represent the particle with a new wave function that conforms with the measurement result.  What bothers some people about this interpretation is the random, abrupt change in the wave function, which violates the Schrödinger equation, the very heart of quantum mechanics. Everett argued that this approach was philosophically a mess: it used two contradictory conceptual schemes to describe reality, the quantum one of wave functions and the classical one of us and our apparatus.  Many Worlds Interpretation  Everett’s theory. Also known as the relative state formulation.  The superposition of the particle spreads to the apparatus, and to us looking at the apparatus, and ultimately to the entire universe.
The components of the resulting superposition are like parallel universes: in one we see outcome A, in another we see outcome B. All the branches coexist simultaneously, but because they are completely non-interacting the “A” copy of us is completely unaware of the “B” copy and vice versa. Mathematically, this universal superposition is what the Schrödinger equation predicts if you describe the whole universe with a wave function.  What bothers people about this interpretation is its conclusion that we are perpetually dividing into multiple copies, which may have ghastly implications as well as being bizarre.  Bohmian Interpretation  Also known as the De Broglie–Bohm interpretation or the pilot wave interpretation.  This theory postulates that every particle not only has a wave function but also exists as an actual particle riding along at some precise but unknown location on the wave and being guided by it. How the wave guides the particle is described by a new equation that is introduced to accompany the standard Schrödinger equation. The randomness of quantum measurements comes about because we cannot know exactly where a particle started out. The theory was proposed by David Bohm in 1952 (a few years before Everett’s theory), extending a theory of Louis De Broglie’s from 1927.  Changing the Rules  Some theorists seek to find a mechanism that causes the “collapse” of the wave function from a superposition of possibilities to a single outcome. For example, Roger Penrose has proposed that gravitational effects may play this role. Other models, such as the Ghirardi-Rimini-Weber theory, introduce specific modifications to the Schrödinger equation. By differing from standard quantum theory, such models in principle might be falsifiable by experiment (or conversely, standard theory could be falsified in their favor).  Decoherence Theory  This is not an interpretation, but it is an important element of the modern understanding of quantum mechanics. It expands upon the kind of mathematical analysis that led Everett to his interpretation, because it analyzes the effect that stray quantum interactions with the surrounding environment have on a system in a superposition. The chief conclusion is that the almost unstoppable loss of information through these channels “decoheres” a quantum superposition, making it more like an ordinary classical state. It explains very well why we see the classical world that we do, and clarifies the requirements to keep quantum effects manifest in the lab.  Copenhagenists can point to decoherence as an explanation of what makes large classical systems different from small quantum systems (in general, large systems decohere much more readily and rapidly than tiny ones). Everettians can point to it as a more complete explanation of how the parallel branches form and become independent. But best of all, decoherence can be studied experimentally, and a very active area of quantum research is confirming it and exploring it in ever greater detail.  Consistent Histories  This scheme analyzes sequences of states of a system (which may include the whole universe), to find what questions can be consistently answered about the system, such as “was the particle at A or B at time T?” The measurement problem, however, is not resolved: the question of which histories actually happen remains a matter of probabilities just as with the standard Copenhagenist approach.  Is it Real?  
In some respects the decision between a Copenhagenist and an Everettian viewpoint boils down to a basic question: Is the wave function real or is it just information? If it is “real”—in some sense the universe really consists of quantum waves propagating around—then one tends to be driven to an Everettian viewpoint; the “collapses” that wave functions must undergo to produce the one reality that we see are too problematic. But if the wave function is just information, for example, a representation of what an experimenter knows about a system, then that “collapse” is completely natural. Imagine the standard classical scenario of flipping a coin. Before you look at it, your knowledge of its state is “50% chance of heads, 50% chance of tails.” When you look, your knowledge instantaneously changes to, say, “100% heads, 0% tails.”  “Shut Up and Calculate!”  Some physicists talk of the “shut up and calculate interpretation”: ignore the philosophical puzzle of how the classical and the quantum coexist and use the Schrödinger equation (and all the subsequent mathematical developments of quantum theory) to compute quantities of practical interest. These include energy levels of atoms; predictions for particle collider experiments; the properties of semiconductors, superconductors and other materials; and so on. It is all that most physicists ever need.  Transactional Interpretation  This interpretation has waves traveling forward and backward in time, setting up standing waves, for example between an emitter of a particle and its subsequent detector. It was proposed by John G. Cramer (physicist and science fiction author) in 1986 and claimed by him to provide insight into puzzles such as wave function collapse and the Schrödinger’s cat experiment. These insights have led Cramer to pursue an experiment to try to demonstrate the sending of signals backward in time (which most quantum physicists will tell you is impossible if standard quantum mechanics is correct).
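To make the measurement scenario concrete, here is a minimal sketch in Python with NumPy of the statistics that every interpretation above must reproduce. The 60/40 split echoes the example in the Copenhagen paragraph; the dictionary representation of the state is purely illustrative, not a standard library API.

import numpy as np

rng = np.random.default_rng(0)

# A superposition over locations A and B. The amplitudes are chosen
# so that |amp_A|^2 = 0.6 and |amp_B|^2 = 0.4, matching the "60%
# probability we'll detect the particle at A" example above.
state = {"A": np.sqrt(0.6), "B": np.sqrt(0.4)}

def measure(state):
    # Born rule: outcome k occurs with probability |amplitude_k|^2;
    # afterwards the state is replaced by ("collapses" to) the
    # wave function consistent with the result.
    outcomes = list(state)
    probs = [abs(state[k]) ** 2 for k in outcomes]
    result = str(rng.choice(outcomes, p=probs))
    collapsed = {k: (1.0 if k == result else 0.0) for k in outcomes}
    return result, collapsed

results = [measure(state)[0] for _ in range(10_000)]
print("fraction of 'A' outcomes:", results.count("A") / len(results))  # ~0.6

Note what the simulation does not settle: it reproduces the observed frequencies while staying silent on whether the collapse is physical (Copenhagen, Ghirardi-Rimini-Weber), apparent (many worlds, decoherence), or a mere update of information, which is precisely the disagreement the article describes.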
Section 1.6: Exploring the Input of Data: Complex Expressions

Complex functions are absolutely necessary to describe quantum-mechanical phenomena. Quantum-mechanical time evolution is governed by the Schrödinger equation,¹ which is itself complex, thus yielding complex solutions. In many exercises you will be expected to enter a formula to control the animation (position and time are given in arbitrary units).

In this Exploration, you are to enter the real part (the blue curve on the graph) and the imaginary part (the pink curve on the graph) of a function, fRe(x,t) and fIm(x,t), for t = 0. Once you have done this, the time evolution of the function is governed by the form of the function you have chosen and the “Resume” and “Pause” buttons.

Besides entering [fRe(x,t), fIm(x,t)], the real and imaginary components of the function, you will also be asked to enter the function in its magnitude and phase form, f(x,t) = A(x,t)e^{iθ(x,t)}, where A and θ are real functions. The default function for this Exploration is [cos(x − t), sin(x − t)], or ψ(x,t) = e^{i(x − t)}, which is called a plane wave.² In the text box you can enter a complex function in magnitude and phase form. Try it for the plane wave, exp(i*(x−t)), to see if you get the same picture as above.

Input the following functions for the real and imaginary parts of f(x,t) in the first animation, then determine what amplitude and phase form you have to enter into the text box of the second animation to mimic the results you saw in the first animation.

1. real = exp(-0.5*(x+5)*(x+5))*cos(pi*x) | imaginary = exp(-0.5*(x+5)*(x+5))*sin(pi*x)
2. real = sin(2*pi*x)*cos(4*t) | imaginary = sin(2*pi*x)*sin(4*t)

Try some other complex functions for practice.

¹ By the Schrödinger equation we mean what is often called the time-dependent Schrödinger equation, since this is the Schrödinger equation.
² For example, the complex function z(x) = e^{ix} = cos(x) + i sin(x), and z(x) = 1/(x + i) = x/(x² + 1) − i/(x² + 1).
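If you want to check an amplitude-and-phase answer away from the animation, a short script will do it. Here is a minimal sketch (Python with NumPy; the factorization into A and θ shown in the comments is one consistent choice worked out here, not an answer supplied by the text) for practice function 1:

import numpy as np

x = np.linspace(-10, 10, 2001)

# Practice function 1 at t = 0, entered as [real, imaginary] parts.
f_re = np.exp(-0.5 * (x + 5) ** 2) * np.cos(np.pi * x)
f_im = np.exp(-0.5 * (x + 5) ** 2) * np.sin(np.pi * x)

# Candidate magnitude-and-phase form A(x) e^{i theta(x)},
# with A(x) = exp(-0.5*(x+5)^2) and theta(x) = pi*x.
A = np.exp(-0.5 * (x + 5) ** 2)
theta = np.pi * x
f = A * np.exp(1j * theta)

# If the factorization is right, both forms agree pointwise.
print(np.allclose(f.real, f_re), np.allclose(f.imag, f_im))  # True True

The same check works for practice function 2 with sin(2*pi*x) as the magnitude and 4*t as the phase, with the caveat that where sin(2*pi*x) is negative you must either let A go negative or absorb the sign into the phase as an extra π.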
Wave Function

A wave function (or wavefunction) is a probability amplitude in quantum mechanics describing how a particle or a system of particles behaves in a given quantum state. The probability amplitude is distributed as a waveform through space and time, reflecting the unifying concept of wave-particle duality; hence the name wavefunction. Generally it is a function of position and time, or momentum and time, or rotation, that returns the probability amplitude of a position or momentum for a subatomic particle. Sometimes the wavefunction may not depend on time, but it always depends on space. As a function of a space, it maps the possible states of the system into the field of complex numbers. The laws of quantum mechanics (the Schrödinger equation) describe how the wave function evolves over time.

The most common symbols are ψ or Ψ (Greek lower-case and capital psi, used interchangeably). The squared modulus of the (complex-valued) wavefunction, |ψ|², equals the probability density (not just the probability) of finding a particle in an infinitesimal space element surrounding a point in space and time; this is the Born interpretation of the wavefunction.[1] The SI units for ψ are unusual, containing a square root of a metre: m^{−d/2}, where d is the dimension of space (d = 1 in one dimension, and so on). The square root appears so that the units of the probability density |ψ|² are m^{−d}, as they should be: a probability is dimensionless, and a probability density is probability per unit of space.

For example, in an atom with a single electron, such as hydrogen or ionized helium, the wave function of the electron provides a complete description of how the electron behaves for a given quantum state. It can be decomposed into a series of atomic orbitals which form a basis for the possible wave functions. For multi-electron atoms (more than one electron), or any system with multiple particles, the underlying space is the set of possible configurations of all the electrons, and the wave function describes the probability amplitude of those configurations. Simple examples of wave functions appear in common quantum mechanics problems: the particle in a box and the free particle (or a particle in an infinitely large box).
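As a concrete illustration of the units argument, here is a minimal sketch (Python with NumPy; the Gaussian packet and its width are an illustrative choice, not taken from the text above) showing that a correctly normalized one-dimensional ψ, with units of m^{−1/2}, gives a |ψ|² that integrates to a dimensionless 1:

import numpy as np

sigma = 2.0                                   # packet width, in metres
x = np.linspace(-40.0, 40.0, 100001)          # positions, in metres
dx = x[1] - x[0]

# Gaussian packet; the prefactor carries the units of m^{-1/2}.
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

density = np.abs(psi) ** 2                    # probability density, m^{-1}
print((density * dx).sum())                   # ~1.0, dimensionless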
Quantum Physics

Wave Function for Classical Mechanics

Authors: Janne Mikael Karimäki

In this paper the relationship between classical physics and quantum physics is studied by introducing a partial differential equation which describes classical mechanics but looks very similar to the Schrödinger wave equation of quantum mechanics. This work is largely based on David Bohm's causal interpretation of quantum mechanics, but is in some sense complementary to it. In Bohm's theory the Schrödinger wave equation is used to derive classical-looking equations of motion for quantum physics. Here exactly the opposite is done: the equations of classical physics are put into a form resembling the Schrödinger equation of quantum physics.

Comments: 9 pages. Other people have had this idea before me. This may still be interesting for some people interested in the foundations of physics.
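For orientation, the standard bridge between the two descriptions runs through the polar (Madelung/Bohm) decomposition; this is a textbook result, not material from the paper itself, whose stated program is to traverse it in the opposite direction. Writing \psi = R\, e^{iS/\hbar} and substituting into the Schrödinger equation separates it into real and imaginary parts:

\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R} = 0, \qquad \frac{\partial (R^2)}{\partial t} + \nabla \cdot \left( R^2\, \frac{\nabla S}{m} \right) = 0

The last term of the first equation is Bohm's quantum potential Q; dropping it leaves exactly the classical Hamilton–Jacobi equation together with a continuity equation. Running the correspondence in reverse, that is, packaging the classical Hamilton–Jacobi dynamics as a linear-looking wave equation, is the move the abstract describes.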
Collaborative Refutation

At least eight people—journalists, colleagues, blog readers—have now asked my opinion of a recent paper by Ross Anderson and Robert Brady, entitled “Why quantum computing is hard and quantum cryptography is not provably secure.”  Where to begin? 1. Based on a “soliton” model—which seems to be almost a local-hidden-variable model, though not quite—the paper advances the prediction that quantum computation will never be possible with more than 3 or 4 qubits.  (Where “3 or 4” are not just convenient small numbers, but actually arise from the geometry of spacetime.)  I wonder: before uploading their paper, did the authors check whether their prediction was, y’know, already falsified?  How do they reconcile their proposal with (for example) the 8-qubit entanglement observed by Haffner et al. with trapped ions—not to mention the famous experiments with superconducting Josephson junctions, buckyballs, and so forth that have demonstrated the reality of entanglement among many thousands of particles (albeit not yet in a “controllable” form)? 2. The paper also predicts that, even with 3 qubits, general entanglement will only be possible if the qubits are not collinear; with 4 qubits, general entanglement will only be possible if the qubits are not coplanar.  Are the authors aware that, in ion-trap experiments (like those of David Wineland that recently won the Nobel Prize), the qubits generally are arranged in a line?  See for example this paper, whose abstract reads in part: “Here we experimentally demonstrate quantum error correction using three beryllium atomic-ion qubits confined to a linear, multi-zone trap.” 3. Finally, the paper argues that, because entanglement might not be a real phenomenon, the security of quantum key distribution remains an open question.  Again: are the authors aware that the most practical QKD schemes, like BB84, never use entanglement at all?  And that therefore, even if the paper’s quasi-local-hidden-variable model were viable (which it’s not), it still wouldn’t justify the claim in the title that “…quantum cryptography is not provably secure”? Yeah, this paper is pretty uninformed even by the usual standards of attempted quantum-mechanics-overthrowings.  Let me now offer three more general thoughts. First thought: it’s ironic that I’m increasingly seeing eye-to-eye with Lubos Motl—who once called me “the most corrupt piece of moral trash”—in his rantings against the world’s “anti-quantum-mechanical crackpots.”  Let me put it this way: David Deutsch, Chris Fuchs, Sheldon Goldstein, and Roger Penrose hold views about quantum mechanics that are diametrically opposed to one another’s.  Yet each of these very different physicists has earned my admiration, because each, in his own way, is trying to listen to whatever quantum mechanics is saying about how the world works.  However, there are also people all of whose “thoughts” about quantum mechanics are motivated by the urge to plug their ears and shut out whatever quantum mechanics is saying—to show how whatever naïve ideas they had before learning QM might still be right, and how all the experiments of the last century that seem to indicate otherwise might still be wiggled around.  Like monarchists or segregationists, these people have been consistently on the losing side of history for generations—so it’s surprising, to someone like me, that they continue to show up totally unfazed and itching for battle, like the knight from Monty Python and the Holy Grail with his arms and legs hacked off.
(“Bell’s Theorem?  Just a flesh wound!”) Like any physical theory, of course quantum mechanics might someday be superseded by an even deeper theory.  If and when that happens, it will rank alongside Newton’s apple, Einstein’s elevator, and the discovery of QM itself among the great turning points in the history of physics.  But it’s crucial to understand that that’s not what we’re discussing here.  Here we’re discussing the possibility that quantum mechanics is wrong, not for some deep reason, but for a trivial reason that was somehow overlooked since the 1920s—that there’s some simple classical model that would make everyone exclaim,  “oh!  well, I guess that whole framework of exponentially-large Hilbert space was completely superfluous, then.  why did anyone ever imagine it was needed?”  And the probability of that is comparable to the probability that the Moon is made of Gruyère.  If you’re a Bayesian with a sane prior, stuff like this shouldn’t even register. Second thought: this paper illustrates, better than any other I’ve seen, how despite appearances, the “quantum computing will clearly be practical in a few years!” camp and the “quantum computing is clearly impossible!” camp aren’t actually opposed to each other.  Instead, they’re simply two sides of the same coin.  Anderson and Brady start from the “puzzling” fact that, despite what they call “the investment of tremendous funding resources worldwide” over the last decade, quantum computing still hasn’t progressed beyond a few qubits, and propose to overthrow quantum mechanics as a way to resolve the puzzle.  To me, this is like arguing in 1835 that, since Charles Babbage still hasn’t succeeded in building a scalable classical computer, we need to rewrite the laws of physics in order to explain why classical computing is impossible.  I.e., it’s a form of argument that only makes sense if you’ve adopted what one might call the “Hype Axiom”: the axiom that any technology that’s possible sometime in the future, must in fact be possible within the next few years. Third thought: it’s worth noting that, if (for example) you found Michel Dyakonov’s arguments against QC (discussed on this blog a month ago) persuasive, then you shouldn’t find Anderson’s and Brady’s persuasive, and vice versa.  Dyakonov agrees that scalable QC will never work, but he ridicules the idea that we’d need to modify quantum mechanics itself to explain why.  Anderson and Brady, by contrast, are so eager to modify QM that they don’t mind contradicting a mountain of existing experiments.  Indeed, the question occurs to me of whether there’s any pair of quantum computing skeptics whose arguments for why QC can’t work are compatible with one another’s.  (Maybe Alicki and Dyakonov?) But enough of this.  The truth is that, at this point in my life, I find it infinitely more interesting to watch my two-week-old daughter Lily, as she discovers the wonderful world of shapes, colors, sounds, and smells, than to watch Anderson and Brady, as they fail to discover the wonderful world of many-particle quantum mechanics.  So I’m issuing an appeal to the quantum computing and information community.  Please, in the comments section of this post, explain what you thought of the Anderson-Brady paper.  Don’t leave me alone to respond to this stuff; I don’t have the time or the energy.  If you get quantum probability, then stand up and be measured! 176 Responses to “Collaborative Refutation” 1. 
Mateus Araújo Says: I think the issue is that most of us don’t have eight people nagging us to comment on obviously wrong papers, so we just do what you probably want to do: ignore them. 2. Ashley Says: I’m not sure if I have anything further than the arguments you’ve made already to say about the main message of the paper. But here’s a comment on a more trivial point: at the end of Section 1, the authors claim that “Researchers are now starting to wonder whether geometry affects entanglement and coherence; the first workshop on this topic was held last year”. I attended the workshop they cite, which was emphatically not about this issue. As can be seen from the schedule, the workshop was concerned with mathematical questions to do with the geometry of quantum states in the standard Hilbert space picture of quantum mechanics, rather than any physical issues concerning whether the underlying qubits were arranged in a linear trap, etc. 3. Scott Says: Mateus and Ashley: Thanks!! 4. Douglas Knight Says: From Lubos Motl, that’s high praise. Are there even 100 living people he has described in such positive terms? So it’s not at all ironic that this would bias you to agree with him. 5. Alex Says: Just ignore them. I have to admit I have great fun watching you debunk Joy Christians and the likes, but in the end, it is a futile and time-consuming task. It’s best just to ignore them and enjoy our little treasures. They’re so much more interesting! 6. Anon J. Mouse Says: Unfortunately, I think it’s an example of this phenomenon: Ross Anderson is quite well known in computer security, mainly for his work on banking security. In fact, if you hadn’t written this piece I probably would have been the ninth person to ask you about it — it was clearly wrong, but from someone without a prior history of crankiness. It’s always disappointing when someone you respect does something silly like this. 7. Scott Says: Anon J. Mouse #6: Thanks! I intentionally didn’t look up who Anderson and Brady were before writing the post, since I didn’t want that to bias me. But, yes, the amount of respectful attention this obviously-wrong paper seemed to be getting did surprise me. Incidentally, I just found Anderson’s blog, which includes a comment by Jonathan Oppenheim making substantially the same points as in my post. 8. Greg Kuperberg Says: There is another point to make about scientific revolutions such as Newton’s laws or quantum mechanics. Yes, they can be supplanted or revised, but it’s not as if there is any turning back. The new revolution is almost always even less palatable to the old guard than the old one. (OTOH I do not understand the statement that BB84 does not use entanglement. I guess that there are non-entanglement versions, but it looks like the original BB84 does use entangled Bell pairs.) 9. Scott Says: Greg #8: No, BB84 just requires sending individual, unentangled qubits in one of the four states |0⟩, |1⟩, |+⟩, or |-⟩. (Indeed, the lack of any need for entanglement is one of the main reasons why the protocol is already practical today.) Interestingly, I understand that many of the security proofs for BB84 introduce entanglement as a formal convenience, but the entanglement never appears in the actual protocol itself. 10. Greg Kuperberg Says: How about that! I skimmed the link and totally misread it. Of course you’re right.
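A toy sketch of the prepare-and-measure structure described in comment #9 (Python; the run length and helper names are illustrative): Alice encodes random bits in randomly chosen bases, Bob measures in his own random bases, and the sifted key keeps only the positions where the two bases happened to match. Nothing in it ever creates an entangled state; each transmitted qubit is fully described by one of |0⟩, |1⟩, |+⟩, |−⟩.

import random

n = 32  # illustrative number of transmitted qubits

# Alice picks a random bit and a random basis ('Z' -> {|0>,|1>}, 'X' -> {|+>,|->}).
alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("ZX") for _ in range(n)]

# Bob measures each arriving qubit in his own randomly chosen basis.
bob_bases = [random.choice("ZX") for _ in range(n)]

def measure(bit, prep_basis, meas_basis):
    # Ideal single-qubit rule: matching bases reproduce the encoded bit;
    # mismatched bases give a uniformly random outcome.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

bob_bits = [measure(b, pb, mb)
            for b, pb, mb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: compare bases over the public channel, keep matching positions.
sifted = [(a, b) for a, b, pb, mb
          in zip(alice_bits, bob_bits, alice_bases, bob_bases) if pb == mb]
assert all(a == b for a, b in sifted)   # no noise and no Eve: keys agree
print(len(sifted), "sifted key bits out of", n)

A real deployment adds eavesdropper detection by sacrificing a subset of the sifted bits to check for disturbance, plus error correction and privacy amplification; none of those steps requires entanglement either.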
11. Yury Says: “Like monarchists or segregationists, these people have been consistently on the losing side of history for generations…” Unfortunately, I am not sure that monarchists or segregationists have been consistently on the losing side… At least, those who are against democracy and human rights often win — just look at Ancient Greece, Ancient Rome, independent Italian City-States, many democratic European governments in the first half of the 20th century (which lost to dictatorships), or modern Russia (which is now much less democratic than it used to be 15 years ago). Also I’m afraid that now science is losing to pseudoscience. Most people in the world believe that “evolution is just a [wrong] theory,” “Big Bang never happened,” homeopathy works and biofields exist. It is surprising, however, that this paper attracted that much attention. I try to ignore articles that are based on clearly wrong assumptions like Quantum Mechanics or General Relativity is wrong, evolution never happened etc. Why do people pay attention to such papers :-( ? Thanks for an interesting post! 12. Scott Says: Yury #11: Good point! I should have clarified that monarchists, segregationists, and anti-QM folks have been consistently on the intellectually losing side. But they can win plenty of shallow “victories,” just as con-men or street thugs can. 13. Luboš Motl Says: LOL, Scott. I am sure your thinking engines are good enough to see eye-to-eye with me. By remembering my unflattering quote, you’re also showing some sense of history and long-term memory. More than 6 years ago, the unflattering words were revealed because of your decision to write anything (including any lie) about quantum gravity and string theory that someone pays you for. I don’t know whether this decision to be bought still holds. If it does, my assessment of course still holds as well whether or not we see eye-to-eye with one another. 14. Scott Says: Lubos says he’s sure my “thinking engines” are good enough to see eye-to-eye with him! Callooh! Callay! This might be the single greatest compliment I’ve ever received. And Lubos, in return for your generous compliment, I have some good news. As a result of major life changes—getting married, having a baby, etc.—I have abandoned my previous materialistic, money-grubbing ways. I’m now strictly a man of principle. And as such, no amount of money could ever induce me to abandon my total, principled commitment to Loop Quantum Gravity. OK, OK, I’m kidding about the last part. In fact, I have a much better appreciation now for the achievements of string theory than I did back in 2006, partly due to a meeting in Florence where Brian Greene spent 4 hours explaining them to me and others. I came away genuinely impressed, convinced that string theory and especially AdS/CFT are unequivocally a step forward in our understanding of the universe, even though we have a great deal more to learn. I’m not ready to say that alternative ideas like LQG are garbage and have nothing worthwhile to contribute, let alone that global warming is a sham, but maybe Lubosification is a process that will happen to me one step at a time. :-) 15. Greg Kuperberg Says: I suspect that LQG isn’t absolute garbage either, and I also cannot compare it to string theory on any direct authority either. But speaking as an external observer, it really does look like LQG is (or maybe was) a Nader-like quest to compete with string theory.
I.e., a quest in which simply being invited into the debate is the first major goal, even if it has no good consequence or even negative consequences for the actual outcome of the debate. Of course, also speaking as an external observer, global warming looks like anything but a sham, in fact global warming denial looks like a sham. 16. Scott Says: Greg #15: My understanding of string theory is that it’s “what you’d inevitably come up with” if you took quantum field theory and perturbation theory as your fundamental starting points, then tried to tame divergences by replacing the point particles by extended objects. When you do that, you get some wonderful things that weren’t explicitly put in (e.g., the graviton), but also various aspects that seem to require ugly kludges to make them consistent with observed reality. Meanwhile, my understanding of LQG is that it’s “what you’d inevitably come up with” if you took GR and its demand for background-independence as your fundamental starting points, and tried to create a quantum theory satisfying that demand while leaving aside the details of particle physics. When you do that, you get something wonderful that wasn’t explicitly put in (spacetime discreteness), but also various aspects that seem to require ugly kludges to make them consistent with observed reality. If I’m right, then despite their incompatibility both with each other and (probably) with the ultimate truth, neither string theory nor LQG is nearly as “arbitrary” as they might seem to an outsider. If we had to pick one of the two that’s had more technical successes, that would be string theory, but that doesn’t mean LQG has had no technical successes. 17. Greg Kuperberg Says: Scott – Except with one colossal difference: One is conjectured to be mathematically viable, and the other one is conjectured not to be. 18. Greg Kuperberg Says: To be more precise, superstring theory satisfies a formidable array of mathematically rigorous consistency checks. It is either entirely or very nearly rigorously defined as a perturbative model of quantum gravity (in 9+1 dimensions). There are a ton of technical mathematical successes. Whereas with LQG, I’m not sure that there really are any technical successes. Some consistency checks have been claimed/published, but there are arguments that they are all superficial. I have also heard that renormalization theory speaks against the viability of a macroscopic limit of LQG, although I don’t know a whole lot about it. 19. Ajit R. Jadhav Says: Yesterday, when I first saw this post, not a single comment had yet made an appearance. The only thing I could think of, by way of a reply, was the following old one which I happened to remember, wanted to write down as my answer, but, somehow, didn’t (I decided to wait for other comments to appear, first). Anyway, the joke goes: Masochist to sadist: Oh, please, please, hit me! Sadist to masochist: No, I won’t! Ok, I will try to read that paper. … Yeah, in a way, I have known of that dancing droplet thingie since the time it came out, and wasn’t impressed much (if at all) by it. … Anyway, Scott’s point #2 and #3 seem to be right on. 20. Mkatkov Says: Scott: … Lev Tolstoy: Every happy family is similar, whereas any unhappy family is unhappy in a different way. Charles Babbage: A different kind of physics was discovered (and was actually necessary) before classical computers became practical. 21. 
Quantum Cowboy Says: I have to admit, I was pretty shocked to see this paper, and it makes you wonder about this guy. I get crackpots occasionally emailing me with their theory of how Bell’s theorem is wrong, or relativity is wrong, or quantum computation/crypto is wrong. But rarely do you find a claim of all three! Plus the mention of Bohmian mechanics and black holes in a quantum paper pretty much ticks five of the seven crackpot boxes. The only thing that’s missing is that it be written in several different fonts, and start with an introduction about how the author is a misunderstood genius and that mainstream scientists are too conservative to understand the truth of his theory, but that just like Einstein, he’ll be proven right in the end. People like Ron Rivest were hyping up this thing. Amazing! 22. Henning Dekant Says: Scott #16: a very nice summation of the String and LQG efforts. I will probably have to quote this at some point. As to the paper, I believe there is something intrinsically sinister going on when people try to approach QM from a topological angle. Apparently it makes people get all wobbly in the head. How else to explain Joy Christian and now these apparently respectable fellows, pushing these rather strange ideas? 23. Ajit R. Jadhav Says: Oops. In my above reply, please take it as #1 and #2 (of Scott’s points). Those two seem stronger points than #3 to me, with #2 being the strongest. (Above, I just got the point numbers wrong). … Sorry about that. … BTW, whenever any new view or theory to resolve the quantum riddles is put forth, I invariably end up wondering how a simple computational model simulating the essential physics of it might look like. Ditto here. The simulation being presented here, if it can be called that, is physical. That, in part, is a problem: with this kind of a model, the authors don’t have to directly delve into how the boundary/initial conditions are to be handled. Writing a C++ program would force them to be explicit with respect to all such details. If such a program is presented, then our (my!) task will become that much easier: the only remaining task will be to examine the points on which the conventional (say the Copenhagen) view and the new view differ. By its nature, a C++ simulation will have to capture the new concepts (objects/classes), and make them work in a new way (methods/algorithms). This nature of the computational simulation forces one to be explicit about the theoretical content. In contrast, what is presented as a physical simulation need not be so directly concerned with presenting the new theoretical ideas in their completeness; it could get away with being rather suggestive. 24. Luboš Motl Says: To be sure that you don’t miss my comments on your Lubošification, see 25. Tyler Says: Although I still agree with the issues raised here on the Ross Anderson and Robert Brady paper (ie, I think quantum computing and quantum crypto will be viable in the future), I don’t think their argument requires that BB84 use entanglement. If I understand correctly (I only did a quick read, so that’s a big “if”), they are against even the Bell inequalities. This would of course allow for a local hidden variable approach which would destroy quantum information, as it would reduce it to classical information. If this were the case, then it may be possible to use some crazy measuring device to measure the signals in the BB84 protocol in any basis, without disturbing the original signal.
Now, I am pretty sure that the Bell inequalities do hold true, and so quantum crypto is still secure. I don’t really understand how they are claiming there is a flaw in the logic of the Bell inequalities. I just thought in the interest of fairness that point should be brought up. 26. Scott Says: Tyler #25: Fine, but that would require a completely different argument than the one they actually made! If they said: we think QM is completely wrong, ergo Eve can violate the No-Cloning Theorem, ergo she can break BB84, then their starting premise might still be bollocks, but at least steps 2 and 3 would follow logically. Instead they said: “As the experiments done to test the Bell inequalities have failed to rule out a classical hidden-variable theory of quantum mechanics such as the soliton model, the security case for quantum cryptography based on EPR pairs has not been made.” Nowhere in the article do they indicate any awareness that the main QKD schemes are not based on EPR pairs, and indeed the paper title says “…quantum cryptography is not provably secure,” with no mention of EPR pairs. So this seemed like a case of straightforward confusion. 27. David Speyer Says: What’s striking to me, as a non-physicist, is that they never discuss why the Bell experiments obtained the correlations predicted by orthodox quantum mechanics. They explain a loophole by which a local theory could be consistent with the experimental results. But (on a quick skim) I see no recognition of how amazing it is that this local theory manages to recreate the precise predictions of the nonlocal theory. 28. Robert Brady Says: I am afraid Haffner doesn’t report 8 qubits that could be used in quantum computation. Most of the ions are in the unexcited (ground) state, but the usual procedure in many-body theory is to subtract out the ground state. This can also be understood in terms of the modes of oscillation of multiple coupled oscillators, referred to in the paper by Ross and myself, where most or all of the oscillators act coherently as a single entity corresponding to the ground state. A similar effect occurs in superconductors, where I did theoretical and experimental work as a fellow of Trinity College alongside Brian Josephson. The phase difference across a Josephson junction between two bulk superconductors can represent at most one qubit. This is true even though the superconductor contains very many entangled electrons. Brian’s original work describing this phase transition is available here. You can of course analyse the same system as if it contained a large number of qubits, one for each pair of electrons, as long as you remember they are not independent. But nobody considers counting them individually, as if there were millions of qubits for quantum computation, because they are all correlated with a single phase and act as a single entity. 29. Scott Says: Robert Brady #28: Your response reminds me of the creationists who state categorically that “there are no missing links.” Then when people say, “What about Australopithecus? What about Homo erectus? etc.,” they reply, “well, that one’s really a human. That one’s really an ape. Neither one is transitional between the two.” Since they get to make up the rules as they go along, there’s no way they can ever be proven wrong. In a similar way, you claim that there’s no evidence for entangled states of more than 3 or 4 qubits. Then people immediately respond, what about this experiment? What about that one? 
In each case, you have some a posteriori reason why that experiment doesn’t count: yes there are hundreds of particles in a cat state, but they’re not behaving independently so they don’t count as “qubits.” And what about, say, the “cluster-like” quantum states discussed in this paper, which involve many-particle entanglement not in a collective cat-like degree of freedom? I assume you have some other reason why those don’t count. What you need, and don’t have (as far as I’ve seen), is a theory that would explain a priori what sorts of many-particle entanglement would count. 30. Ross Anderson Says: Scott, when we exchanged email privately before you made this blog post, I asked you to point me to any experimental paper that challenged the soliton theory of the electron. Rather than continuing that discussion in a civilised fashion, you chose to post this rant instead, inviting your followers to be abusive. But as Robert points out, the paper you cite does not report eight qubits at all. I’ve had similar conversations by email with other scientists who’ve privately pointed out other papers claiming multiple qubits; but most have already been attacked by other workers in the field, as here. I’m saddened that your response to Robert’s post was simply abusive. Anyone who’s interested in a substantive discussion of this issue may find it on Light Blue Touchpaper, and more generally at the Emergent Quantum Mechanics workshop. 31. Joe Fitzsimons Says: I’m a little confused about the focus on that particular eight qubit experiment. There are a number of eight qubit experiments out there, particularly in quantum optics (see for example Jian-Wei Pan’s experiments). 32. Scott Says: Joe #31: Thanks! I mentioned the Haffner et al. experiment only because it was the first >4-qubit experiment not in liquid NMR that popped into my head. You’re right that I also could’ve mentioned optical experiments, but I’m pretty sure it wouldn’t have mattered. Anderson and Brady are playing the game where first someone else proposes a many-qubit experiment—any such experiment—then they think up a creative reason why it doesn’t contradict their model. They’ve left the realm of Popperian falsifiability. 33. Luboš Motl Says: Dear Scott, I don’t believe it’s possible for a well-defined model to avoid falsification in this way. At most, they may have a vague template of a model superimposed on tons of wishful thinking that looks compatible with their not-too-comprehensive consistency checks. However, it’s totally obvious that if one proposes any particular model that is “qualitatively inequivalent” to proper quantum mechanics, it has to contradict some experiments that are well-known and easy to do. In fact, one may eliminate whole, almost complete classes of these QM-inequivalent models. For example, if they admit that all possible states of 2 electrons are described correctly by quantum mechanics (including their entanglement), and one imposes any kind of mild locality or Lorentz symmetry which is tested in many ways as well, it’s clear that the laws for 2 electrons may only be extended in a unique way to an arbitrary number of electrons simply because any subset of 2 electrons in the larger set has to agree with quantum mechanics. It’s also silly to say that we can’t study any states with more than 4 entangled qubits in Nature. Take any molecule such as benzene – I could pick pretty much any other molecule but I want to be specific. It has a hexagon of carbon atoms.
Each carbon atom has 4 valence electrons to share; for each carbon atom, 1 of these 4 is attached to a hydrogen atom – organized to a hexagon radially attached to the carbon hexagon. This leaves 3 free valence electrons for each carbon atom. Each carbon atom is connected to the neighboring carbon atoms – the bond is “double” for one of the carbon atoms and “single” for the other one. In total, we have a configuration of at least 6 qubits here. The low-lying states of benzene distinguish two states, the worth of 1 qubit – because the double and single bonds have to alternate and there are just two options. Indeed, the sums and differences of these two alternating arrangements give two energy levels of the molecules we may test (see the two-level sketch below). There’s one qubit visible at low energies except that any model that denies that these are built from a larger number of electrons/qubits that may be organized/entangled in many other ways a priori will contradict the locality to the extent that it will really be incompatible with the atomic theory of matter itself! ;-) It’s nothing else than the atomic theory that implies that the properties of molecules are constructed of – and subsets of – properties of the collections of atoms from which the molecules are built. There just can’t possibly be any model that would get the right low-energy levels of the benzene molecule while it would deny the a priori existence of many electrons storing an arbitrary number of qubits of quantum information a priori. If someone thinks that their model can describe such things, he should write a paper about the description of the molecule in a “completely different way” than QM. It’s not just the benzene molecule. It’s any molecule. It’s any system with many particles. Condensed matter physics gives a whole new perspective on this issue. The people who don’t use the regular multiparticle quantum mechanics and/or quantum field theory (or its upgrade, string theory, and all these three frameworks are really equivalent when it comes to the analysis of these low-energy states) are really abandoning modern physics. To claim that they have an alternative, they have to start from scratch and they have to offer their – totally different – explanation for every single observation in modern science. It won’t be enough to describe one particle or one force because they are apparently messing with the very way how composite systems and interactions are constructed out of the smaller ones, too. So they have to separately check whether larger, composite systems according to their theory behave in agreement with experiments, too. And of course that the answer is a resounding No. What Dr Anderson and Dr Brady do is typical pseudoscience in which one decides that the right theory must be destroyed and declared wrong and they construct an alternative except that they set much lower standards for the alternative and don’t even try to check whether the alternative is capable of describing at least 1% of the things that the right theory is able to describe, at least the elementary things. 34. Greg Kuperberg Says: Since Anderson requests a serious discussion instead of ridicule, here is one: Reference [36] in Anderson-Brady reports a violation of the Bell-CHSH inequality under quite strict conditions, and the rebuttal given to that finding is incomprehensible. 35. Slava Kashcheyevs Says: Brady seems to be simply misguided about the basics of quantum physics. At the behest of my quantum computation colleague I’ve posted a bit more detailed critique:
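A minimal two-level sketch of the benzene argument in comment #33 (Python with NumPy; the coupling strength is an arbitrary illustrative number, not a benzene parameter): take the two alternating bond arrangements as basis states and couple them, and the energy eigenstates come out as their sum and difference, split into two levels.

import numpy as np

E0 = 0.0   # common energy of the two alternating arrangements (arbitrary zero)
t = 1.0    # illustrative coupling between them (arbitrary units)

# Two-state Hamiltonian in the {|arrangement 1>, |arrangement 2>} basis.
H = np.array([[E0, -t],
              [-t, E0]])

energies, states = np.linalg.eigh(H)
print(energies)   # [-1.  1.]: two split levels, "the worth of 1 qubit"
print(states.T)   # rows ~ (|1>+|2>)/sqrt(2) and (|1>-|2>)/sqrt(2)

The point of the comment is that each of these two observable levels is itself a superposition built from many underlying electronic degrees of freedom, so a model that forbids many-particle superpositions would have to reproduce even ordinary molecular spectroscopy by some entirely different route.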
36. Andris Says: I asked a physicist colleague (Vyacheslavs Kashcheyevs – the best quantum theorist in our country) for his opinion on this paper. Here are his comments: The conclusion is the same as Scott’s but Vyacheslavs also points out that Anderson & Brady’s ideas are refuted not only by quantum computing experiments with more than 3 qubits but also by a large number of quantum physical phenomena which have been tested long before quantum computing was invented. 37. John Says: It would be much cooler to promote good papers on your blog, and let the bad ones molder. Nobody cares about this paper. (And please, leave poor Lubos alone!) 38. Scott Says: John #37: When I have promoted good papers on this blog, people have accused me of unseemly “hype.” That is, when they commented at all — I didn’t get nearly as many entertaining reactions as when I’ve ripped into bad papers! But, OK, point taken. 39. Scott Says: Lubos #33: I agree that I was being overly generous to Anderson and Brady when I used the word “model” to describe their ideas. That they don’t have a well-defined model is precisely what lets them wiggle free of whatever experiments people bring to their attention. On the other hand, I respectfully disagree with the following statement of yours: No, it’s logically possible that someday, someone will invent a theory that agrees with QM on all experiments that have already been done or can be easily done with current technology, but that disagrees with QM on the states that arise (for example) in large-scale implementations of Shor’s factoring algorithm. I have no idea what such a theory would look like—and even if someone constructed one, I’d give long odds against its being true, unless it was somehow even more elegant and mathematically compelling than QM. And certainly, one would need to work thousands of times harder than Anderson and Brady are working to construct such a theory that gave sensible results on current experiments. But I do regard the problem of ruling out such theories—what I once called Sure/Shor separators—as a wonderful scientific project, and indeed, as one of the main intellectual reasons to try to build scalable quantum computers. It’s sort of like proving P≠NP or the Riemann Hypothesis—other examples where “reasonable people already agree on the answer,” and yet we stand to learn a great deal from the journey. 40. Bram Cohen Says: Does this paper deny that Bell inequality violations happen in the real world? Claiming that there isn’t any experimental evidence for that is… a bit of a stretch. 41. Robert Brady Says: Scott #29. The test is very specific. Can you do a quantum computation representing numbers greater than 16? For example, the factors of 21 are 3 and 7, both of which are less than this limit, and so this computation can be performed, unlike a factorisation of, say, the product of two primes greater than 100. The experiments discussed here do not attempt to do any computation. Instead they report individual correlated or entangled items, which you call ‘qubits’ as if they could be used in a computation. In my field of many-body theory there are millions upon millions of similar entangled entities, for example the entangled spins in a ferromagnet or the electron pairs in a superconductor. They are not called qubits because you cannot do a quantum computation with them. To the contrary, they are called a ground state and are usually subtracted out.
I do not think it is creationism to point out, with references, why you cannot do a computation with these entities, nor is our paper unfalsifiable for the same reason. 42. Greg Kuperberg Says: Bram – The tone of the paper is, yes, there have been Bell violation experiments, but the experiments have loopholes. The authors don’t really believe quantum probability. (As I said, I can’t make sense of their objection to reference [36].) 43. Scott Says: Robert Brady #41: I see. 21 doesn’t count since, although it’s greater than 16, its prime factors are not. Can I have you on record that, if someone uses Shor’s algorithm to factor 51 into 3×17, that will change your mind? (Or will it still not count since, when written in binary, 51 consists only of 1’s and 0’s?) (The classical post-processing such a factorization involves is sketched below.) Regarding many-body experiments: “usability for computation” is not some magical pixie-dust that infuses certain physical systems while leaving all the others untouched. When you have a large entangled quantum state of hundreds or thousands of particles, the burden is on the QM skeptics—i.e., the scientific radicals—to explain exactly how to account for that state and its evolution within their classical model. It’s not on the people who accept standard QM—i.e., the “conservatives”—to demonstrate that, in principle (if not yet in practice), you could do all the things with the state that standard QM says you could do. 44. John Sidles Says: The wrangling with regard to the feasibility (or not) of fault-tolerant quantum computing has much to learn (as it seems to me) from the wrangling with regard to the reality (or not) of anthropogenic climate-change: • Little is gained when the strongest enthusiasts confront the weakest skepticism, and • Little is gained when the strongest skeptics confront the weakest enthusiasm. Conclusion  The skepticism of the Anderson/Brady preprint is insufficiently strong (in its mathematical physics) to justify the opprobrium that quantum computing enthusiasts are heaping upon it. What would be deeply thrilling (to me) would be an arxiv preprint that expressed skepticism of quantum computing comparably forceful to Edsger Dijkstra’s skeptical computer science essay Go To Statement Considered Harmful … perhaps along the lines Hilbert Space Considered Harmful. The essay Hilbert Space Considered Harmful would replace Dijkstra’s dictum GOTO non fingo with an analogous C^n non fingo, that is, a demonstration that for very many (all?) real-world dynamical systems, the effective state-space is very much smaller than a Hilbert space. This would set the stage for faith in the absolute physical reality of Hilbert space, not to be overthrown in a radical revolution, but rather to fade gracefully into irrelevance, as more powerful mathematical methods of dynamical simulation replace it … rather like the moderating effect that democracy exerts upon Europe’s twelve monarchies! 45. Lou Scheffer Says: Lubos #33 and Scott #39, I agree with Scott that Lubos’ statement is not at all clear. This is *exactly* what happened with general relativity. It’s qualitatively different than Newtonian physics, it produced results that agreed with all the well known results, but predicts very different results in the strong-field, high speed regimes. (The one existing test at the time, the precession of the perihelion of Mercury, is not at all an easy test. There’s a classical explanation as well involving the J2 moment of the sun, which is hard to rule out since the Sun does not rotate as a solid body).
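For readers following the factoring exchange in comments #41 and #43: here is a minimal sketch (Python) of the classical post-processing step of Shor’s algorithm, with the quantum period-finding subroutine replaced by classical brute-force order finding, so it illustrates only the arithmetic, not any quantum speedup. The numbers 21 and 51 mirror the comments above; the function names are mine.

from math import gcd

def order(a, N):
    # Smallest r > 0 with a^r = 1 (mod N). This is the step a quantum
    # computer would perform via period finding; here it is brute force.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N, a):
    g = gcd(a, N)
    if g != 1:
        return g, N // g          # lucky guess: a shares a factor with N
    r = order(a, N)
    if r % 2 == 1:
        return None               # odd order: try another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None               # trivial square root: try another a
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_classical_part(21, 2))   # (7, 3): the order of 2 mod 21 is 6
print(shor_classical_part(51, 5))   # (3, 17): factors 51 = 3 × 17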
46. Raoul Ohio Says: Slightly related: Does all probability follow from QT? 47. Luboš Motl Says: Dear Scott #39, you write: Sure, it’s also logically possible that someday, a creationist theory explaining all the data like evolution and many more will be proposed, too. It’s logically possible that Elvis Presley will send us greetings from the Moon, saying that he has enjoyed it over there. Logically possible is almost everything in science. A different question is whether it’s at least likely enough so that you would expect such a thing to occur once per age of the Universe if it occurred every second. And in this sense, all the three things I mentioned are de facto impossible. I understand that you consider a quantum computer running Shor’s or different algorithm to be something in between the Earth and Heaven – after all, we don’t have it yet. My suspicion is that you also want to say that its possible existence is disputable because this makes your field look like unsettled, ongoing research near the frontiers of physics knowledge – which it’s actually not because in reality, quantum computation is just an advanced engineering application of rudimentary physics insights of the 1920s and 1930s. However, if you have an objective reason why you think that a quantum computer running Shor’s algorithm is “extreme” so that QM could fail in it, well, I am certain that you won’t be able to define any physically natural – non-spiritual – quantity according to which it would be extreme. It may have 100 or 1,000 qubits and this may look large relatively to the devices that have already been constructed. However, this number of qubits/particles is tiny relatively to the numbers in other physics systems where quantum mechanics has been tested and confirmed. Take a piece of metal, it has 10^{26} atoms and a ground state. The ground state is a particular (and rather generic, from an a priori viewpoint) linear superposition of tensor products of states of many electrons (and other particles) and it has the properties dictated by quantum mechanics. Because the ground state of this system – and many other, very different systems – is just a “rather random” linear combination of the basis vectors you may obtain from 10^{26} qubits or any other large number, it makes it pretty much inevitable that 1) all the other basis vectors in the multi-qubit Hilbert space are allowed states, 2) one can make any superpositions because if one seemingly random (one that only differs by low energy) combination is allowed, and it always works, it pretty much by statistics implies that all of the others also work. So by checking several (well, millions of) different systems, we have verified that the required Hilbert space indeed grows exponentially with the number of degrees of freedom and that all superpositions are allowed (the superposition postulate holds) because some “random ones” are always allowed. It’s implausible you will find a loophole although we would have to define a “loophole” rigorously to be able to rigorously decide whether such a class of theories may already be fully falsified. (The tail that will remain “unfalsified” will be written as equations of QM plus some tiny corrections, and it will be possible to show that a viable, non-falsified theory is just built as an unnatural mutation of quantum mechanics.) Incidentally, if we talk about the “a priori” restrictions on the Hilbert space – the kinematic Hilbert space, so to say – that’s something that folks in quantum gravity know very well.
Quantum gravity *does* invalidate the locality in the strict sense (it’s needed for the Hawking radiation to restore the information about the initial state, even though it’s apparently coming from a causally disconnected region). The qubits in region A and qubits in region B, when too dense, aren’t quite independent from each other. One may only achieve those configurations for which the object isn’t “too much mass concentrated within the Schwarzschild radius”. If there’s too much mass/energy in a region, it collapses into a black hole and the quantum computer (and its usual description) breaks down (and is crushed a second later). But just like we know that this nonlocality and refusal to acknowledge complete independence of various regions exists, we also know that it’s very far and its effect on mundane, low-energy, low-density, quantum-computing-like experiments is negligible. The restriction only occurs when the density of information approaches 1 bit per Planck area (if we measure the surface of the region which is relevant because quantum gravity is holographic) which is 10^{-70} square meters. If you tried to impose restrictions on the Hilbert space of a local quantum field theory that would start to operate at much larger areas, it would be equivalent to claiming that G, Newton’s constant, is much larger i.e. gravity is much stronger. Because the black holes clearly saturate the entropy and they would be restricted, you would reduce your idea about the black hole entropies and it would become impossible for some stars to collapse into black holes – because such a process would disagree with the second law of thermodynamics, and so on. We have actually lots of other physics reasons and arguments – not just playing with individual qubits at the “fundamental level” – to be sure that our description is right even for heavily multi-body systems. Thermodynamics really allows us to work with entropy and we do measure entropy experimentally. A certain entropy simply does mean that there exist exponentially many mutually orthogonal (classically mutually exclusive) states in the Hilbert/phase space. This may be verified by thermal experiments. To ban most of the microstates would mean to seriously reduce the heat capacity of the object. But we just know that the heat capacity of a proposed design for a quantum computer can’t be much lower than predicted by quantum mechanics. It’s just a piece of matter and similar matter has been subjected to tons of thermal experiments since the 19th century. When cooled near absolute zero, only some degrees of freedom survive but the entropy will still be extensive – it’s been tested with a great accuracy – and in the quantum framework, it means that the dimension of the Hilbert space grows exponentially with the size. Whether people will be able to overcome all the technical problems with QC – and whether there is a mistake in the proofs that QC is immune against various types of real-world inaccuracies and noise – may be an open question. But the fact that somewhat larger systems with 100 or 1,000 qubits obey the postulates of quantum mechanics as safely as systems with 1 or 2 or 3 or 4 qubits is something I am ready to bet my head upon (as in guillotine) simply because I know that we have verified QM on systems with a small number of particles as well as much larger numbers of particles. QM isn’t just a theory of small systems; it’s a theory of all systems and it’s essential whenever the classical limit is unjustifiable. 
I think that if you work 1,000 times harder than Brady and Anderson, you will not get a working, inequivalent, non-quantum description of the phenomena for which quantum mechanics seemed to work. Instead, if your work is impartial, you will end up seeing what I already see now, namely that such an alternative theory just can’t work. Well, maybe you have to work 10,000 times harder than Brady and Anderson, but the result – a clear understanding of these basic conceptual issues – is a prize that deserves and justifies this 10,000-times-harder work. ;-) All the best 48. Scott Says: Luboš #47: As you might recall, I’ve offered $100,000 for a convincing argument that scalable QC is impossible in the physical world, and I don’t have that kind of money to throw around casually. If you like, my offer corresponds, not to my assigning the speculation in my comment an 0.0000000000001% probability (like Elvis on the Moon), but certainly to my assigning it an 0.1% probability or less. I won’t go much lower than that, simply because, just because we can’t think of a “non-spiritual” way to separate the states arising in Shor’s algorithm from the many-particle states arising in current experiments, and even have what look like good arguments against that (e.g., your entropy argument, which of course assumes QM), doesn’t tell me there isn’t such a separator that would be obvious to physicists of future generations. Sure, it would require a deformation of QM unlike anything we’ve seen, but I don’t see that it takes us into “Elvis lives” territory. And also, I’m pretty sure an examination of the history of physics would show that discoveries people previously would’ve assigned an 0.0000000000001% probability, have happened at least 0.1% of the time. :-) But even if—as you and I strongly predict—the effort to build a scalable QC “merely” results in actually building a scalable QC, I still say we would’ve learned something important. And not merely because of the intellectual inadequacies of the people who had smidgens of doubt. By analogy, if someone proved the Riemann Hypothesis, I wouldn’t say that person had “merely” achieved the same level of certitude as a physicist who’d long ago told all his friends that he was “morally certain” of the hypothesis’ truth, with no proof! Having a proof is qualitatively different. 49. Nex Says: Very interesting paper. This soliton model is something I would like to see thoroughly explored as I am always interested in classical models of QM behavior; I certainly agree that the inability to derive QM from classical physics might simply be a failure of imagination. As for Bell inequalities, the paper does not deny them, but claims another loophole due to the fact that solitons in this model propagate in a common density-wave background, so to speak (if I am getting it right), but I agree they should explain in much more detail how this loophole works and why it lets them circumvent the Weihs and Salart experiments. 50. Scott Says: John Sidles #44: The reason I commented on this paper was simply that lots of people asked me to! In general, though, I disagree with the idea that in any intellectual dispute, we have a moral obligation to respond exclusively to our opponents’ best arguments, and to let their dumbest, most egregiously-wrong arguments pass completely without comment. Yes, if we care about truth we’d better respond to the former. But puncturing the latter can also be a useful service to the public—besides being easy and fun! :-) 51. Ajit R. 
Jadhav Says: Scott #43: Robert #41 makes two points. Let me deal with each, separately. 1. I don’t know anything towards settling that point about the “16” limit. My hunch would be that the authors would be wrong in prescribing such a limit. However, since I don’t understand either their theory or TCS in general to the sufficient extent (and that’s why I was insisting on his supplying a C++ program), I am unable to see if the authors have an intuitively unbelievable but rigorously provable result. For an example of such a theorem (intuitively unbelievable but actually provable), I can cite the case of Huygens’ principle supposedly not working out in 2D. Completely unbelievable, but “true.” (http://www.physicsforums.com/showthread.php?t=148787) BTW, if you ask me, IMO, that theorem is based on a wrong way of understanding Huygens’ principle. But, yes, if you grant the mathematicians’ definition of what constitutes that principle, then, sure, the proof, by itself, is valid. It’s just that those definitions themselves are different from what people understand on the common-sense physical grounds. Those definitions themselves are only tenuously understandable. Further, they (and the 2D limit) can be made completely irrelevant in a physically sound and simpler view of Huygens’ principle. The point is: since I don’t understand the authors (A+B), I would allow them (at least for the time being) the possibility that they could be on to something similar: something that is only tenuously understandable, and is a minor quirk of theory that doesn’t at all matter in practice (just the way I can always apply Huygens’ principle also in 2D—following another definition of the principle: a local definition). 2. However, when Robert (#41) comes to this: “They are not called qubits because you cannot do a quantum computation with them,” he does have an absolutely valid point. He also points out the reason why. This point of his remains valid, even if the objection concerning the absence of an explicit model for entanglement which Slava (#35) points out, also remains relevant! The authors (A+B) need to respond to that. Enough, for the time being. 52. Slava Kashcheyevs Says: Ajit #51: The deal breaker is that classical models, however beautiful, have no capacity to approach any problem that “orthodox” quantum mechanics solves using the notion of entangled states (aka many-particle superpositions), regardless of interpretation. The point of my blog post is that this set of problems inaccessible by A+B covers an overwhelming majority of situations to which physicists have ever applied the quantum formalism. Quantum computation is just a modern, tiny subset. The fluid mechanics approach to de Broglie waves has been dead since the 1920s and has no bearing on limitations of QIP. Scott #50: If the 3 points in your original post are what you have identified as “opponents’ best arguments”, I’d counter that these are not even arguments, just unsubstantiated statements. In my physicist’s view, their best argument is Brady’s explicit model, so I attacked it (for the same social reason as yours – people have asked me about it). 53. Scott Says: Slava #52: Thanks for sharing! No, I wasn’t suggesting anything of the kind! I’m not sure that I know how to identify the “best” arguments for why QC must be impossible, any more than I know how to identify the “best” arguments of the creationists or 9/11 truthers.
On the other hand, in many previous posts on this blog (most recently here), I’ve addressed anti-QC arguments that at least weren’t in direct contradiction with existing experiments, and that I found noticeably more informed and interesting than this one. 54. John Sidles Says: Scott (#50 upon #44) “In general, though, I disagree with the idea that in any intellectual dispute, we have a moral obligation to respond exclusively to our opponents’ best arguments, and to let their dumbest, most egregiously-wrong arguments pass completely without comment.” LOL … Scott, isn’t it the case that post #50 knocks down a strawman that no one has ever advocated? To articulate the “Monarchy” critique (of #44) more completely, it commonly happens that FTQC enthusiasts defend the “House of Hilbert monarchy” by arguing that the sole alternative to the House of Hilbert is a nonviable anarchy of theories that are physically incomplete and/or mathematically immature and/or just plain wrong. However, the text associated to Wikipedia’s amusing gallery of European monarchs suggests a third alternative for the future of quantum dynamical STEM studies: When we reflect upon the evolution of the quantum dynamical literature, we perceive that during the early 20th century, the House of Hilbert Space reigned over science with all the uniting vigor of Wilhelm I. Nowadays, however, the STEM-wise influence of the House of Hilbert is greatly diminished, as practical dynamical computations increasingly employ mathematical frameworks that do not extend naturally to unitary evolution upon state-spaces of arbitrary dimensionality. In consequence, the role in science of today’s House of Hilbert is evolving to be less evocative of the intimidating authoritarianism of Kaiser Wilhelm I, and more evocative of the comforting and popular — but nowadays largely ceremonial — role of Beatrix, Queen of the Netherlands. Summary: The House of Hilbert formally reigns over the STEM enterprise, but in practice doesn’t. FTQC enthusiasts envision the restoration of the House of Hilbert as an absolute STEM monarchy … yet that restoration is about as likely to happen as Queen Beatrix of the Netherlands seizing the reins of power. Conclusion: The Aaronson $100,000 wager is fiscally safe (both now and in the foreseeable future). But operationally, the Aaronson wager is lost already (largely at present and increasingly in the foreseeable future). 55. Luboš Motl Says: Lou #45, I disagree that general relativity is a “qualitatively different” explanation of the gravitational force than Newton’s theory. The full exact theory starts with more advanced, more geometric principles, but when one focuses on the actual gravitational force in the contexts previously described by Newton’s theory, it’s strikingly obvious that general relativity is just a deformation of Newton’s theory. It may be reorganized as Newton’s theory plus corrections that go to zero in the “c to infinity” and “G to zero” double limit. That’s what I don’t call a qualitatively different explanation of the force. On the other hand, something like refuting the very existence of the exponentially many states and their arbitrary superpositions *is* a qualitative denial of the basics of quantum mechanics; it would be a qualitatively different theory. Of course, *within* quantum theory one may deform existing theories – their Hamiltonians – by adding new fields or other degrees of freedom and new interaction terms to the Hamiltonian etc.
But that’s not a deformation of the postulates of quantum mechanics, which have to stay completely constant, because any nonlinear or other deformation of the postulates of quantum mechanics would lead to a logically inconsistent theory, e.g. one in which P(A or B) isn’t equal to P(A)+P(B)−P(A and B). 56. Scott Says: John Sidles #54: I don’t know what the hell that means. I’ll tell you what: you can have all of my “operational” money, if I can have just half of your “fiscal” money! 57. Ross Anderson Says: Nex #49: thanks; this is exactly our intention – to see how far a classical model of QM can be taken. It was really surprising to get a decent model of the electron; what more can we do? Ajit #51 and Slava #35: in the sonon model, two particles are entangled if their \chi waves are phase coherent. Slava #52 and Lubos #47: You are right to say that a lot more work is needed before mainstream physicists will accept the sonon model as an explanation for QM. Lubos, you say we need to do 1000 times more work. For reference, Robert spent 50% of his time last year working on this and I spent perhaps 5%, so call it half a year. Spending 500 person-years on classical models of QM would cost $50m and would presumably need a DARPA BAA spread over a dozen universities for five years. I can’t see us making that sale just yet. Slava, I fully agree that Robert’s model is our best argument; you want us to extend it to cover the standard model, the exchange interaction, the gyromagnetic ratio, superconductivity and much else. Again, this is a lot to ask at this stage. But would you be prepared to take sonon theory more seriously if we came up with further non-trivial results, such as on superconductivity or the weak interaction? 58. Ajit R. Jadhav Says: Slava #52: >> “The deal breaker is that classical models, however beautiful, have no capacity…” Yeah… I did appreciate that point, though I didn’t jot it down explicitly. … But… There are times when one doesn’t want to be hair-splitting. It’s obvious that the moment you say “the classical,” you immediately forgo “the quantum-mechanical.” By definition. Sticking to the definitions, this part is very obvious. The point isn’t that. The point is this: Even if someone puts forth a new view of QM that (ultimately mistakenly) is advertised as being “classical,” and even if it’s not a “complete” theory addressing all the postulated aspects of QM, but if this new view actually has sufficient departures from what the term “classical” strictly means and demands to make it interesting, then: in passing judgments, should we be making appeals to definitions? I think not. … And, in fact, I think, in your post at your own blog, you, too, actually did not. So, the deal-breaker isn’t that they describe it as a “classical” model; the deal-breaker seems to me to be that their description is not (even near-sufficiently, let alone “completely”) quantum mechanical. I use the C++ program as a heuristic device. I mean it in the sense that the basic argument has come from theory—a simulation cannot be a substitute for a theory. Yet, its enormous utility in ensuring “specific-ness” and “completeness” of description should be obvious. I mean, suppose that you already had this program for A+B’s theory (made available by them). Wouldn’t it then be so very easy to ask them to identify the line of code where they began dealing with, say, the entanglement (within a self-advertised “classical” framework)? Or, with the superpositions? Or, with the QM “collapse” (per the Copenhagen interpretation)?
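To make the demand concrete, here is the kind of minimal sketch I have in mind — written in Python rather than C++ purely for brevity, and hypothetical through and through (it implements the textbook QM postulates, not A+B’s theory) — in which each of those three ingredients can be pointed to, line by line:

    # Minimal two-qubit state-vector demo (textbook QM; all names illustrative).
    import numpy as np

    rng = np.random.default_rng()

    state = np.zeros(4, dtype=complex)   # amplitudes over |00>, |01>, |10>, |11>
    state[0] = 1.0                       # start in |00>

    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    # SUPERPOSITION enters on this line (Hadamard on the first qubit):
    state = np.kron(H, np.eye(2)) @ state    # (|00> + |10>)/sqrt(2)

    # ENTANGLEMENT enters on this line (CNOT produces a non-product state):
    state = CNOT @ state                     # (|00> + |11>)/sqrt(2)

    # COLLAPSE enters here (Born rule, then projection onto the outcome):
    probs = np.abs(state) ** 2
    outcome = rng.choice(4, p=probs)
    state = np.zeros(4, dtype=complex)
    state[outcome] = 1.0                     # post-measurement state

    print("measured:", format(int(outcome), "02b"))

The question such a program forces on A+B is precisely: which lines of *their* (self-advertisedly classical) code would play these three roles?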
That’s what I meant by “specific-ness.” Programs compel you to be specific. As to “completeness,” a (good) simulation would help prevent all the futile discussion based on those afterthoughts—it would force the programmer to fully incorporate the QM aspects, simply because their absence would be so directly noticeable. I would expect a QM simulation to be capable of addressing at least the single-particle double-slit interference situation, if not also Bell’s inequalities, the delayed-choice eraser, etc. The double-slit interference would make for enough of “completeness,” in practice, including a fairly comprehensive indication of the handling of the auxiliary data. Of course, to be fully satisfactory, the program documentation would also have to show how its design and implementation fully address a decent postulatory description of QM, say as is found in any standard UG text on the mainstream QM (e.g. Eisberg & Resnick/Griffiths/Gasiorowicz/etc.). The modeling situation itself may be elementary and only an example (as in the double-slit interference). But the simulation has to show how the program at least implicitly implements, for one specific application case, the entire set of the QM postulates—and in what sense. BTW, if it’s a new view of QM, it would have to differ in some sense from the mainstream QM, just the way SR deviates in some ridiculously small but still in principle quantifiable sense from Newtonian mechanics, even at “everyday” low speeds. The simulation will have to identify how—in what kind of limit—its description converges to the mainstream postulatory QM (which is an unsatisfactory and broken description, IMO). Alright. Maybe this reply has become too long and boring. Just wanted to jot down what I meant, what it is that I am usually looking for, and why. * * * Enough for now. Will check back tomorrow. 59. Robert Brady Says: Scott #43 I hope it is clear why we claim it is an order of magnitude harder to produce numbers greater than 16 using Shor’s algorithm. You suggest a quantum computation that is required to calculate the number 3. This would not be a contradiction because 3 is less than 16. On the second part of your response (and thank you for your input, Ajit #51): 1. As a graduate student I learnt many-body theory, and I am sure we share the experience. But your response seems to suggest you think there is something wrong with this theory. If so, what precisely is wrong with it? 2. It is the usual procedure in many-body theory to treat unexcited entangled items as a ground state. Do you disagree with this procedure? 3. The ground state acts as a single entity. Brian Josephson shows this explicitly for superconductors in the reference cited. Do you think there is something wrong with that? 4. A single entity can encode at most one qubit for the purpose of computation. What is wrong with that? 60. Greg Kuperberg Says: It can’t be taken past the Bell inequalities, I can tell you that. What more can you do? A decent model of two electrons. That’s the hard part. 61. Scott Says: Robert Brady #59: I see, so both prime factors need to be greater than 16 in order to satisfy you. We should wait for the use of Shor’s algorithm to factor 323, then?
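(For concreteness — and purely as a classical sketch, since of course no quantum computer appears below — note that 323 = 17 × 19, so both prime factors exceed 16. The snippet shows the entire classical scaffolding of Shor’s algorithm for N = 323; the one function a quantum computer would replace, by period-finding over a superposition, is the brute-force find_order.)

    # Classical sketch of Shor's reduction, applied to N = 323 (= 17 * 19).
    from math import gcd
    from random import randrange

    N = 323

    def find_order(a, n):
        """Smallest r > 0 with a^r = 1 (mod n) -- the step Shor does quantumly."""
        x, r = a % n, 1
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    while True:
        a = randrange(2, N)
        if gcd(a, N) > 1:            # lucky guess: a already shares a factor
            p = gcd(a, N)
            break
        r = find_order(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            p = gcd(pow(a, r // 2, N) - 1, N)
            if 1 < p < N:
                break

    print(N, "=", p, "*", N // p)    # prints 323 = 17 * 19 (in either order)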
Regarding your four-part syllogism, what’s wrong with it is that the only thing that justifies calculations that treat the ground state as a “single entity” is the existence of a more fundamental theory, according to which there are actually far more degrees of freedom there than just one qubit (but they’re behaving collectively). Treating a many-body approximation as if it gave you the fundamental degrees of freedom, while ignoring the degrees of freedom of the very theory (QM) that the approximation rests on top of, is like building a house on air. You don’t get to do that without first laying more solid foundations. 62. Robert Brady Says: Greg #60, Scott (introduction) and others: regarding Bell’s inequality, does section 5 of this paper provide the information you require? I am afraid it does assume familiarity with Cramer’s transactional model and with Mead’s model, and with why they are consistent with Bell’s inequality experiments. Regarding the analogue of two entangled electrons, does Figure 5 of the same link, and the surrounding text, give you the information you require? If not, I would be pleased to provide more. 63. Bram Cohen Says: Brady #62: Are you claiming that there’s a classical mechanism which can give results which violate the Bell inequality? 64. Robert Brady Says: Scott #61 Yes, both prime factors need to be greater than 16, since it is necessary to exclude calculations that might be done with fewer computational qubits. Have I understood you correctly? In the introduction to this blog, you comment approvingly on the Josephson effect. I don’t want to put words into your mouth, but you now seem to be saying that Brian’s original thesis ignores “the existence of a more fundamental theory, according to which there are actually far more degrees of freedom.” Can you elaborate? 65. Greg Kuperberg Says: Robert – Of course it’s not satisfactory. Drawing a figure of two electrons is not the same as modeling two electrons with an equation. Listing a few citations to other people’s older models of quantum mechanics is also not the same as you giving a model of two electrons. You’ve made a complete muddle of the issue of Bell inequality violations. This is really key, because the entire topic of quantum computing is a gargantuan extension of Bell violations. In one paper there is an indecipherable claim that a quite strict Bell violation experiment (reference [36]) has loopholes. In another paper there is a figure and there are citations, but there is no equation for two electrons that either allows or prohibits Bell violations. One of the citations is to Carver Mead, whose model is non-local and explicitly allows superluminal Bell violations, thus contradicting the need to claim loopholes in the other paper. 66. Bram Cohen Says: Lubos #47: The difference with the other scientific theories you consider is that in the case of quantum computation there’s another very large piece of evidence – specifically, the extended Church-Turing thesis – arguing for the other side. (The thesis says that any reasonable model of physics can simulate any other reasonable model of physics with only polynomial slowdown.) Granted, the extended Church-Turing thesis is a more high-level concept than a low-level one, but it’s so extraordinarily robust that anything which violates it must be taken with extreme skepticism. Of course, when two fundamental scientific theories contradict each other, that’s fertile ground for coming up with and performing experiments which force the issue.
The entire field of quantum gravity is dedicated to exactly this sort of program, and there the two theories don’t even contradict each other; they just don’t merge nicely. It can be hard to even design the experiment in some cases, and Aaronson has done the best job so far of proposing such an experiment, which has in fact actually been done, and thus far quantum computation has held up admirably. I wouldn’t say that it’s scaled up enough that I’m convinced that it won’t run out of juice eventually – there are, for example, analog classical mechanisms which in principle are able to do arbitrary calculations and work okay at a small scale but break down when scaled up – but I view this line of research as extremely important. 67. Greg Kuperberg Says: Bram – Except that the polynomial Church-Turing thesis isn’t evidence, it’s theory. It’s a theory with accumulating evidence against it. 68. Robert Brady Says: Bram Cohen #63 Yes. Bell proposed experiments that test for either non-local or non-causal processes. Cramer’s transactional model exploits the non-causal element, which is also present in the solutions to Euler’s equation because it is time-reversal symmetric. See section 5 of this paper for references and a discussion which is unfortunately rather brief and addressed to those familiar with these papers — perhaps it might be expanded upon. From an aesthetic point of view it would be preferable not to have to use the time-reversal symmetry. It might be possible to do this, as explored in the joint paper with Ross. 69. Scott Says: Robert #64: Fine, I’ll elaborate. If you accept QM, then the states created in these superconducting experiments are basically cat states, something like (|0⟩^⊗n + |1⟩^⊗n)/√2, where n is the number of electrons (which could be in the billions). Now, you point out correctly that, while the underlying Hilbert space might have 2^n dimensions, the Hilbert space relevant to these particular experiments is merely a 2-dimensional subspace, the one spanned by |0⟩^⊗n and |1⟩^⊗n. And you therefore declare yourself satisfied that “only one qubit” is there. Unfortunately, that’s not even the start of the beginning of an acceptable answer. As Greg #65 stressed, you don’t get to replace the precise predictions of QM by slippery verbal reasons-why-you’re-not-yet-proven-wrong that change from one experiment to the next. Instead, you need to replace QM by an alternate mathematical theory that (1) also describes anything that could possibly happen to a many-particle quantum system (not just one particular thing), (2) agrees with all experiments that have already been done, but (3) unlike QM, does not require an exponentially-large Hilbert space. The reason many people here are getting exasperated with you is that you seem to have no inkling of what would actually be involved in constructing such an alternate theory. 70. John Sidles Says: In multiple comments, Shtetl Optimized readers have expressed skepticism that the mathematical framework of the Anderson/Brady preprint is adequate to build a viable theory of quantum dynamics upon (and personally I share that skepticism). And yet, reasonable grounds exist to extend that same mathematically-grounded skepticism to orthodox FTQC. As was previously noted: John Sidles #54: “The Aaronson $100,000 wager is fiscally safe (both now and in the foreseeable future). But operationally, the Aaronson wager is lost already (largely at present and increasingly in the foreseeable future).” Scott Aaronson #56: “I don’t know what the hell that means.
I’ll tell you what: you can have all of my ‘operational’ money, if I can have just half of your ‘fiscal’ money!” Hmmm … to address Scott’s concerns, let’s make explicit the metaphorical argument (of #54), by focusing not upon “mere” money, but rather upon rational investments of every researcher’s most precious resources (students especially): time, attention, and imagination! Commonly students learn undergraduate-level quantum mechanics from texts that include (everyone has their own favorite list) Feynman’s Lectures, Dirac’s Principles of Quantum Mechanics, Gottfried’s Quantum Mechanics, Landau’s Quantum Mechanics: Non-Relativistic Theory, and Nielsen and Chuang’s Quantum Computation and Quantum Information (there are hundreds more). To pursue serious research, starting at the graduate level, an entirely new set of textbooks enters the picture, which include (again, everyone has their own favorites, and the following texts all are highly-ranked Amazon.com best-sellers) Spivak’s Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus, Frankel’s The Geometry of Physics, Nakahara’s Geometry, Topology and Physics, Lee’s Introduction to Smooth Manifolds, Nash’s Topology and Geometry for Physicists, and Zee’s Quantum Field Theory in a Nutshell. What is striking about this second list is how sparse the references are to Hilbert space (in the narrow sense of Feynman/Dirac etc.). Slowly the realization dawns at the graduate level: “I’ve studied these texts in the wrong order! It’s better to study Spivak first, not last!” And indeed this modern mathematical sentiment is vigorously espoused by Amazon reviewers: “When you are in college, the standard calculus courses will teach you the material useful to engineers and physicists. You must pretty much forget the material in these courses and start over. That’s where you need Spivak’s Calculus on Manifolds. Spivak knows you learned calculus (and quantum physics) the wrong way and devotes the first three chapters to setting things right.” When quantum dynamics is appreciated through the lens of the second reading list, it becomes apparent that rather little (if any?) of modern quantum research depends essentially upon the absolute existence of an exponential-dimension Hilbert/Dirac state-space. It is necessary only that the dynamical state-space be effectively Hilbert/Dirac … and the set of dynamical manifolds having this property is vast (as the second reading list is at pains to inform students). In view of this burgeoning literature, and the increasing desirability (for students) of mastering the requisite mathematics before tackling the quantum physics, it is not unreasonable to foresee that (to paraphrase Minkowski): “Hilbert space by itself, and unitary evolution by itself, are doomed to fade away into mere ideals — to which Nature herself may not entirely aspire — and only a symplectic union of the two will preserve an independent reality.” Indeed, the literature of the most recent decade amply documents that the 21st century’s inexorable supplanting from center-stage of the 20th century’s cherished Hilbert space is already well underway … and thus in the decades to come, a return of (our still-cherished) Hilbert space to center-stage of quantum dynamical research is about as plausible — and about as desirable too! — as (the still-cherished) Queen Beatrix reigning as absolute monarch of the Netherlands. 71. Slava Kashcheyevs Says: Ross #57: No.
Your use of the \chi wave corresponds to treating the 2 sonons as bosons in the same quantum state – a coherent two-particle condensate, which is a product state with no entanglement. Forget about all the complex stuff. Just show the community how two electrons form a singlet and a triplet. No big deal, just a little element towards a minimally realistic approximation of helium / positronium. Unfortunately, there is no way you or anybody else can make it with a single-fluid hydrodynamic model. One \chi for all is not what many-body quantum physics is about. And if Robert’s model is your best argument, then there is no argument. 72. Slava Kashcheyevs Says: BTW, as Joe has rightly pointed out on my blog, there is no way to tell whether sonons are bosons or fermions. End of story. 73. Robert Brady Says: Many – so many comments, particularly on spin symmetry, entanglement, and Bell’s inequality, to which I will respond shortly. Scott #69 Thank you! I now understand. I was indeed referring to what you call the “underlying” theory beneath the measured entangled states. I had wrongly assumed you were too. The “underlying” theory is described in Brian Josephson’s thesis. On page 18 he introduces the operator exp(i N \theta), which connects states with N and N−2 electrons in a bulk superconductor. Its value is S = exp(2 i \theta), where \theta turns out to be the phase observed in a Josephson junction. In this way, Brian reduces the large number of electrons down to a single parameter – the phase – which is measured in a Josephson junction. This is the parameter which underlies the correlated phenomena you refer to. At this underlying (Josephson) level, the phase can code for at most one computational bit, even though it encompasses a very large number of electrons in the ground state. Are we moving towards agreement, at least on this issue? If so, that would be some progress on the primary subject of this blog! 74. Scott Says: Robert #73: No, we are not moving toward agreement. You keep talking about the effective theory of one specific collective phenomenon, and I keep trying to get you to focus on the only general theory we know (QM) from which that effective theory can be derived — a theory that implies the existence of vastly more degrees of freedom in the system, which could be probed by some other experiment if not by this specific one. At this point, I basically throw up my hands: I’ve explained it to you as clearly as I know how. Maybe someone else can take a crack at explaining it. 75. John Sidles Says: Robert Brady #73, in the fifty years since the 1962 Josephson thesis that you cite, a tremendous amount has been learned regarding antisymmetrized quantum dynamical state-spaces (per arXiv:math/0005202, arXiv:math/0208166, and arXiv:1110.6367, for example). Without in the least laying claim to an expert level of personal expertise, it seems (to me) reasonable to anticipate that substantial advances in our physical understanding of quantum dynamics will at least refer to these substantial (and ongoing) advances in our mathematical understanding of quantum dynamical state-spaces (in all of their variously symmetric/antisymmetric/asymmetric varieties). Perhaps Shtetl Optimized readers can identify fundamental advances in quantum physics understanding that were not accompanied by, authorized by, or catalysts of fundamental advances in mathematical understanding? 76.
Robert Brady Says: Slava Kashcheyevs #71 #72, Greg Kuperberg #65 and others interested in spin symmetry and the standard model: Much more could indeed be done on particle symmetries. This is what we know for the R11 sonon. (a) It has spin-half (Fermi) symmetry, as can be seen from the Pauli matrices after equation 8 here. (b) Their lowest-energy state has spins opposed, as can be seen from the interaction energy (equation 9) and the symmetry of the spherical Bessel function discussed thereafter. The R10 sonon has a lower-order symmetry (since n=0), and the higher families of sonons presumably have more complex symmetries. The details might be an area for further investigation. 77. Raoul Ohio Says: Discussion of Blogs, Comments, etc., from Sci Am: SO fits into this picture. 78. Greg Kuperberg Says: Robert – I am not all that super interested in spin symmetry or the standard model. All I said was that you don’t have a credible discussion of Bell inequality violations. Some discussion, yes, but nothing that hangs together at all. 79. Bram Cohen Says: Brady #68: What section 5 makes clear is that you’re clearly proposing a classical system. The problem then is that Bell’s theorem isn’t a guideline or principle, it’s a theorem, so no amount of you pointing to complex mathematical machinery in references is going to get people to read them, because you might as well be claiming to have found a way to trisect an angle with a compass and straightedge. 80. Anonymous Says: Hey, everyone, it’s hopeless. You are not going to convince Brady of anything. He obviously has no understanding of the vast range of physical phenomena that cannot be explained without quantum mechanics, and how well tested and tightly constrained our current physical models actually are. 81. Bram Cohen Says: Greg #67: You could say that the second law of thermodynamics is ‘just a theory’ as well; that doesn’t stop people who claim to have violations of it from rightfully being dismissed as cranks out of hand. Granted, the second law seems *more* fundamental than ECT, but that’s like saying helium is less common in the universe than hydrogen. Note that I’m not dismissing the evidence against ECT, just expressing far more skepticism that QM will continue to hold up as the experiments progress than some others have. 82. wolfgang Says: I just read this paper and I found it quite amazing: First there is the reference to a video clip with Morgan Freeman (quite interesting how the drops bounce around on the vibrating plate), then they mention de Broglie–Bohm, an interpretation which is completely equivalent to Copenhagen (and others), at least in the non-relativistic case, and finally they present a ‘soliton model’ of the electron, which reminds me of Lord Kelvin’s vortices-in-the-ether proposal of the 19th century (but I think L.K. made more sense). This is then used to make an argument about quantum computers. If this is physics in the 21st century then I want the 20th century back… 83. Scott Says: wolfgang #82: No, this isn’t “physics in the 21st century”—it’s just two guys trying to overturn modern physics, far from the first or the last. Of course there’s a selection effect; stuff that’s actually representative of “physics in the 21st century” is less likely to lead to emails asking me for comment or to an annoyed blog post like this one. :-) 84. Greg Kuperberg Says: Bram – At this point the second law of thermodynamics is almost a mathematical theorem rather than a separate physical theory.
That’s different: that’s something that you would already believe even if it weren’t tested. In any case it has been confirmed many times, except in regimes (such as cosmology) where it doesn’t apply without modification. There is no theorem supporting the polynomial Church-Turing thesis within the current laws of physics. On the contrary, quantum probability is true, and within quantum probability the polynomial Church-Turing thesis is close to disproven rather than proven. So you shouldn’t believe the polynomial Church-Turing thesis, for the same reasons that you should believe the second law of thermodynamics. 85. Slava Kashcheyevs Says: Robert Brady #76 Sure, I have understood from your paper the claim that a sonon has spin 1/2 and two sonons couple anti-ferromagnetically. This does not bring you any closer to constructing a singlet or demonstrating the exchange symmetry (boson/fermion/anyon). 86. Robert Brady Says: John #75 Good comment. See the 50-year update conference. Scott #74 Oh yes we do agree! :-) Fortunately, you do not need to rewrite Brian’s thesis in order to analyse quantum computing using Josephson junctions. If the Josephson phase changes by 2\pi around a loop, this is called a flux quantum. Tunnelling of flux quanta was first observed by John Clarke (see conference link above). In fact, these quanta appear to behave like individual quantum mechanical entities. The analysis you describe can be applied to them and the usual results of quantum computing follow. Each individual flux quantum is a collective phenomenon. I hope you will agree it does not contain millions upon millions of computational qubits — unless you think Josephson’s thesis doesn’t apply to these experiments, in which case please specify! 87. Robert Brady Says: Bram Cohen #79 I understand your question. Let me describe why the motion of sonons is consistent with Bell’s analysis. The coherent motion of sonons at low velocity obeys equation 11, and its trajectory obeys equations 12 and 13. Equation 11 is the same as the Schrödinger equation, and the probability of a trajectory reaching (x, t) is |\psi(x,t)|^2 (which follows from (12) and (13) — the paper reproduces Bohm’s reasoning). These are the same equations on which Bell’s analysis is based, and therefore it would be surprising if the motion of sonons were inconsistent with it. I do not think this is controversial, but please tell me if it is not clear. I think the debate is about how to interpret the consequences, which seem to be counter-intuitive. Cramer’s transactional interpretation of quantum mechanics is only one of the possible ways in the literature to interpret it; it was not intended to be a proof. 88. Robert Brady Says: Slava Kashcheyevs #85 There is obviously a lot of detail required in these areas in order to satisfy you. I think your question, or at least similar questions that are relevant to particle physicists, is answered in the extensive literature on the subject. As you will know, the same compressible inviscid fluid is studied in the field of analogue gravity — see Barcelo’s review article. See in particular Volovik’s book regarding particle symmetries and the standard model. Happy reading! 89. Robert Brady Says: Wolfgang #82 Thank you. There are some similarities, as you suggest. However, sonons are irrotational, unlike vortex atoms. You may want to look at the online talks at the recent conference on tightly knotted and linked systems, which in some respects are the successors to Lord Kelvin’s vortex atoms. 90.
Lou Scheffer Says: Lubos #55, You say: ‘I disagree that general relativity is a “qualitatively different” explanation of the gravitational force than Newton’s theory.’ Of course you are entitled to your own opinion of ‘how different’ two theories are, but I suspect you are the only person on the planet with this view. Newton’s is an action-at-a-distance theory in a flat space-time with no mechanism. GR provides the equivalent of Newtonian forces via shortest paths in curved space-time. It was so weird at the time that only a few people even understood how it might work. For example, Planck said, of combining gravity with SR: “As an older friend, I must advise you against it, for, in the first place you will not succeed, and even if you succeed, no one will believe you.” Hermann Weyl, another rather sharp guy, said of GR: “It is as if a wall which separated us from the truth has collapsed. Wider expanses and greater depths are now exposed to the searching eye of knowledge, regions of which we had not even a pre-sentiment.” A typical (I believe) modern view is that of Ashtekar: “Space-time is not an inert entity. It acts on matter and can be acted upon. [...] There are no longer any spectators in the cosmic dance, nor a backdrop on which things happen. The stage itself joins the troupe of actors. This is a profound paradigm shift [that]… shook the very foundations of natural philosophy. It has taken decades for physicists to come to grips with the numerous ramifications of this shift and philosophers to come to terms with the new vision of reality that grew out of it.” I’d be very surprised, but impressed and interested, if you can find similar support for your proposition that GR and Newtonian gravity are qualitatively similar. 91. John Sidles Says: Greg Kuperberg (#84) says: “The second law of thermodynamics is almost a mathematical theorem rather than a separate physical theory.” The word “almost” is smile-inducing because it calls to mind so much STEM history: • The Earth is almost flat. • Malaria is almost invariably associated to the bad night air of swampy regions. • The Parallel Postulate is almost self-evident (and/or almost a mathematical theorem). • Planck’s radiation law almost follows from Boltzmannian statistical mechanics. Recent work such as — to cite one article of many — Derezinski, De Roeck, and Maes, “Fluctuations of quantum currents and unravelings of master equations” (2007, arXiv:cond-mat/0703594), is exemplary of contemporary efforts to close the “almost” gap to which Greg Kuperberg’s comment #84 refers. The Anderson/Brady preprint is (as it seems to me) relatively less sophisticated, less successful, and therefore (arguably) less promising in regard to further progress. 92. Bram Cohen Says: Brady #87: That is not at all clear. Could you answer simply whether your model allows non-local phenomena? 93. Ajit R. Jadhav Says: I think this thread has by now got numerous high-quality comments. At this point, I have to raise a few questions: To A+B’s critics/detractors: Is the real point of contention the very idea that what A+B propose is claimed to be a classical model? To A+B: 1. You make reference to Cramer’s interpretation. As the nine-formulations paper states, Cramer’s interpretation quantitatively “makes no predictions that differ from those of conventional quantum mechanics.” In the mainstream QM, there is instantaneous action-at-a-distance (IAD)—IAD proper, as distinguished from mere entanglement.
For instance, in the Copenhagen interpretation, the wave-function collapse requires IAD. Inasmuch as your theory produces results that are quantitatively identical to the mainstream QM theory, your theory also involves/entails IAD. Am I correct? 2. Why must the fluid be compressible? Does it have a deep but not very obvious relevance in imparting the specifically quantum-mechanical character to your theory? Finally, to wrap up, here’s a suggestion to A+B: I have touched upon this point above, but wish to highlight it again, separately. I think it would help your cause if you explicitly establish how your theoretical constructs correspond to or lead to the postulates of the mainstream QM, esp. the nonrelativistic QM. In deference to the 80/20 rule, personally, I would suggest writing an article that’s accessible to someone who hasn’t read anything beyond the first half of Quantum Chemistry by McQuarrie. More sophisticated accounts could then address the remaining 20% (or even just 2%!) of the objections/queries. 94. Greg Kuperberg Says: “Equation 11 is the same as the Schrödinger equation… These are the same equations on which Bell’s analysis is based… I do not think this is controversial, but please tell me if it is not clear.” Actually, it’s beyond controversial. Equation 11 is the *single-particle* Schrödinger equation. In order to violate Bell’s inequalities, you need the *multi-particle* Schrödinger equation. If you don’t have that, then the entire discussion is nonsense. Besides, you clearly are arguing in the alternative. In your comments here you accept Bell violations, but in your other paper you dismiss them as the result of loopholes. 95. Slava Kashcheyevs Says: Robert #88 It may look to you like something very complicated, requiring “a lot of detail,” but there is hardly a more elementary exercise in two-particle quantum mechanics than constructing and classifying symmetric and anti-symmetric states. This is where sonons fail hopelessly. My point was not to request that you re-do all of physics, but to point out an overwhelming amount of evidence contradicting your model. But I’ve said enough. I’m not challenging anything (except the relevance of your sonon model to real particles), so the burden of proof is not on me. Have fun. 96. Luboš Motl Says: Dear Lou #90, I am not silly, so be sure that I know all the differences between GR and Newton’s theory, too. But my point is that GR doesn’t restrict the data describing Newton’s gravitational forces. On the contrary, it adds some new degrees of freedom – the metric tensor, which may sustain gravitational waves even in the absence of sources – which is made necessary by the fact that the force in GR has to obey the cosmic speed limit, the speed of light. But what is discussed here is a qualitatively different theory that would *steal* something from quantum mechanics. Clearly, the tensor-product-like exponentially growing Hilbert space (with all the complex linear superpositions allowed) seems too large and complicated to the authors discussed in this thread. So they want something “simpler” really in the sense that it subtracts from the number of possible states. My comments about GR’s being a deformation of Newton’s theory were just an example of my broader claim that there doesn’t exist a single precedent in physics in which a working theory would be superseded by a qualitatively different one that reduces the “space of states” of the older theory. This just won’t happen.
Using the words relevant here: it won’t happen that quantum computers will be made impossible because a hypothetical better, future theory will prohibit entanglement or arbitrary superpositions with many qubits. Such a hypothetical evolution is indefensible, contradicts all known laws of quantum mechanics, has no historical precedent, and is only motivated by certain people’s limited intellectual abilities, because these people just find QM too complicated and its Hilbert space too large. But it will never get smaller. 97. Slava Kashcheyevs Says: Greg #94 “If you don’t have that, then the entire discussion is nonsense.” Yes, they don’t have that (but do not seem to realize it), and the entire discussion is nonsense. 98. Robert Brady Says: Bram Cohen #92 Yes. As you would expect for a model consistent with Bell, the sonon energy is delocalised and the processes are not necessarily causal. See for example the delocalisation of the energy in the spin-correlated |ud⟩ + |du⟩ state in #14 here. 99. Anonymous Says: Greg #94: It’s even worse than you say. NO form of the Schrödinger equation (single-particle, multi-particle, whatever) is needed to show that QM violates the Bell inequalities. (The only thing one needs to say about time evolution is that the spin state does not change as the particles fly to the detectors.) Rather, it’s the general structure of the spin state, together with the Born rule, that leads to the violation of the Bell inequalities. What Bell showed is that this general structure of the spin state CANNOT be reproduced by ANY theory of classical probabilities that does not have instantaneous action-at-a-distance. In 1985, David Mermin wrote a fantastic article for Physics Today, “Is the moon there when nobody looks?”, which gives a beautifully clear explanation of Bell. It’s behind the paywall at Physics Today, but a google search turns up several places where it is freely available. 100. Anonymous Says: And, in many experimental tests of Bell, it’s PHOTONS, not ELECTRONS, that are used. 101. John Sidles Says: Luboš Motl broadly claims: “There doesn’t exist a single precedent in physics in which a working theory would be superseded by a qualitatively different one that reduces the ‘space of states’ of the older theory.” The assertion is incorrect: the four-letter “GCAT” hereditary state-space of DNA — as broadly foreseen by von Neumann in a 1946 letter to Norbert Wiener — is a tight restriction of the (foggily envisioned) larger-dimension protein-template hereditary state-space that was espoused in the 1930s and 40s by luminaries like Linus Pauling. 102. RobvS Says: Luboš Motl #96 I like the idea of doing a trick like Eric Verlinde did with gravity: making the “old” theory a statistical average (coarse-graining) of the “new” theory. This not only incorporates entanglement; the holographic principle demands massive amounts of it. But I like that approach better mostly because my (lack of) math skills make it impossible for me to understand (super)string theory. Some thermodynamics I do understand. 103. Robert Brady Says: Ajit #93 Thank you. Yes, the emergence of quantum motion from completely classical motion might well seem unintuitive, even after you have seen the videos of Couder’s experiments. No IAD – instantaneous action at a distance is impossible in Couder’s experiments, even though they faithfully reproduce tunnelling, double-slit diffraction etc.
Likewise, on our model, the probability a trajectory passes through (x, t) is just |\psi(x,t)|^2, and so there is no need for any wavefunction collapse, instantaneous or otherwise. The fluid must be compressible for the very ordinary reason that the speed of sound is theoretically infinite in an incompressible fluid. Thanks for the suggestion of a simple paper. 104. Robert Brady Says: Greg #94 and Slava #95 Many thanks. I accept I did not provide an explicit spin superposition and show how it is measured. This is now here for the |ud⟩ + |du⟩ spin states of two R11 sonons. I hope it is clear how to do the others from this example. I am afraid R11 sonons are spin-half and we are not ready to publish with spin-1. 105. Anonymous Says: In #103 we see that Dr. Brady (as predicted) still just doesn’t get it. Reproducing some aspects of quantum phenomena with a local classical model is certainly possible, but reproducing all aspects is certainly not. That’s what Bell proved. But it was strongly suspected long before Bell, since all attempts to construct such a model had failed. 106. Greg Kuperberg Says: Anonymous #99 – Certainly the two-particle Schrödinger equation, with or without spin, does violate Bell’s inequality and other Bell-type inequalities. And certainly one single particle cannot violate any such inequalities in any straightforward way. It also doesn’t matter whether you use photons or electrons. Bell violations are a pervasive phenomenon of quantum probability as it applies to almost any type of joint quantum state. 107. Scott Says: Greg #106: Actually, you can perfectly well violate a Bell inequality with just a single particle, in the “entangled” occupation-number state |0⟩|1⟩+|1⟩|0⟩! It requires a more subtle measurement, but apparently it’s even been demonstrated experimentally. This is a point that I was long confused about myself, but see for example this delightful paper by van Enk. 108. John Sidles Says: On further reflection, Luboš Motl’s comment (#96) provides us with that valuable entity, a Great Truth (namely, a Truth whose opposite also is a Truth): In addition to the genetic example (of #101), we have also: A mathematical example: The restriction of elliptic curves to finite fields yields (along with much elegant mathematics) elliptic curve cryptography. A condensed-matter/field-theory example: Ken Wilson’s renormalization group method systematically replaces (many) microscopic degrees of freedom with (fewer) macroscopic degrees of freedom, so as to usefully make physical sense of (i) phase transitions and (ii) the divergences of field theory. Thus we appreciate the dual aspects of … Luboš Motl’s Great Truth: In the 21st century the 20th century’s Dirac/Hilbert quantum dynamics foreseeably will — or foreseeably won’t! — “be superseded by a qualitatively different dynamics that reduces the ‘space of states’.” 109. Anonymous Says: Greg #99: We appear to have a semantics problem. By “the Schrödinger equation”, I meant just the time-evolution equation of QM. Violations of Bell in QM are not dependent on how the system evolves in time. As best I can tell, what you mean by “the Schrödinger equation” includes the entire superstructure of QM (states, observables, Born rule, etc.). I was trying to distinguish this superstructure from the Schrödinger equation itself. 110. Anonymous Says: Oops, I meant Greg #106. 111. Greg Kuperberg Says: Scott – Almost, but I don’t think that you actually can.
You can certainly create that state, which is entangled in an occupation-number basis. However, you will only see non-locality if you apply measurements that have a chance of creating a second particle. So, no dice I think. In any case, certainly the actual Bell violations are done with two particles in an entangled state, even if you could in principle measure two boxes with entangled occupation. 112. Scott Says: Greg #111: This paper by Babichev, Appel, and Lvovsky claims to have actually achieved an experimental Bell violation (subject to the usual detection loophole), using a single delocalized photon as the sole entangled resource. (Yes, I’m sure the measurements involve additional particles both on Alice’s end and on Bob’s end, but so what?) 113. Greg Kuperberg Says: Scott – This is interesting enough that I must keep quiet until I understand it better. :-) 114. Ajit R. Jadhav Says: Robert #103: Oh, you are welcome! But… Let me wrap up, somewhat at a length. (I will sure check back for comments and all, but as far as I am concerned, the wrap-up for this thread seems to be fast approaching.) 1. I do think that a part of the problem lies with the way you (+Ross) have written the paper—it covers too much territory, too fast. For an astonishing prediction setting concrete limits on the number of coherent qubits, the prior discussion is so sparse as to be almost absent. I was interested in the 3D case, and so did a word search on “four” in your A+B paper. The only places it appears are the abstract and the conclusions! That’s rather like the Copenhagen quantum—it’s there only when measured, at emission (abstract) and absorption (conclusion). … I also dare suggest that you once again check your logic. Chances are very extremely bright^{bright} that the result holds only under a restricted set of auxiliary conditions. 2. BTW, you said (#103) no IAD, but you still didn’t quite directly clarify if your theory makes predictions that are quantitatively identical to those of the mainstream QM theory, or not. (By mainstream QM, I mean any of the nine+ interpretations or treatments in (students+Dan Styer)’s paper.) The reason I insist on this part is that I myself have had a preliminary (conference) paper on a new approach to QM (of only photons, so far); my approach in principle leads to a quantitatively different prediction (though I don’t know except in broad outlines how to work out its detailed maths). 3. Coming back to your research: A simpler paper is certainly needed, but also a paper that at least addresses all the stages of the quantum evolution in a simple example case, if not also presenting a working C++ simulation for it. 4. Also, I would suggest: In that paper, please make a clean break from Couder et al.’s work. It simply confuses people. It’s obvious that Couder’s work does not reproduce all aspects of QM. Even if we assume a simplest model of the universe consisting of just electrons + photons, if the dancing droplets are taken to be electrons, it’s obvious that, since the waves induced by the droplets are the force-carriers in this model, they should represent photons. However, in the Couder model, such “photons” are not quantized—they are not localized in space, as the real photons in a single-photon-at-a-time diffraction experiment would show. Naturally, the Couder model is insufficient in terms of how much of a quantum character it can fake.
(And that is apart from the very simple question that had struck me as soon as I read about it the very first time, around 2010, the same time it got covered in the MIT News: Who/what vibrates the universe, especially in 3D? My other question was: In 3D, how precisely does a droplet induce waves?) Now, yours is a different model. It is a “purely” mathematical model, not fully realizable in a classical experiment. The classical fluid isn’t inviscid. Qua mathematical model, it would be possible to overcome the limitations of the Couder model in it. If so, why make a reference to the Couder model at all? 5. Finally, I sense that I might have other issues about your sonon model. I mean some deeply physical issues (not mathematical); e.g., things like the existence of the singularity at the sonon surface, i.e. the very existence of a sharp boundary surface. The Aristotelian law of the excluded middle entails that a physical theory cannot carry singularities; they can only be projected (i.e. imagined) mathematical entities/features, without any physical existence. Or, as Roger Schlafly’s blog highlights: “natura non facit saltus.” And, I would seek a detailed picture of the interaction of an “electron-type” (i.e. the Fermi/matter-particle) sonon with a “photon-type” (i.e. the Bose/force-carrier) sonon—including whether, and if yes precisely how, an electron-sonon absorbs a photon-sonon; what the pair physically looks like after the absorption; what makes the electron-sonon emit the photon-sonon; etc. 6. To (finally!) wind up: If in theory you take a clean departure from Couder’s model (it can continue to be a part of a motivation section, but little more), supply the correspondence with the postulates, and then if you could also supply a comprehensive account (ideally, with a C++ program) of an elementary but complete case (e.g. double-slit diffraction), apart from addressing issues like the above, I would be very, very happy to read it. And, I am sure, many others would be, too. So, kindly keep us posted. 115. Scott Says: Luboš #96: Why isn’t the holographic principle, which reduces the naïvely infinite-dimensional Hilbert space of QFT to the finite-dimensional Hilbert space of quantum gravity, a counterexample to the above claim? 116. Luboš Motl Says: Dear Scott #115, the holographic principle isn’t an example because, as you correctly said, the Hilbert space of quantum gravity is infinite-dimensional only “naively”, not according to a working theory. There hasn’t ever been a working, internally consistent theory of quantum gravity that would have an infinite-dimensional Hilbert space even for finite regions. This is an inconsistent assumption, and this situation is different from quantum mechanics of 6 qubits, which is an internally consistent – and experimentally tested – theory in physics. Of course, from some broader viewpoint, namely if you allow some somewhat inconsistent theories into the mix, the holographic principle *is* an example of exactly what I say has no examples. But as I already discussed above, it’s an example that happened and could happen only in some very extreme regime, and it had physical consequences. The claims of the type “quantum computers aren’t allowed by the right laws of physics” are, on the contrary, claims about a completely non-extreme, low-energy physics that has been tested indefinitely, so one can’t find any meaningful inequality that would separate the regime in which QM works as tested from the one in which it would be replaced by a “smaller” theory. 117.
Simon J.D. Phoenix Says: One can observe violations of the mathematical inequality we call the Bell inequality with single particles – and this can be used to furnish a single-particle QKD scheme (the correlations existing between state preparation and measurement), the eavesdropping test then amounting to determining whether or not the inequality is violated. I think it was Boole who showed that if we have 3 random variables A, B and C, then the joint probability P(A,B,C) that correctly reproduces the marginals P(A,B) etc. can only be constructed if the marginals satisfy what we call the Bell inequality today. Any proposed classical model of entanglement must therefore be able to reproduce this ‘non-existence property’ for P(A,B,C) for certain choices of A, B, and C — and that’s before we add non-locality into the mix. I was surprised by the A + B paper – Ross is a well-known, and respected, figure in the security community and he’s done some really cool stuff. Readers of this blog will understand what I mean when I implore Ross not to become a Christian! 118. Lou Scheffer Says: Lubos #96: Even special relativity and QM could be counterexamples. In Newtonian mechanics, the velocity could be anything; in SR it’s limited to the subset less than the speed of light. In classical mechanics, a harmonic oscillator can have any energy, but in QM only a discrete subset is available, reducing the state space from an uncountable infinity to a countable one. 119. T H Ray Says: Scott #115 I don’t want to comment on the merits of the Anderson-Brady paper on a site dedicated to its perfunctory dismissal. Where the philosophy of science is concerned, however, Lubos has quite valid arguments. Even though the holographic principle is not a true theory, it would not counterexample his claim. Just as relativity subsumes Newtonian physics in the limit, holography constrains the physics of quantum mechanics to the limit of physical manifolds. In this respect, at least, it extends relativity in the same context that relativity extended Newtonian mechanics. Point is, as Lubos implies, there is a backward-forward relation between all physical theories and principles; none exist in a thought-vacuum. This may be a different way of saying that there is something rather than nothing, but it isn’t trivial. (Hope you can stay out of the weather today in Boston.) 120. Luboš Motl Says: Dear Lou #118, nope, the same problem with the range of validity affects your other examples, too. One may define the validity of non-relativistic mechanics to be the regime in which the speeds are smaller than a limit, c, and the validity of classical physics to be the regimes in which the angular momentum or action or (delta x)*(delta p) are much greater than Planck’s constant. The point is that these old theories weren’t established as theories for all regimes, however extreme they are. So non-relativistic mechanics fails at high velocities; classical physics fails at tiny angular momenta, actions, or attempted tiny uncertainties of momentum and position; the local theories of gravity fail when one tries to compress too much information (like black hole entropy) into a small region. But the new theories always confirm the state space in the relevant approximation. That’s different than the claim here, because 5 qubits isn’t extreme in any sense, yet those folks want to claim that basic QM becomes invalid.
There isn’t any quantity such as speed, angular momentum, action, products of uncertainties, entropy density per area or anything else that would be extreme in mundane low-energy systems with 5 qubits, so if one claims that QM is wrong, he’s claiming that it’s giving totally wrong predictions everywhere, which it clearly doesn’t. Don’t tell me you don’t understand what I am saying. In all your conventional examples, the newer theory almost perfectly confirms the older theory’s description of all mundane experiments one may do in the labs. This case is claimed to be different because even the mundane things are claimed to be wrong in the old theory – QM. 121. John Sidles Says: Luboš Motl proposes another Great Truth: Luboš Motl proposes (#116): “Claims of the type ‘quantum computers aren’t allowed by the right laws of physics’ are claims about a completely non-extreme, low-energy physics that has (or has not?) been tested indefinitely.” LOL … we appreciate the duality of Luboš’ Great Truth when we reflect (as one example) upon the persistent confusion and controversy — both experimental and theoretical — regarding the quantum Third Law, and in particular quant-ph/0703152 illustrates how subtle these issues can be. When relativistic gauge field theory enters (as it always does in designing practical experiments), the “non-extreme, low-energy physics” becomes even more subtle. For example, what obstructions have (so far) prevented the theoretical literature from reliably assessing the feasibility of scalable Aaronson/Arkhipov n-photon source/detector systems? Conclusion: Introductory quantum texts — like Feynman’s Lectures and Nielsen and Chuang’s Quantum Computation and Quantum Information, for example — commonly skirt certain “completely non-extreme, low-energy physics” theoretical issues … but 21st-century experimentalists and engineers are not permitted this luxury! That is why numerous creative & insightful articles are continuing to extend our still-immature understanding of these “completely non-extreme” quantum physics topics. A long journey toward understanding awaits us … which is good! 122. Lou Scheffer Says: Lubos #120, There are two different statements here. The first is that a new theory is bogus if it cannot reproduce the well-known results from the previous theory. On this we agree completely. The second is that you can tell that a theory is bogus if it reduces the state space of the previous theory. This I do not believe, since it is entirely possible that the problem with the old theory is that the state space was too big (bigger than reality). This is exactly what happened with QM – the old theory, with the old state space, gave results that contradicted experiment (there was no ultraviolet catastrophe, and atoms did not radiate until they collapsed). By reducing the state space to quantized values these problems were fixed. Importantly, this restriction did not screw up previous well-verified results using macroscopic objects, which were shown to be a limit of the new theory. I am in no way defending the new QM theory discussed here – we both agree it’s bogus. However, it’s not bogus just because the new state space is smaller – it’s bogus because it contradicts existing experiments. 123. Luboš Motl Says: Dear Lou #122, I believe that I have already clarified the statement that one cannot reduce the state space of the previous theory. I am talking about the state space for a particular situation – such as an 8-qubit experiment in a low-energy lab considered here.
In all the historical examples, the space of states was preserved and/or “infinitesimally” deformed or extended by things that are invisible in the everyday situation, and so on. In this 8-qubit case, it’s claimed that the space of states has to be something qualitatively different which *does* violate the known observations because the known observations imply the laws of physics that inevitably hold for the 8-qubit situation as well – simply because there’s no conceivable variable that would become more extreme in the 8-qubit case and that would invalidate QM in this context while preserving its experimentally tested success in the well-known contexts. 124. John Sidles Says: Great Truth (version III) Luboš Motl now asserts (#122): “In all the historical examples, the space of states [of new dynamical physics?] was preserved and/or ‘infinitesimally’ deformed or extended by things that are invisible in the everyday situation, and so on.” Even by a generous interpretation, it’s hard (for me) to extract useful lessons from Luboš’ most recent assertion. The evolution of the concept of entropy provides an instructive case history. In classical physics entropy is a well-posed geometric entity: the logarithm of the symplectic volume of a level-set. And in quantum physics entropy is given as a well-posed algebraic entity: von Neumann’s logarithmic trace. Yet it’s far from self-evident (to me) that the latter entropy is an “infinitesimally deformed” (in Luboš’ phrase) version of the former entropy. Q1  Are there any thermodynamical textbooks that even attempt a formal mathematical demonstration that these two definitions of entropy are (for practical purposes) equivalent? Q2  Are there any texts that provide even a qualitative explanation of why this question hasn’t been easy to answer rigorously? Conclusion  One lesson of history (as it seems to me) is that Lou Scheffer’s post #122 provides solid common sense guidance. Thank you, Lou, for that excellent post! 125. Luboš Motl Says: Dear John #124, the reason why you don’t understand that von Neumann entropy is just the quantum deformed version of the log of the volume in the phase space is that you don’t understand basic physics. The logarithm of the volume in the phase space is just the Shannon entropy from a statistical distribution that is uniform over the volume (and normalized), and the von Neumann entropy is nothing else than the Shannon entropy in which the probability distribution has been uplifted to an operator, just like everything in quantum mechanics. At any rate, the generalization is totally straightforward because the eigenvalues of the density matrix play exactly the same role as the individual values of the classical probability distribution on the phase space. In both cases, the logarithmic formulae are multiplied by Boltzmann’s constant k, in order to get the entropy that was first extracted in the thermodynamic limit and that has values of order one in situations with macroscopic numbers of degrees of freedom. If you don’t understand that all these formulae for entropy are really the same, you should repeat your basic undergraduate courses of statistical mechanics and thermodynamics instead of pretending that you are participating in discussions about cutting-edge physics. 126. Robert Brady Says: Wrap-up — Thank you all for your comments. It has been stimulating to interact on a site dedicated to a collective refutation of our papers on quantum computing and the irrotational motion of a compressible inviscid fluid.
My key takeaway: in order to convince the quantum computing community, we need to analyse the symmetries and interactions of two-body and many-body sonon spins, and show explicitly they obey the statistics in Bell’s original paper. It is not enough simply to observe that the relevant equations are the same as those of Cramer’s and Mead’s models and therefore they have this property. And yes, it is reasonable for this community to ask for further progress in that direction. Time for us to roll up our sleeves — if others do not get there before us. Thank you again. 127. Hal Swyers Says: Hold your horses, compadre. This is pretty far off the mark. I think it’s pretty clear to anyone that the state space of observables hasn’t really reduced at all. I can arbitrarily boost any reference frame and make any portion of the state space accessible. What QM did was show that there are certain stable time independent solutions which have a somewhat privileged status. I would argue that there is a limit on our sampling of state space, but state space has shrunk in any sense. 128. Scott Says: Hal Swyers #127: Yes, it’s an ironic feature of QM that it shrunk the “effective” state space of the orbiting electrons, only by dramatically expanding the “true” state space! 129. Bram Cohen Says: Has anyone else noticed that Brady is using this as an opportunity to link to his papers as many times as possible to boost his SEO? 130. John Sidles Says: Luboš Motl says: “If you don’t understand that all these formulae for entropy [Boltzmann, von Neumann, Shannon] are really the same, you should repeat your basic undergraduate courses of statistical mechanics and thermodynamics.” Thank you for your generous advice, Luboš Motl, which exhibits a commendable verbal vigor! Now let us consider its foundations in mathematics and physics. As was mentioned earlier (#70), it is regrettably true that a great many natural dynamical systems are excluded from “undergraduate courses of statistical mechanics and thermodynamics.” E.g., when considering the classical-to-quantum pushforward it is natural to ask questions like: What is the Boltzmann/von Neumann/Shannon entropy of a (classical) ideal gas of Chaplygin Sleighs interacting by weak potentials? How can such systems be quantized most naturally? The dual set of quantum-to-classical pullback questions is similarly rich, and here — as one example among hundreds — it is a fun exercise to read the titles of Guifre Vidal’s recent articles as provisional answers to wonderful questions: A real space decoupling transformation for quantum many-body systems (2012, arXiv:1205.0639); and Entanglement renormalization and gauge symmetry (2010, arXiv:1007.4145); and Infinite time-evolving block decimation algorithm beyond unitary evolution (2008, PRB v78p155117). Obviously very many more such recent articles could be cited! The thrust of comment (#70) is that the 20th century undergraduate mathematical curriculum leaves 21st century physics and engineering students ill-prepared to appreciate the burgeoning literature on classical-to-quantum pushforwards, and the dual literature of quantum-to-classical pullbacks … not to mention the literature whose dynamical state-spaces are more-than-Newton/less-than-Hilbert (e.g. the tensor-product state-spaces cited above). What are the wonderful questions, to which the above-mentioned articles (and hundreds more) are providing us with enticing (but provisional and incomplete) answers?
Aye, lasses and laddies, now *that’s* a central question for the 21st century STEM enterprise! Proposition  It may eventuate that the state-space of string theory — whatever that state-space may be! — is not the state-space of Nature. As for whether Nature’s state-space permits fault-tolerant quantum computing (or does not permit it), this issue too is wholly undecided at the present time. Yet in all eventualities, the transformative advances in the capabilities of practical systems engineering — classical, quantum, and the emerging hybridized methods — that are associated to the mathematical naturality of the pushforward/pullback methods of string theory, and the mathematical naturality of the informatic methods of quantum information theory, are sufficient already to more-than-justify society’s investment in stringy quantum informatic arcanæ. 131. Hal Swyers Says: Scott #128 No doubt! Certainly QM is better endowed than CM, it just like to flaunt the fact! I think future generations will look back and laugh at those advocating for smaller spaces and ask, “Well, how then would we make the cancellations?” 132. Amir Safavi-Naeini Says: I think “Bell’s Theorem? ‘Tis but a flesh wound!” should be a new category on your blog. (though it would overlap pretty strongly with “Rage Against Doofosity”..) 133. Hal Swyers Says: couple of minor corrections… in #127, “but state space has shrunk in any sense.” should read “but state space has not shrunk in any sense.”; in #131, “it just like to flaunt the fact!” should read “it just doesn’t like to flaunt the fact!” 134. Greg Kuperberg Says: Yes, especially if you also retract claims that experimental Bell inequality violations are illusory. Then you would convince the community that you’re not trying to contradict quantum mechanics. You wouldn’t convince anyone that quantum computers can’t work. 135. chorasimilarity Says: Why does not (pen+paper+QM textbook) count as a classical simulation of any QM system? 136. Scott Says: Amir #132: Thanks very much! I’ve adopted your suggestion. 137. John Sidles Says: chorasimilarity asks “Why does not (pen+paper+QM textbook) count as a classical simulation of any QM system?” The key is the efficiency of that simulation. It is striking that, in all of human history, no one has ever measured an experimental data-set that (under standard complexity-theoretic assumptions) provably cannot be simulated with computational resources that are polynomial in the bit-length of that data-set. Moreover, thanks to ongoing “Moore’s Law” advances in computer hardware and simulation algorithms, the principle “all feasible experiments are efficiently simulable” is nowadays strikingly true even of quantum dynamical systems that formerly were considered to be computationally intractable. The Skeptic’s Postulate  The empirical simulability of feasible experiments reflects a law of nature that requires either fundamental modifications to QM, or else fundamental modifications to our appreciation of the experimental implications of QM (or both). The Enthusiast’s Postulate  No fundamental extensions of QM are required: it is necessary only that we be adequately ingenious in designing scalable means of fault-correction in quantum computing and/or scalable means of sourcing/sinking n-photon quantum states in Aaronson-Arkhipov experiments (etc.). The ensuing unsimulable data-sets will experimentally demonstrate that the Skeptic’s Postulate is wrong!
The Shtetl Optimized comments (so far) show us plainly that the weakest QM skeptical arguments are comparably unconvincing to the weakest QM enthusiastic arguments. As for convincingly strong arguments, none of the preceding 135 Shtetl Optimized comments has demonstrated (to me) that either the Skeptics or the Enthusiasts have any! 138. Joe Shipman Says: Why do you compare monarchists and segregationists to QM-deniers? The monarchists and segregationists may have been losing the political battle, but it is not a consequence of their political theory that their success is assured, so failure does not invalidate the righteousness of their opinions. This is in contrast with Marxism, which DOES assert its historical inevitability and is therefore falsified by failure. On the contrary, the monarchists can point to the failures of politics in democracies due to the shortsightedness of politicians who only see as far ahead as the next election, and find their theories confirmed by experience of non-monarchism. The case of the segregationists is similar; you can argue against them on moral grounds, but it’s not obvious that experience has disproved their theories. 139. Scott Says: Joe Shipman #138: You make an interesting point. I suppose my analogy was based on the empirical fact that most (though not all) firm believers in a political ideology, also predict a future where increasing numbers of people will agree with them—or at the least, they don’t predict that their ideology will nearly vanish from the face of the earth. (If they did expect that, then being the herd animals most humans are, they’d probably switch ideologies!) Even if they predict a huge “temporary” setback (e.g., losing a war), they typically also predict that far enough in the future, the world will come to see the martyrdom and heroism of their cause. For this reason, I submit that, while it’s not necessary as a matter of principle, in fact most political ideologies are pretty tightly coupled to empirical predictions about the future of humankind: for example, “the world will come to see the rightness of superior races enslaving or exterminating inferior ones.” And many of those predictions have been pretty dramatically falsified. And those falsifications have indeed created huge problems for the modern “ideological descendants” of the people who made the predictions, at least if they care about history at all (many don’t). You’re right that all of this is most obvious in the case of Marxism (or, say, apocalyptic religions), which have included predictions—often falsified ones!—as explicit parts of their ideology. (Arguably Nazism also counts, because of its explicit prediction of the “Thousand-Year Reich.”) But my claim is that even the ideologies that don’t include “explicit” predictions—e.g., liberal democracy, segregationism, monarchism—almost always contain “implicit” predictions that are accepted by almost all their adherents, as a major reason for subscribing to the ideology at all. 140. John Sidles Says: Scott’s thesis that quantum skepticism≡monarchy is a Great Analogy … whose opposite therefore assuredly also is a Great Analogy. To appreciate this we ask: A Quantum Trivia Question  In the years 1918–1933, seven physicists received Nobel awards for their seminal roles in the conception of quantum mechanics. How many of the seven were born natives of monarchies? Answer  Six-of-seven were born citizens of monarchies (Planck, Einstein, Bohr, Heisenberg, Schrödinger, and Dirac).
The sole exception is named by the Nobel website as “Prince Louis-Victor Pierre Raymond de Broglie” … a hereditary French title of nobility! When we remark that David Hilbert was himself born under the reign of Prussian monarch William I, and that Nobel Prizes have always been presented by the reigning Swedish Monarch (presently Carl XVI Gustaf!), the conclusion is unassailably evident: An Inarguable Fact  Belief in the absolute physical reality of Hilbert/Dirac quantum dynamics is at present, and historically always has been, nurtured by monarchy. Comments (#44) and (#54) adduce more evidence for the Great Analogy of (quantum enthusiasm)≡(nurture by monarchy), yet surely the historical evidence already cited will suffice to convince any “calm person”! 141. srp Says: Scott #139: Your understanding of conservative intellectual sensibilities and ideology is seriously incomplete. Declinism, fatalism, etc. are the default mode for a good chunk of the right. You can observe this for yourself at the famous end by noting the deep-rooted pessimism of Whittaker Chambers, who was sure he had switched to the losing side. You can observe it at the anonymous end by perusing blog comments on any anti-immigrant or social-conservative site. Excessive wallowing in gloom is actually a perennial vice that the more self-aware conservatives try to police. One reason Reagan made such an impact on the movement is that as a converted FDR Democrat he brought a dose of that optimistic happy-warrior spirit from the other side. 142. Scott Says: srp #141: I’m well-aware of conservatives who wallow in doom-and-gloom prophecies; I even know the sort of fatalistic blog comments on social-conservative sites that you’re talking about. But I thought the whole appeal of a doom-and-gloom prophecy is the idea that, when the apocalypse finally arrives, the world will see that you were right! 143. srp Says: Scott #142: I’m not sure who the post-apocalyptic audience would be for the I Told You So… The really hard-core types believe in cyclical theories in which The Gods of the Copybook Headings are independently rediscovered after the decline and fall stage. But some do take satisfaction that they will be someday vindicated. Of course, modern environmentalism has similar “after you’re all boiling/poisoned/missing the pretty biota/genetically mutated/Soylent Green you’ll see I’m right” tendencies. Civil libertarian types sometimes entertain post-police state ITYS fantasies, too. Human nature cuts across ideologies and philosophies. 144. John Sidles Says: Scott proposes  “My claim is that even the ideologies that don’t include ‘explicit’ predictions — e.g., liberal democracy, segregationism, monarchism — almost always contain ‘implicit’ predictions that are accepted by almost all their adherents, as a major reason for subscribing to the ideology at all.” Concrete historical support for Scott’s thesis may be found in two very readable surveys of belief systems: Rabbi Abba Hillel Silver’s scholarly A history of Messianic speculation in Israel from the first through the seventeenth centuries (1927) and Martin Gardner’s humorous (yet still scrupulously factual) works that include Fads and Fallacies in the Name of Science (1957) and Urantia: the Great Cult Mystery (1995). E.g. in Silver we read: The pathetic eagerness to read the riddle of Redemption and to discover the exact hour of the Messiah’s advent […] proceeded with varying intensity clear down the ages.
At times it seems to be the idle speculation of leisure minds, intrigued by the mystery; at other times it is the search of people in great tribulation. […] Great political changes, boding weal or woe, accelerated the tempo of expectancy. […] The rich fancy of the people, stirred by the impact of these great events, sought to find in them intimations of the Great Fulfillment. Recent quantum computing works like the first-edition and second-edition QIST Roadmaps (LA-UR-02-6900, 2002 and LA-UR-04-1778, 2004) surely can be read as science, yet Scott’s proposition builds upon the great tradition of Silver and Gardner, in encouraging us to read the QIST Roadmaps also as technological prophecy founded upon belief in the absolute physical reality of Dirac/Hilbert state-spaces. Is there an element of ideological/Messianic fervor among the most ardent enthusiasts for quantum computing? A faith sufficiently strong, that the evident shortfall of the QIST timelines cannot shake it? What are the consequences of subjecting faith to the trials of science? If it happens that the messiah of FTQC tarries, for how many generations should physicists retain their devout faith in the inerrant scripture of Hilbert and Dirac? And in particular, why are the strongest defenses of orthodoxy so commonly lacking in humility and humor? These are the trans-disciplinary questions that first Silver’s essays, then Gardner’s essays, and now Scott’s essays, encourage us to ask. In Silver’s history we read Rabbi Jonathan’s “Perish all those who calculate the end, for men will say, since the predicted end is here and the Messiah has not come, he will never come”, and to make the same point more positively (and subtly), there is Maimonides’ creed “I believe with a full heart in the coming of the Messiah, and even though he may tarry, I will wait for him on any day that he may come!” Does Maimonides’ creed apply to the QIST Roadmaps? Should it? These are terrific questions! 145. wolfgang Says: scott #142, referencing srp #142 8-) >> the whole appeal of a doom-and-gloom prophecy I think the appeal is to ‘know’ that everything other people are doing is futile and foolish. The best example imho is zerohedge.com – predicting financial doom-and-gloom since 2009 (when the financial crisis hit bottom). Their followers witness other people (fund managers etc.) make lots of money (and they follow this in great detail), but they ‘know’ that in the end all will be lost, which explains why they are such a popular website. 146. Scott Says: “scott #142, referencing srp #142” Sorry, fixed :-) 147. jonas Says: Oh no! Now you’re saying that you won’t be able to have the quantum and anti-quantum fanatics fight in a gladiator arena and cancel out each other because they’re on the same side? 148. Eliezer Yudkowsky Says: Come to the Dark Side, we have cookies! 149. John Sidles Says: Scott (#11) avers  “The arc of science is long, but it bends toward quantum.” ;) Excellent! :) Aside: the fabled Quote Investigator has researched the fascinating origins and evolution of this fine saying. Yet on the other hand we have: Henry Ford (apocryphal?) “If I had asked my customers what they wanted, they would have told me a ~~faster horse~~ ~~proof of P≠NP~~ quantum computer.” 150. RNH Says: John Sidles, are you a currently practicing physicist? 151.
John Sidles Says: Scott Aaronson poses the question: “of whether there’s any pair of quantum computing skeptics whose arguments for why QC can’t work are compatible with one another’s.” For the answer to Scott’s question to be “no”, all forty-six of Gil Kalai’s tabulated objections to quantum computing (beginning here and ending here) would have to be mutually exclusive … which on probabilistic grounds alone, scarcely seems likely! :) Gil’s list is wonderfully compatible (as it seems to me) with celebrated passages by Donald Knuth and Derek deSolla Price: Derek deSolla Price  It is not just a clever historical aphorism, but a general truth, that “thermodynamics owes much more to the steam engine than ever the steam engine owed to thermodynamics.” […] The dominant force of the process we know as the Scientific Revolution was the use of a series of instruments of revelation that expanded the explicandum of science in many and almost fortuitous directions […] Historically the arrow of causality is largely from the technology to the science.[…] The history of science is only partly a flow of intellectual steps. The other part is the craft of experimental science, which is part of the history of technology. Each radical innovation in this craft tradition gives rise, not to the testing of new hypotheses and theories, but rather to the provision of new information which affects what scientific theories must explain.[…] This process, which I describe as ‘artificial revelation,’ is at the root of many paradigm shifts, perhaps not all, but most. In these cases the paradigm shift comes about because of a change in the technology of science which may be rather trivial and is almost always an intruder from some vastly different current in the history of technology. These passages motivate us to regard Gil Kalai’s forty-six skeptical avenues as intertwining paths — some paths surely more promising than others … but which? — that lead generally toward the conception (in deSolla’s idiom) of a 21st century “synthetic revelation” that extends “the explicandum of quantum dynamics” with sufficient rigor that (in Knuth’s idiom) “the present Art of quantum dynamical simulation becomes a Science.” The Knuth/Price considerations lead us to reflect that perhaps it scarcely matters — for the next few decades anyway — whether the quantum dynamical state-space of Nature is absolutely non-Hilbert/Dirac versus effectively non-Hilbert/Dirac (to answer Scott’s question by citing two distinct-yet-compatible grounds for quantum computing skepticism). In either eventuality scalable quantum computing is infeasible (or is it?) … and yet many other capabilities — that are similarly wonderful, and possibly are more strategically important than quantum computing for a crowded overheating planet — are associated to various subsets of Gil Kalai’s forty-six skeptical possibilities. Conclusion  It is a truth universally acknowledged that various combinations of Gil Kalai’s forty-six skeptical avenues represent eminently hopeful paths for the future of quantum research. 152. John Sidles Says: RNH asks  “John Sidles, are you a currently practicing physicist?” Such questions presuppose that there exists a distinct boundary between science and engineering, and yet this boundary isn’t easy to specify: did von Neumann write to Wiener (in 1946) as a fellow-scientist or as a fellow-engineer?
To the extent that 21st century scientists and engineers are advancing toward increasingly overlapping objectives that are described by increasingly overlapping mathematical languages (and are pursued by increasingly overlapping enterprises), perhaps the science-versus-engineering question — for many (most? all?) quantum researchers — is amenable to the same answer as was recounted by Lyndon Johnson, who told of a student teacher who was asked in a Texas job interview: “Do you teach evolution the Bible way or the Darwin way?” … to which the eager job-seeker answered: “I can teach it either way!”  :) Conclusion  Strict-constructionists can reasonably argue that workers who are largely or entirely confident that the state-space of Nature is rigorously Hilbert/Dirac should self-describe as engineers. John Bell was among the first to embrace this practice (in 1983) … there have since been many more. 153. Sam Hopkins Says: Scott: what is the point of critiquing and debating quantum skeptics with whom you disagree so fundamentally? We sometimes think that debate will lead to greater understanding (when, for instance, your opponent is Gil Kalai), but you don’t think anything like that of a lot of these crackpots. So work towards the quantum computer. When you can crack RSA codes, these skeptics will look like the Flat Earth Society. 154. Raoul Ohio Says: Sam H., I think you have missed the point. SO is largely reports from Scott as he tries to figure out the landscape on the QC frontier. Debating fine points with John, Greg, Gil, Lubos, and other insightful participants is no doubt a fun part of the exercise. Scott also takes on his share and more of refuting doofosity. Many avoid this chore, which is rather like pushing a garbage truck up a hill. Scott appears to be about 50/50 on sparring with QC skeptics and QC true believers. That is usually a good place to be in a debate, when both sides attack you. 155. Rahul Says: No blog posts for 2 weeks?! I keep checking but no luck. 156. John Sidles Says: The lamps have been going out, not only here at Shtetl Optimized, but throughout the quantum blogosphere. Has the 20th century’s explicandum of quantum information theory become too narrow to sustain viable 21st century STEM enterprises? Are the associated quantum explicanda insufficiently inspiring and/or enabling and/or natural? The remedies to these maladies are well-known: lively articles, enabling technologies, novel explicanda, new mathematical frameworks, and provocative posts and comments! 157. Bram Cohen Says: Note that Brady didn’t actually answer whether factoring 17*19 would invalidate his model, or why he spent so much time denying the validity of the numerous experiments demonstrating Bell Inequality violations if he accepts their results. 158. vznuri Says: hi scott, you retrograde luddite you. think you’re spectacularly on the wrong side of this issue, but it could take decades to prove it. however, there is some recent physics/cosmology results by Beane that arguably tie into this that you should be aware of. do you have any comment on those elsewhere? hot off the press, a rebuttal that cites ‘t hooft, wolfram, cellular automata, solitons 159. Scott Aaronson shreds a paper claiming QC won’t work. | Gordon's shares Says: [...] Link. I missed this somehow. He left smoking cinders. Miss his writing, he’s spending too much time with his baby. (that was a joke) [...] 160.
Robert Brady Says: Bram #157: If our model is correct then it will be an order of magnitude harder to factor numbers like 17*19 in a simple geometry. And another order of magnitude harder with larger numbers, and so on. I hope the reason for this is clear from the papers. It is an experimental fact that there are violations of the Bell inequalities. This is usually interpreted to tell us that particles have a delocalised character. As you will know, it is common in fluid dynamics for there to be structures with a well-defined position whose energy is delocalised. A vortex is one well-studied example, and the sonon quasiparticles are another. In this blog it appears to be assumed that quasiparticles in fluid dynamical systems cannot violate the Bell inequalities, even though their energy is delocalised. I should very much like to understand why this is believed. Could any of the contributors to this blog provide a reference? Thank you. 161. John Sidles Says: Another voice is heard: Steven Weinberg concludes (in his new textbook Lectures on Quantum Mechanics) “My own conclusion (not universally shared) is that today there is no interpretation of quantum mechanics that does not have serious flaws, and that we ought to take seriously the possibility of finding some more satisfactory other theory, to which quantum mechanics is merely a good approximation”. There is a discussion of Weinberg’s view on the Quantum Frontiers weblog that (as it seems to me) deserves more comments than it is receiving. 162. Attila Szasz Says: yet another reason not to want a world with a small limited number of qm-proper particles 163. John Sidles Says: Thomas Vidick’s weblog MyCQState is presently hosting an outstanding essay/survey — written by Thomas himself — that is titled A Quantum PCP Theorem?. That the Anderson/Brady preprint has received 150+ comments, while Thomas’ excellent essay has (thus far) received no attention at all, is an imbalance that (as it seems to me) we can all help to remedy. 164. Raoul Ohio Says: The semi-regular announcements of QC breakthroughs recounted in ScienceDaily got off to an early start this week: One thing that makes this week’s alleged breakthrough unusual is that there are some actual “QM looking” equations given in a figure. But — they are blurry so you can’t quite see what they are trying to say. If you click the “enlarge” button, the equations get bigger, but they are still blurry. Could this be a deep metaphor? 165. chrisd Says: Robert #160: I don’t pretend to be an expert on sonons, but one of your presentations suggests that they are “incapable of quantum collapse”, so in particular the measurement postulate does not hold and it’s not clear that sonons can appear in a superposition state, let alone exhibit entanglement. As such, they don’t appear “quantum”, so would not be expected to violate a Bell inequality. And I emphatically disagree that such violations are interpreted as showing the particles have a “delocalized character”. All they show is that quantum mechanics is incompatible with local realism. Big difference. 166. Doriano Brogioli Says: Dear Prof. Aaronson, for sure, nobody will ever find that the “whole framework of exponentially-large Hilbert space was completely superfluous”! However, I would like to ask you (and the readers) your idea about the fact that BQP is a subset of PSPACE. It seems that something huge is needed, but not necessarily a “large Hilbert space”: a very long calculation time can do the work.
Do you think that this can say something fundamental on what QM is? Not an easy equation in a huge Hilbert space but an extremely difficult problem in a smaller space? 167. Bram Cohen Says: Brady #160 writes: “If our model is correct then it will be an order of magnitude harder to factor numbers like 17*19 in a simple geometry. And another order of magnitude harder with larger numbers, and so on.” By ‘order of magnitude’ do you mean factor of 2 or factor of 10? By ‘harder’ do you mean more energy, or more precision, or something else? By ‘larger’ do you mean twice as large, or above some threshold, or what? Fluid dynamical systems are based entirely on local phenomena. See every paper on simulating them ever written. Anything which vaguely looks like action at a distance will have to operate by going through the intervening materials, and the speed of propagation of the effects will be limited by the speed of light. 168. John Sidles Says: In token of respect and gratitude for recent quantum-informatic blogosphere posts by Gil Kalai, Aram Harrow, Thomas Vidick, John Preskill, and the Aaronson/Arkhipov collaboration (and other folks too!), I have posted to Gödel’s Lost Letter (what attempts to be) a unitary appreciation of their various perspectives in relation to the 2014 Simons Institute workshop Quantum Hamiltonian Complexity … which looks like it will be a terrific workshop! 169. Rahul Says: Scientists discover a way around Heisenberg’s Uncertainty Principle According to a pair of scientists from the University of Rochester and the University of Ottawa, there may be a way around Heisenberg’s famous Uncertainty Principle. According to a report published this week in Nature Photonics, a recently developed technique that allows scientists to directly measure the polarization states of light could be the key. The direct measurement technique, developed in 2011, allows scientists to measure the wavefunction – a way of determining the state of a quantum system. The pair of scientists say the new technique relies on a “trick” that measures the first property of a system, leaving the remaining properties untouched. The careful measurement relies on the “weak measurement” of the first property followed by a “strong measurement” of the second property, the pair writes in the report. 170. Robert Brady Says: Bram #167 I appreciate your questions and comments. Yes, Euler’s equation contains only local interactions. Nevertheless, the energy and angular momentum of a vortex are delocalised in the fluid. I believe this means a vortex has at least some properties which are not localised at the core. A sonon has the same delocalised properties. Would you be convinced by an explicit proof of the spin correlation in Bell’s original paper (which he shows violates his inequality)? I am afraid I can’t quantify how ‘hard’ it would be to break the current experimental glass ceiling. You would have to get a single particle to lose coherence with system A, fall into coherence with B, revert to A and so on, with the net effect that it remains in coherence with both. I think this would be exceedingly difficult, but it is at least mathematically conceivable. If someone can achieve it they might be able to break the glass ceiling. If it’s not clear why, the presentation (on my web site) might be helpful. 171. Scott Says: Rahul #169: Weak measurement is a decades-old idea.
And it doesn’t in any way, shape, or form violate the Uncertainty Principle (as nothing can, without violating QM itself—in which case you would’ve heard about it!). In the case of weak measurement, the “catch” (i.e., the one crucial fact popular articles never tell you) is that you need an ensemble of many copies of the system to implement the measurement. YAWN… next! :-) 172. Scott Says: Doriano #166: Actually yes, I’ve been telling people for a while that BQP⊆PSPACE is a deep and underappreciated fact about the foundations of quantum mechanics! (One of my laugh lines is that Feynman won the Nobel Prize in physics basically for pointing out that BQP⊆P#P⊆PSPACE—i.e., that you can organize QFT calculations as a giant sum rather than keeping a whole wavefunction in memory.) On the other hand, I don’t see this as a challenge to the Hilbert space formalism, but as a property of the formalism: a property of “modesty,” if you like. We never observe a naked state vector in the wild; we only ever observe the outcomes of measurements. And if you only care about predicting the outcomes of measurements specified in advance, you can ditch the notion of “states” almost entirely, and organize your calculations in a more efficient way (just how much more efficient being an active research topic). But as soon as you ask for the “state” of the system — i.e., for an object sufficient to probabilistically predict the outcome of any possible measurement that could be made in the future — the exponential character of Hilbert space comes roaring back. 173. John Sidles Says: Unless the dynamical system couples to a continuum of vacuum states, or (equivalently?) a thermal bath, or (equivalently??) is a product-state pullback. For some reason (yet what might that reason be?) Nature requires that both her external reality and human laboratory experiments respect these coupling-to-continuum constraints. That’s why it’s been heartening in recent years (for us system engineers) to witness the gradual weakening of theoretical faith in the absolute reality of unitary evolution on finite-dimensional Hilbert spaces! 174. Bram Cohen Says: Brady, a vortex is not a particle, it’s a phenomenon across a whole area, like how a sound is a pattern of pressure or a differential in temperature is a potential energy source. The point of intersection of the two blades of a scissors can move forward faster than the speed of light, but that isn’t a violation of the speed of light limit, because that point isn’t a particle, it’s a phenomenon which changes what particles it’s talking about over time. 175. Robert Brady Says: Bram #174. Yes. Well put. A vortex is able to escape Bell’s inequality because it is a phenomenon across a whole area. But it has a duality. An ideal vortex is completely characterised by its central position and circulation, and so it can be (and, in fluid mechanics, is) treated like a 2-D particle. Sonons have the same duality, in 3D. To settle this, would you accept an explicit demonstration that sonon quasiparticles have spin-half symmetry and behave precisely like the quantum mechanical particles analysed in Bell’s original paper, including violating Bell’s inequality? 176. Mitch Says: “Bell’s Theorem? Just a flesh wound!” Scott. You seem like a smart guy who knows his way around QM, so I have a few questions about the “Bell’s theorem” proof. Bell’s main criticism of von Neumann’s no-go theorem is as follows: “The essential assumption can be criticized as follows.
At first sight the required additivity of expectation values seems very reasonable, and it is rather the nonadditivity of allowed values (eigenvalues) which requires explanation. Of course the explanation is well known: A measurement of a sum of noncommuting observables cannot be made by combining trivially the results of separate observations on the two terms — it requires a quite distinct experiment.” Yet in every “proof” of Bell’s theorem I’ve come across, expectation values from QM are simply combined linearly in an inequality expression (which is valid BTW) to claim violation. So when Bell wrote in his argument against von Neumann that: “It was not the objective measurable predictions of quantum mechanics which ruled out hidden variables. It was the arbitrary assumption of a particular (and impossible) relation between the results of incompatible measurements either of which might be made on a given occasion but only one of which can in fact be made.” Why is this not also a criticism of Bell’s own theorem? How can Bell’s theorem be valid if the proof relies on a linear combination of expectation values of incompatible measurements, contrary to the principles of QM?
I am trying to solve a Schrödinger equation for a particle hitting a step potential using NDSolve in Mathematica. Here is my code:

mu = 6.; m = mu; R = 5.;
Vs2 = 4./(2*m*R^2);
Vs = -10./(2*m*R^2) + Vs2;
Energy = 0.001;
VCC[r_] = Vs*UnitStep[R - r] + Vs2*UnitStep[r - R];
L = 0;
system = {RC''[r] + 2/r*RC'[r] + (-L*(L + 1)/r^2 - 2*mu*(VCC[r] - Energy))*RC[r] == 0,
   RC[0.001] == 1.0, RC'[0.001] == 0.0};
syssol = NDSolve[system, {RC[r]}, {r, 0.001, 1000.}, MaxSteps -> 10000000];
Plot[Evaluate[{RC[r]} /. syssol], {r, 0.001, 200.0}, PlotRange -> {-1.1, 1.1}]

There should be a decaying wave when the particle hits the potential step, but NDSolve gives an increasing result. I am sure there is some trick to fix this, so I am waiting for your help.

Comments:
Before posting this question, you should first have followed the advice in the comments to your identical question on StackOverflow: stackoverflow.com/questions/9855619/… – Jens Mar 25 '12 at 18:38
What is your m? Also take a look at this (it's a bit of a different case, but you can see how everything is set up): demonstrations.wolfram.com/ScatteringOverPotentialStep – Vitaliy Kaurov Mar 25 '12 at 18:39
By the way, this has exact solutions, so I'm guessing this could be a homework problem and hence not appropriate for this forum. – Jens Mar 25 '12 at 18:43
You forgot to define m here (this is to make it simpler for someone to cut and paste this code directly). – acl Mar 25 '12 at 18:43
And the root of your problem is mathematical, not an error in programming. Did you try to do it analytically, for instance? – acl Mar 25 '12 at 18:44

1 Answer (accepted)

noeckel’s answer on StackOverflow is spot on. This is not a Mathematica issue, this is a mathematical issue. Namely, Mathematica is giving you the correct solution to the system of differential equations and boundary conditions given. The conditions given (and in particular the derivative imposed at the origin) are incompatible with the expected decay. Bear in mind that, at $r \geq 5$, your wavefunction will have two components of the form $\exp(\alpha r)$ and $\exp(-\alpha r)$. For each set of boundary conditions, you get a different linear combination of these two, and the only conditions that make sense are those for which the diverging term is zero.

Comments:
+1. If memory serves me well, the usual conditions for the radial wave function are the absence of the increasing exponent at infinity and finiteness at zero (or at least convergence of the integral of Abs[f[r]]^2 over the volume around zero; the divergence of this integral would correspond to the classical fall-on-the-center scenario, for potentials ~ 1/r^a with a > 2, IIRC). This rules out fixing the derivative at zero, as you said, and fixing the w.f. value itself does not make sense either, since it is fixed by w.f. normalization (that is, unless I forgot everything, it's been a while :)). – Leonid Shifrin Mar 25 '12 at 19:24
Sorry guys, m = mu. Also, I guess this is not exactly a mathematical issue, because Mathematica does not handle bounded problems (f''(r) - k^2*f(r) == 0 type problems) by itself, so you have to tune one of the parameters (energy, potential, etc.) in order to get a decaying solution. I am able to do that up to r = 50-60, but I would like to get it for farther distances. – serelha Mar 25 '12 at 23:11
By the way, I have an analytic solution for this type of problem, and my purpose is to compare my analytic result to the numerical calculation. This is not homework. – serelha Mar 25 '12 at 23:17
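For what it's worth, one way to implement the energy tuning mentioned in the comments is a simple shooting method. The sketch below reuses the definitions of mu, R, L and VCC from the question; the matching radius rMax = 60, the energy bracket in FindRoot, and the names rc and energySol are arbitrary choices of this sketch, not part of the original code.

(* Radial function at rMax for a trial energy e. The ?NumericQ guard keeps
   rc unevaluated until FindRoot supplies a number. For a bound (decaying)
   solution, rc changes sign as e crosses an eigenvalue. *)
rc[e_?NumericQ, rMax_] := Module[{sol},
  sol = NDSolve[{RC''[r] + 2/r*RC'[r] +
       (-L*(L + 1)/r^2 - 2*mu*(VCC[r] - e))*RC[r] == 0,
     RC[0.001] == 1.0, RC'[0.001] == 0.0},
    RC, {r, 0.001, rMax}];
  First[RC[rMax] /. sol]]

(* Secant-method search for an energy at which the growing exponential is
   absent; the bracket is a guess and may need adjusting. For this potential,
   bound states can only occur for Vs < e < Vs2. *)
energySol = e /. FindRoot[rc[e, 60.] == 0, {e, -0.015, 0.005}]

Re-running the original NDSolve with Energy replaced by energySol should then give a solution that decays out to roughly r = rMax before round-off error reintroduces the growing exponential; pushing the decay to larger r generally needs higher WorkingPrecision, or integrating inward from large r instead.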
Infrared Spectroscopy - mirrored from UCLA - Adapted from: R. L. Pecsok and L. D. Shields, Modern Methods of Chemical Analysis (Wiley, New York, 1968); and A. T. Schwartz et al., Chemistry in Context (American Chemical Society, Washington, DC, 1994).

A portion of the electromagnetic spectrum is shown in Figure 1, along with the names associated with various regions of the electromagnetic spectrum. Our eyes can detect only a very limited range of wavelengths, the visible spectrum between about 400 and 750 nm.

Figure 1 The Electromagnetic Spectrum

Atoms and molecules can absorb electromagnetic radiation, but only at certain energies (wavelengths). The diagram in Figure 2 illustrates the relationships between different energy levels within a molecule. The three groups of lines correspond to different electronic configurations. The lowest energy, most stable electron configuration is the ground state electron configuration. Certain energies in the visible and uv regions of the spectrum can cause electrons to be excited into higher energy orbitals; some of the possible absorption transitions are indicated by the vertical arrows. Very energetic photons (uv to x-ray region of the spectrum) may cause an electron to be ejected from the molecule (ionization).

Photons in the infrared region of the spectrum have much less energy than photons in the visible or uv regions of the electromagnetic spectrum. They can excite vibrations in molecules. There are many possible vibrational levels within each electronic state. Transitions between the vibrational levels are indicated by the vertical arrows on the left side of the diagram.

Microwave radiation is even less energetic than infrared radiation. It cannot excite electrons in molecules, nor can it excite vibrations; it can only cause molecules to rotate. Microwave ovens are tuned to the frequency that causes molecules of water to rotate, and the ensuing friction causes heating of water-containing substances. Figure 3 illustrates these three types of molecular responses to radiation.

Figure 2 Energy Levels in Molecules

Figure 3 Molecular responses to radiation

What do we mean by molecular vibrations? Picture a diatomic molecule as two spheres connected by a spring. When the molecule vibrates, the atoms move towards and away from each other at a certain frequency. The energy of the system is related to how much the spring is stretched or compressed. The vibrational frequency is proportional to the square root of the ratio of the spring force constant to the masses on the spring. The lighter the masses on the spring, or the tighter (stronger) the spring, the higher the vibrational frequency will be. Similarly, vibrational frequencies for stretching bonds in molecules are related to the strength of the chemical bonds and the masses of the atoms. Molecules differ from sets of spheres-and-springs in that the vibrational frequencies are quantized. That is, only certain energies for the system are allowed, and only photons with certain energies will excite molecular vibrations. The symmetry of the molecule will also determine whether a photon can be absorbed.

The number of vibrational modes (different types of vibrations) in a molecule is 3N-5 for linear molecules and 3N-6 for nonlinear molecules, where N is the number of atoms. So the diatomic molecule we just discussed has 3 x 2 - 5 = 1 vibration: the stretching of the bond between the atoms. Carbon dioxide, a linear molecule, has 3 x 3 - 5 = 4 vibrations.
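For reference (this equation is not in the original handout, but it is the standard spheres-and-springs result being described above), the vibrational frequency of two masses m_1 and m_2 joined by a spring of force constant k is

\nu = \frac{1}{2\pi}\sqrt{\frac{k}{\mu}}, \qquad \mu = \frac{m_1 m_2}{m_1 + m_2}

where \mu is the reduced mass. A stiffer bond (larger k) or lighter atoms (smaller \mu) thus give a higher vibrational frequency, which is exactly the trend noted below for triple versus double versus single bonds, and for vibrations involving heavy atoms.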
These vibrational modes, shown in Figure 4, are responsible for the "greenhouse" effect in which heat radiated from the earth is absorbed (trapped) by CO2 molecules in the atmosphere. The arrows indicate the directions of motion. Vibrations labeled A and B represent the stretching of the chemical bonds, one in a symmetric (A) fashion, in which both C=O bonds lengthen and contract together (in-phase), and the other in an asymmetric (B) fashion, in which one bond shortens while the other lengthens. The asymmetric stretch (B) is infrared active because there is a change in the molecular dipole moment during this vibration. To be "active" means that absorption of a photon to excite the vibration is allowed by the rules of quantum mechanics. [Aside: the infrared "selection rule" states that for a particular vibrational mode to be observed (active) in the infrared spectrum, the mode must involve a change in the dipole moment of the molecule.] Infrared radiation at 2349 cm⁻¹ (4.26 µm) excites this particular vibration. The symmetric stretch is not infrared active, and so this vibration is not observed in the infrared spectrum of CO2.

The two equal-energy bending vibrations in CO2 (C and D in Figure 4) are identical except that one bending mode is in the plane of the paper, and one is out of the plane. Infrared radiation at 667 cm⁻¹ (15.00 µm) excites these vibrations. [Aside: another way of illustrating the out-of-plane mode is to place circled + or - signs on the atoms, signifying motion above or below the plane of the paper, respectively.]

Thought question: Why do you think it takes more energy (shorter wavelengths, higher frequencies) to excite the stretching vibration than the bending vibration?

Figure 4 Vibrations of CO2

In addition to bond stretching and bond bending, more complicated molecules vibrate in rocking and twisting modes, which arise from combinations of bond bending in adjacent portions of a molecule. (These are sketched in the handout you received in lecture.) Torsions involve changes in dihedral angles. This type of mode is analogous to twisting the lid off the top of a jar. No bonds are stretched, and no bond angles change, but the spatial relationship between the atoms attached to each of two adjacent atoms will change. The torsional mode for ethane is illustrated below.

Without going into details at this point, we can note some general trends. The stronger the bond, the more energy will be required to excite the stretching vibration. This is seen in organic compounds, where stretches for triple bonds such as C≡C and C≡N occur at higher frequencies than stretches for double bonds (C=C, C=N, C=O), which are in turn at higher frequencies than single bonds (C-C, C-N, C-H, O-H, or N-H). The heavier an atom, the lower the frequencies for vibrations that involve that atom. The characteristic regions for common infrared stretching and bending vibrations are given in Figure 5. Further details are given in the tables at the end of this handout.

How are infrared spectra obtained, and what do they look like? An infrared spectrometer consists of a glowing filament that generates infrared radiation (heat), which is passed through the sample to be studied. A detector measures the amount of radiation at various wavelengths that is transmitted by the sample. This information is recorded on a chart, where the percent of the incident light that is transmitted through the sample (% transmission) is plotted against wavelength in microns (µm) or against frequency expressed in wavenumbers (cm⁻¹).
Remember that energy is inversely proportional to wavelength. If we define the wavenumber (a.k.a. "reciprocal centimeters") as 1/λ, in units of cm⁻¹, we have a parameter that is directly proportional to energy. Figure 6 shows the infrared spectrum of a gaseous sample of carbon dioxide. Note that the intensity of the transmitted light is close to 100% everywhere except where the sample absorbs: at 2349 cm⁻¹ (4.26 µm) and at 667 cm⁻¹ (15.00 µm).

Figure 5 Chart of Characteristic Vibrations

Figure 6 Infrared spectrum of Carbon Dioxide

Because the IR spectrum of each molecule is unique, it can serve as a signature or fingerprint to identify the molecule. This feature, along with the fact that it is a non-destructive technique, has made infrared spectroscopy a valuable method in chemical analysis. Areas in which it is used extensively include pharmaceutical analysis, quality control in industrial processes, environmental chemistry, geology and astronomy. One difficulty, however, is that the infrared (IR) spectra of molecules with more than a few atoms can be very complicated. How do we know what vibration each absorption band in the IR spectrum corresponds to? There are really three possible answers.

1. It is possible to perform elaborate chemical calculations that allow us to develop pictures of each vibrational mode. How accurate these calculations are depends on the method used. As part of the laboratory assignment, you will examine the results of two different methods, as described further below. As with all modeling, these calculations are subject to a variety of errors, and so the quality of the results varies widely with the method and the complexity of the molecule being studied.

2. In many cases, it is not important to know the exact nature of each vibration. Rather, we might just want to know whether certain functional groups (e.g. -COOH, -NH2, etc.) are present in the molecule. It turns out that some molecular vibrations can be approximately described just in terms of the motions of a few of the atoms, while the other atoms move only slightly or not at all. This approximation is called "functional group analysis". It is particularly useful as a tool for qualitative analysis of organic molecules, and for monitoring the progress of organic reactions.

3. In other cases, we may not even care what the modes are! We may just want to obtain a spectrum of our sample, and compare it to a library of spectra of known compounds, in order to identify our sample. This procedure is common in environmental and forensic analyses.

As part of your laboratory and take-home assignment, you will participate in the first two types of analyses.

Laboratory Assignment

1. Computer exercises:

A. Using the Spartan program.

Introduction: The Spartan[TM] molecular modeling program uses quantum mechanics to solve the Schrödinger wave equation for the molecule of interest. Once the positions of the atoms and the molecular wavefunction are known, the various molecular vibrations and their frequencies are calculated. The Spartan program can use several different quantum mechanical methods to determine the molecular geometry (the lowest energy arrangement of the atoms in the molecule) and the vibrational frequencies. All of the methods involve solving the Schrödinger equation for a system with many electrons; it is a formidable task, because it requires calculating many very complicated integrals. Ab initio ("from first principles") calculations do just that, without any approximations.
If we provide a sufficiently exact description of the wavefunction on each atom and accurately account for their interactions in the molecule, the results (wavefunction, geometry, vibrational frequencies) will be very accurate. Poorer or simpler descriptions of the atomic wavefunctions and/or the interactions between electrons in the molecule will cause the results to be less accurate. Bond lengths may differ somewhat from actual (experimental) values, and vibrational frequencies will also be in so-so agreement with experiment. As part of the lab, you will examine the results of an "STO-3G" level ab initio calculation of the geometry and vibrations of CO2. As you might imagine, ab initio calculations can be very time consuming. For molecules having more than a few second-row atoms, accurate calculations can take hours or days, even on the most powerful computers! Simplifying the calculations by setting some of the integrals equal to zero or to constants results in great time savings, often with little loss of accuracy. The parameters are set to give good agreement with experiment for a test set of molecules. This type of calculation is called "semi-empirical". Note that the calculated frequencies may deviate from experimental values by as much as 200 cm⁻¹.

Activities on the Silicon Graphics or Hewlett-Packard workstation: The four molecules you will study are carbon dioxide, acetic acid, propionamide, and allyl benzoate. The calculations have already been done, and have been stored in the computer. You will examine the results. The files you will look at are:

* carbon_dioxide_STO3G
* carbon_dioxide_PM3
* acetic_acid_IR
* propionamide_IR
* allyl_benzoate_IR

* First, write the structure of each molecule, and calculate the number of normal modes. (You can check organic chemistry textbooks, or the Merck Index, for the structures of compounds with which you are not familiar.)

* After opening the Spartan file, select "VIBRATIONS" from the output menu, and make a list of the vibrational frequencies (round to the nearest integer!). Clicking on a frequency in the table will cause the corresponding vibrational mode to be displayed. You can use the mouse to rotate the molecule or to move it around on the screen to make it easier to see the displacements of the atoms. For each mode, identify the atoms involved and the types of vibration (stretch, bend, rock, torsion). (You can do this by sketching the molecule, and using arrows to show the direction of motion.) Do this for all of the modes of carbon dioxide and acetic acid, and, for propionamide, for all the modes at wavenumbers higher than 2100 cm⁻¹ and for every fifth one below 2100 cm⁻¹. For allyl benzoate, identify the ranges of frequencies that arise from the different types of vibration (e.g. C-H stretch).

* Compare the vibrational frequencies and modes of CO2 calculated by the two methods, and with the experimental data given earlier in this handout.

Questions: What types of modes occur at the highest frequencies? At the lowest? Are there similar modes in the different molecules? If so, describe several of them. If not, suggest a reason.

2. Post-lab exercise

You will be given an infrared spectrum of an "unknown" molecule, along with its molecular formula.

(a) Using the method described below, calculate the number of sites of unsaturation (see explanation below).

(b) Draw several different Lewis structures consistent with the molecular formula. (For each arrangement of atoms, be sure that you draw the best Lewis structure!)
(c) Using the attached data tables, complete the IR diagnostic worksheet for the spectrum of your unknown. Propose a structure for the unknown that is consistent with your analysis.

1) The data tables give characteristic experimental frequencies that are typical for the vibrations of different functional groups. Because they are based on experimental data, just like the spectrum of your unknown, you can assume that the frequency ranges are accurate.

2) The tables make the approximation that the observed bands in the IR spectra arise from vibrational modes involving the motion of only a few atoms, e.g. a -NH2 group. But you know from the calculations you did in lab that real vibrational modes are much more complicated, and may involve many atoms in a molecule. Thus, there are assumptions and approximations involved in using the functional group tables!

3) The Spartan program provides convenient graphics that allow us to visualize vibrational modes. In reality, the amplitudes of the motions are much smaller than were shown on the screen! Actual bond stretching vibrations involve a change in bond lengths of only, at most, 1-2% (i.e. about 0.01 Å!), while actual angle bending distortions are ≤ 5°.

4) For more information on computational chemistry methods, such as the ab initio and semi-empirical methods used here, you will find a tutorial on the World Wide Web. Using Netscape or MOSAIC, go to

Determining Sites of Unsaturation

In organic compounds, the term saturated means that all the atoms have single bonds and that, if the compound is a hydrocarbon, the general formula is CnH2n+2. If there are any double or triple bonds, or if there is a ring in the molecular structure, then the compound is said to have sites of unsaturation. For example, hexane, C6H14 (I), is saturated. Hexene (II) and cyclohexane (III), both isomers with the formula C6H12, have two fewer hydrogen atoms, or one site of unsaturation each.

When other atoms besides carbon and hydrogen are present, a formula can be used to determine the number of sites of unsaturation:

# sites = { [ sum of n(valence - 2) ] + 2 } / 2

where the summation is over all atoms in the molecular formula and n is the number of atoms of each kind. The valence is the number of bonds that the atom forms. Thus, the valence for C = 4; N = 3; O = 2. For H, Cl and the other halogens, the valence = 1.

(1) Calculate the number of sites of unsaturation in the molecule C2H4O2.
Answer: # = {[2(4-2) + 4(1-2) + 2(2-2)] + 2}/2 = 1

(2) Calculate the number of sites of unsaturation in C6H6.
Answer: # = {[6(4-2) + 6(1-2)] + 2}/2 = 4
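The counting rule above is mechanical enough to automate. Here is a minimal sketch in Mathematica (the function name, the valence table, and the input format are illustrative choices of this sketch, not part of the handout):

(* Valences as used in the handout: C = 4, N = 3, O = 2, H and halogens = 1 *)
valence = <|"C" -> 4, "N" -> 3, "O" -> 2, "H" -> 1, "F" -> 1, "Cl" -> 1, "Br" -> 1, "I" -> 1|>;

(* atoms is a list of {element, count} pairs; implements
   # sites = { [ sum of n(valence - 2) ] + 2 } / 2 *)
sitesOfUnsaturation[atoms_List] :=
  (2 + Total[(valence[#1] - 2)*#2 & @@@ atoms])/2

sitesOfUnsaturation[{{"C", 2}, {"H", 4}, {"O", 2}}]  (* 1, as in example (1) *)
sitesOfUnsaturation[{{"C", 6}, {"H", 6}}]            (* 4, as in example (2) *)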
First, let me state that I'm a lot less experienced with physics than most people here. Quantum mechanics was as far as I got, and that was about 9 years ago, with no use in the meantime.

A lot of people seem to think that by observing the universe we are changing the universe. This is the standard woo hand-waving produced by people who don't understand the physics -- thank you, Fritjof Capra. In reading about quantum physics out there on the Internet, I find this idea is propagated far too often. Often enough that I question whether my understanding is accurate.

My questions:

1. Does observing a particle always mean hitting it with something to collapse the wave function?
2. Is there any other type of observation?
3. What do physicists mean when they say "observe"?

Related, but doesn't get to the question of what observation is: What is the difference between a measurement and any other interaction in quantum mechanics?

6 Answers

Assuming that the incoming "first" particle is prepared in a pure state, interaction with another particle does seem necessary. Such an interaction might simply be the spontaneous emission of a photon or other particle by the original incoming particle, however. Most importantly, such an interaction is not itself sufficient. For a measurement event to occur (wave function collapse in the von Neumann formalism) we must also "physically lose track" of some of the information of the interacting particle after the interaction has taken place, so that we must replace the entangled-state description of the two particles after the interaction with a probabilistic mixture of such states, forcing a description of the first particle after the interaction in terms of a density matrix (a probabilistic mixture) rather than the complex-valued pure-state amplitude we started with. This change of description automatically includes an increase in entropy, which also occurs physically.

Unless the second, interacting, particle either escapes the apparatus or interacts with a third particle which so escapes, i.e. "interacts with the environment", no measurement has yet occurred, the entire interaction is in principle reversible, and the complex-amplitude description remains appropriate. Measurement requires "loss" (via decoherence) of the entangling information by further entangling with the environment and dissipation. The escaping third particle is often an emitted photon or phonon. See the reference in the linked answer What is the difference between a measurement and any other interaction in quantum mechanics?, particularly the 1939 article by London and Bauer (but avoid their metaphysics) for details. More recently, see this book on quantum measurement theory, particularly page 102, referring to the view of Zeh.

You may have noticed that some ambiguity remains in this description. This has been analyzed in great detail and resolved by Zurek, but it gets a little tricky. See e.g. http://arxiv.org/abs/1001.3419 and references therein.

"Collapse the wavefunction" is a loaded term that would not be agreed to by all physicists. There are a great many "no-collapse" interpretations out there in which there is no special role for measurement that directly alters the wavefunction.
There are also collapse-type interpretations in which the collapse happens more or less spontaneously, as in Roger Penrose's theory whereby gravitational effects cause any superposition above a certain mass threshold to collapse incredibly quickly.

As a practical matter, it's hard to think of a measurement technique that couldn't be described as hitting one particle with another. Most quantum optical experiments rely on scattering light off an atom in order to detect the state of the atom, many charged-particle experiments involve running the particles into a surface or a wire in order to detect them, and so on. I think that the solid-state qubit experiments done by people like Rob Schoelkopf at Yale would probably count as an exception, because I believe they use a SQUID to detect the state of their artificial atoms via magnetic fields. If you want to get really picky, though, you could probably consider that a particle interaction as well, in some QED sense. Even there, the act of measurement does not leave the initial system unchanged.

While there would not be general agreement with the specific phrasing "observation changes the universe," the idea that quantum systems behave differently after a measurement is central to the theory, and can't be avoided.

The thing is, there are a variety of different opinions that, since they cannot be distinguished by experiment, are around and used by different people to interpret experiments. The conventional view of quantum mechanics, although it has eroded over time, is that a sharp distinction has to be made between the classical and the quantum. The apparatus has to be described classically, while the quantum describes the measurement results of the experiment. Von Neumann then tried to show that the distinction need not be sharp and that you can include the apparatus in the quantum description, but it then has to be observed itself by another apparatus which has to be described classically. Wigner argued that this regression of the quantum/classical divide can be carried up to the mind, which is why there is a lot of woo latching onto these ideas: they seem to justify the importance of the human mind over anything else in the world.

Other approaches have argued that there is no distinction between classical and quantum at all. One is the Many Worlds Interpretation, which states that a superposition of states represents actually realized states, but in different universes. Another is the Bohmian or pilot-wave interpretation, which states that the wavefunction describes a wave that guides particles; an extra equation is supplemented to the Schrödinger equation to show how this guiding happens. In both theories, there is no need to speak about measurement, at least not in any deeper sense than in classical physics. Here's a non-exhaustive list of interpretations of quantum mechanics.

So, within the context of the Copenhagen interpretation and the von Neumann/Wigner paradigm, the answer to the title question would be yes, there is a difference between measurement and hitting with another particle. Within the context of the Bohmian or MWI interpretations, the answer would be no.

How do you know the output of an experiment? You "observe" it. How do you actually do that? Well, with your eyes. How does that work? Photons emitted or reflected by a surface hit special molecules in your eyes.
This leads to a signal which is transmitted into the rest of your brain. So the actual observation eventually happens by exchanging photons.

Now you want to observe really small particles -- particles that are so small that interaction with a photon actually changes them. Which raises the question of whether you can examine something like an electron without hitting it with a photon or anything else that might change its state. This is really hard. So in most cases, observing something does influence it. There are a couple of tricks, like observing something that is itself influenced by the particle you want to measure (say, the electric field of a moving electron could influence another molecule, and you could do your measurement on the molecule). But in the end, all these processes are based on resonance. If you want to measure anything, you must create a resonance of some kind, and that always means two-way interaction at some level.

This question reminds me of the Zen koan: What is the sound of one hand? According to this site (which also elaborates on the koan) if: ... and/or understand quantum mechanics :-) Ok ... maybe not quite well enough for a degree, but you get my point ;)

All particles are the sums or products of other particle interactions; however, higher-energy collisions are required to reach the threshold at which observations can be made.
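The "losing track of the second particle" described in the accepted answer can be made concrete with a few lines of linear algebra. Here is a minimal sketch (an editorial illustration, not from any of the answerers): build a two-qubit Bell state, trace out the second qubit, and observe that the first qubit must then be described by a maximally mixed density matrix rather than a pure state.

```python
import numpy as np

# Two-qubit Bell state (|00> + |11>)/sqrt(2).
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)

rho_pair = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices (a, b, a', b')

# "Lose track" of particle b: partial trace over its indices.
rho_a = np.einsum("abcb->ac", rho_pair)

print(np.round(rho_a.real, 3))        # [[0.5 0. ] [0.  0.5]] : maximally mixed
print("purity Tr(rho^2) =", np.trace(rho_a @ rho_a).real)  # 0.5 < 1, not pure
```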
Short strong hydrogen bonds studied by inelastic neutron scattering and computational methods

Date of Award
Degree Type
Degree Name: Doctor of Philosophy (PhD)
Bruce S. Hudson
Keywords: Hydrogen bonds, Neutron scattering, Bond energies, Potassium hydrogen bistrifluoroacetate
Subject Categories: Chemistry | Physical Sciences and Mathematics

Hydrogen bonding can be simply defined as any interaction between molecules that involves the participation of hydrogen, or, stated in another fashion: "a hydrogen bond exists when a hydrogen atom H is bonded to more than one other atom". Within this simple definition is a diverse range of interactions that is difficult to explain using conventional ionic and covalent bonding. Hydrogen bonds that exhibit large bond energies are termed short, strong hydrogen bonds and possess unique potential surfaces. These potential surfaces are termed low-barrier because the energy barrier that separates the potential wells lies below or at the zero-point level: little or no energy is required to move the hydrogen from one well to the other. Such a potential surface requires special attention when calculating quantum mechanical solutions for these systems, because of the quartic shape of these short, strong hydrogen bond potential surfaces.

In this thesis I have examined several short, strong hydrogen-bonded systems using inelastic neutron scattering and have calculated the corresponding neutron vibrational spectra. I have also made a detailed investigation into the potential surface of the strong hydrogen bond in potassium hydrogen bistrifluoroacetate. Results have shown that it is possible to calculate accurate structural coordinates and vibrational spectra that agree with experiment. The calculations give an incorrect energy minimum, resulting in incorrect vibrational band placement in the inelastic neutron spectrum, because a harmonic fit is applied to an anharmonic potential surface. The anharmonic potential surface resulting from the barrier between double wells positioned below the zero-point level is calculated for the hydrogen bond. This can be correctly modeled by calculating the potential surface at a fixed O-O distance and solving the Schrödinger equation along this potential. This is the first comparison of neutron vibrational spectra and calculated spectra to provide an understanding of the limitations of computational methods for examining strong hydrogen bonds. It is a new and powerful tool for accurately examining the strength and structure of strong hydrogen bonds.
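The final step the abstract describes, solving the one-dimensional Schrödinger equation along a double-well proton-transfer coordinate, can be sketched with a standard finite-difference diagonalization. The quartic potential and all parameter values below are illustrative placeholders (in units with hbar = m = 1), not the thesis's actual surface:

```python
import numpy as np

# Grid solution of -(1/2) psi'' + V(x) psi = E psi  (hbar = m = 1).
N, L = 1000, 10.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

V = 0.5 * (x**2 - 2.0)**2                    # toy symmetric quartic double well

# Three-point stencil for the kinetic term gives a tridiagonal Hamiltonian.
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

E, vecs = np.linalg.eigh(H)
print("lowest levels:", np.round(E[:4], 4))  # a close tunneling doublet appears
print("barrier top V(0) ~", round(V[N // 2], 4))  # compare zero-point level to barrier
```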
Happy 100th birthday to the Bohr atom

Bohr's simple model led to quantum mechanics and beyond.

[Image caption: Danish physicist Niels Bohr, whose model of atoms helped explain the spectrum of light emitted and absorbed by different elements, as illustrated by the spectrum emitted by the Sun.]

Niels Bohr's model of the hydrogen atom—first published 100 years ago and commemorated in a special issue of Nature—is simple, elegant, revolutionary, and wrong. Well, "wrong" isn't exactly accurate—incomplete or preliminary are better terms. The Bohr model was an essential step toward an accurate theory of atomic structure, which required the development of quantum mechanics in the 1920s. Even in its preliminary state, the model is good enough for many calculations in astronomy, chemistry, and other fields, saving the trouble of performing often-complex calculations with the Schrödinger equation. This conceptual and mathematical simplicity keeps the Bohr model relevant.

Despite a century of work, atomic physics is not a quiet field. Researchers continue to probe the structure of atoms, especially in their more extreme and exotic forms, to help understand the nature of electron interactions. They've created anti-atoms of antiprotons and positrons to see if they have the same spectra as their matter counterparts or even to see if they fall up instead of down in a gravitational field. Others have made huge atoms by exciting electrons nearly to the point where they break free, and some have made even more exotic "hollow atoms," where the inner electrons of atoms are stripped out while the outer electrons are left in place.

Bohr and his legacy

The Bohr atomic model is familiar to many: a dense nucleus of positive charge with electrons orbiting at specific energies. Because of that rigid structure (in contrast to planets, which can orbit a star at any distance), atoms can only absorb and emit light of certain wavelengths, which correspond to the differences in energy levels within the atom. Bohr neatly solved the problem of that feature of the hydrogen spectrum and (along with contributions by other physicists) a few more complex atoms. Even though the Bohr model was unable to provide quantitative predictions for many atomic phenomena, it did explain the general behavior of atoms—specifically why each type of atom and molecule has its own unique spectrum. Bohr also established the difference between strictly electronic phenomena—emission and absorption of light, along with ionization—and radioactivity, which is a nuclear process.

Bohr's model assumed a very simple thing: electrons behave like particles in Newtonian physics but are confined to specific orbits. Like the Copernican model of the Solar System, which opened up a mental space for later researchers to develop the view we have today, the Bohr model solved a major problem in understanding atoms, establishing a conceptual framework on which the mathematical structure of quantum mechanics would be built.

Niels Bohr was a Danish physicist, but he worked extensively in other countries, collaborating with scientists from many nations. The work that made his name, for example, was based on postdoctoral research in England with J.J. Thomson, who discovered the electron, and Ernest Rutherford, who established the existence of atomic nuclei. Bohr famously continued to work on quantum theory throughout his life and made significant contributions to nuclear physics in the early days of that field.
He also helped physicists escape Europe during the Nazi persecution of "undesirable" groups, including Jews and those perceived as working on "Jewish science." (Hollywood, it's time for a Bohr movie.)

Atoms and electrons in the 21st century

One important implication of Bohr's research was that electrons behave differently within atoms and materials than they do in free space. As Nobel laureate Frank Wilczek pointed out in his comment in the Nature retrospective, we're still learning exactly how interactions shape electrons—and how electrons' properties result in different behavior at high and low energies. Bohr's original model conceived of electrons as point particles moving in circular orbits, but later work by the likes of Louis de Broglie and Erwin Schrödinger showed that the structure of atoms arose naturally if electrons have wavelike properties. Bohr was one of those who grappled with the meaning of the wave-particle duality in the 1920s, something modern experiments are still probing in creative ways.

Similarly, atomic physics is pushing into new frontiers, thanks to both gentler and more aggressive probing techniques. Rydberg atoms—atoms in which the electrons are excited nearly to the ionization point—are physically very large, some almost big enough to be macroscopic. These aren't very stable, since such a loose connection between electron and atom means nearly anything can strip the electron away. However, the weaker interactions also mean a Rydberg atom behaves nearly as Bohr's model predicted: a miniature solar system with a planetary electron. This sort of system allows physicists precise control of atomic properties.

Another exciting area of research involves ionization, but of a different sort than usual. As Bohr's model indicated, electrons fall into orbits, and those closest to the nucleus are harder to strip from the atom than those farther out. Thus, ionization typically happens when electrons are removed from the outermost orbits. However, the pulsed X-ray laser at SLAC can remove just the inner electrons, leaving an ion with a thin shell of electrons outside—what SLAC physicist Linda Young referred to as a "hollow atom." This is an unstable configuration, since the electrons prefer to be in a lower energy state, so hollow atoms rapidly collapse, emitting bursts of high-energy photons and electrons. Some researchers are considering potential applications for these energy outbursts.

Bohr's model was the beginning, but the story of atoms he helped write is still ongoing. The tale is (you knew it was coming) far from Bohring.
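The scalings that make Rydberg atoms "almost macroscopic" fall straight out of the Bohr model: the orbit radius grows as n^2 and the binding energy shrinks as 1/n^2. A quick sketch (an editorial illustration, not from the article):

```python
# Bohr-model scalings for hydrogen-like Rydberg states.
A0 = 5.29177e-11   # Bohr radius in meters
RY = 13.6057       # Rydberg energy in eV

def orbit_radius_m(n):
    return A0 * n**2        # r_n = n^2 * a0

def binding_energy_eV(n):
    return -RY / n**2       # E_n = -13.6 eV / n^2

for n in (1, 10, 100):
    print(f"n={n:4d}: r = {orbit_radius_m(n):.3e} m, "
          f"E = {binding_energy_eV(n):+.4f} eV")
# n = 100 gives r ~ 0.5 micrometers and a binding energy of ~1.4 meV,
# which is why such atoms are huge and easily ionized.
```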
I haven't understood this: physics is invariant under the CPT transform, but the heat (diffusion) equation $\nabla^2 T=\partial_t T$ is not invariant under time reversal, though it is P invariant. So CPT symmetry would be violated. What have I not understood? Thank you

6 Answers

The heat equation is a macroscopic equation. It describes the flow of heat from hot objects to cold ones. Of course it cannot be time-reversible, since the opposite movement never happens. Well, I say 'of course', but you actually have stumbled on something important. As you say, the fundamental laws of nature should be CPT invariant, or at least we expect them to be. The reason the heat equation is not CPT invariant is that it is not a fundamental law, but a macroscopic law emerging from the microscopic laws governing the motions of elementary particles. There is, however, a problem here: how does this time asymmetry arise from microscopic laws that are themselves time-reversal invariant? The answer to that is given by statistical mechanics. While the microscopic laws are time-reversible (I'll focus on T and leave CP aside), not all states are equally likely with respect to certain choices of the macroscopic variables. There are more configurations of particles corresponding to a room filled with air than to a room where all the air is concentrated in one corner. It is this asymmetry that forms the basis of all explanations in statistical mechanics. I hope that clears things up a bit.

Ok! A last question now: the Schrödinger equation isn't invariant under time reversal, because of the first derivative in t. But isn't that a microscopic law? – Boy Simone Dec 2 '10 at 12:54

Actually, the Schrödinger equation is invariant. But you have to take the complex conjugate of $\psi$. Since $\psi^*$ and $\psi$ have the same probability distributions $|\psi|^2$, the physics remains the same. – Raskolnikov Dec 2 '10 at 12:58

Great! Thank you :-) – Boy Simone Dec 2 '10 at 13:14

Nice summary. It's worth noting, however, that this is a deep enough topic that multi-hundred-page books have been written on the matter. – dmckee Dec 2 '10 at 19:33

@dmckee: Of course, I didn't mean to give an exhaustive explanation. In fact, I left my explanation open to many attacks on purpose. I hope that Boy will think further and come to these questions by himself. But a thorough answer would indeed need a thorough course in statistical mechanics. – Raskolnikov Dec 2 '10 at 22:43

The CPT theorem is not a theorem for all of physics but only for quantum field theory (QFT). Also, CPT invariance doesn't mean that a QFT is necessarily invariant with respect to any of the C, P and T (or PT, TC and CP, which is the same thing by the CPT theorem) transforms individually. Indeed, all of these symmetries are violated by the weak interaction. Second, even if the macroscopic laws were completely correct, it wouldn't mean that they need to preserve the symmetries of the microscopic laws. E.g., most of microscopic physics is time-symmetric (except for a small violation by the weak interaction), but the second law of thermodynamics (which is universally true for any macroscopic system just by means of logic and statistics) tells you that entropy has to increase with time. We can say that the huge number of particles breaks the microscopic time-symmetry. Now, the heat equation essentially captures the dynamics of this time asymmetry of the second law.
It tells you that temperatures eventually even out, and that this is an irreversible process that increases entropy.

Thank you for the answer! And why, in your example, does the huge number of particles break the microscopic time-symmetry? Why don't macroscopic effects preserve the microscopic CPT invariance of quantum field theory? – Boy Simone Dec 2 '10 at 12:39

@Boy: that has to do with statistical mechanics. You should really ask this as a separate question because the answer is not completely simple. But in short: any given macroscopic state (given e.g. by energy and pressure) of the system can be realized by many microscopic states. Now your answer boils down to basic questions in probability theory: the more microscopic states there are, the more likely the resulting macroscopic state is. So the system is more likely to move from the less probable state to the more probable state and not the other way. – Marek Dec 2 '10 at 12:46
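The one-way behavior discussed in these answers is easy to see numerically. A minimal sketch (an editorial illustration, not from the thread): evolve the 1-D heat equation forward with an explicit finite-difference step and watch a sharp temperature spike smooth out; running the same stencil with a negative time step, the formal "time-reversed" equation, blows up instead of reconstructing the spike.

```python
import numpy as np

# Explicit FTCS step for dT/dt = d^2T/dx^2 on a periodic grid.
def step(T, dt, dx):
    return T + dt / dx**2 * (np.roll(T, 1) - 2 * T + np.roll(T, -1))

N, dx, dt = 100, 1.0, 0.2           # dt/dx^2 = 0.2 is within the 0.5 stability bound
T = np.zeros(N); T[N // 2] = 100.0  # initial hot spike

for _ in range(500):
    T = step(T, dt, dx)
print("forward:  max T =", round(T.max(), 3))  # spike has spread out and decayed

for _ in range(500):
    T = step(T, -dt, dx)            # naive "time reversal"
print("backward: max T =", T.max())            # grows without bound (unstable)
```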
Fundamental Research Funding

Michael Nielsen, who's so smart it's like he's posting from tomorrow, offers a couple of provocative questions about the perception of a crisis in funding for basic science:

First, how much funding is enough for fundamental research? What criterion should be used to decide how much money is the right amount to spend on fundamental research?

Second, the human race spent a lot lot more on fundamental research in the second half of the twentieth century than it did in the first. It's hard to get a good handle on exactly how much, in part because it depends on what you mean by fundamental research. At a guess, I'd say at least 1000 times as much was spent in the second half of the twentieth century. Did we learn 1000 times as much? In fact, did we learn as much, even without a multiplier?

Well, at least there's nothing controversial there…

These are excellent questions, but they're also uncomfortable questions. They're not unaskable, but it's almost unsportsmanlike to ask them directly of most scientists. They're questions that really need to be confronted, though, and I think a good deal of the image problems that science in general has at the moment can be traced to a failure to grapple more directly with issues of funding and the justification of funding.

Taking these in reverse, I think it's hard to quantify the amount of "learning" that went on in the 20th Century– are you talking only about fundamental discoveries (the Standard Model, the accelerating universe), or do experimental realizations of old ideas (Bose-Einstein Condensation) count? In the latter half of the 20th century, we probably worked out the quantum details of 1000 times as many physical systems as in the first half, but that sort of thing feels a little like stamp collecting– adding one new element to a mixture and then re-measuring the band structure of the resulting solid doesn't really seem to be on the same level as, say, the Schrödinger equation, but I'm at a loss for how to quantify the difference.

If we think only about really fundamental stuff, I think it's interesting to look at the distribution of funding, which has become much more centralized out of necessity. It'd be hard to argue that the increase in fundamental knowledge has kept pace with the increase in funding, but to some degree, complaining about that is a little like a first-year grad student grumbling that it was much easier to get your name on an equation back in 1930. All the easy problems have been done, meaning that you have to sink a lot of resources into research these days to make even incremental progress. Experiments have gotten more expensive, and as a result, the number of places they can be done has gotten smaller– when the LHC finally comes on line, it will pretty much be the only game in town. And that sort of necessarily limits the total amount of stuff you can hope to discover– if there's only one facility in the world at which you can do some experiment, you're not going to be able to make as many discoveries as you could with 1000 different facilities doing the same sorts of experiments. This isn't restricted to high-energy physics, either. Somebody at DAMOP this year remarked that we seem to be asymptotically approaching particle physics– the number of lasers and gadgets involved in a typical BEC experiment is increasing every year, the author lists are getting longer, and fewer groups are able to really compete.
The more important question, though, is should we really expect or demand that learning be proportional to funding? And what, exactly, do we as a society expect to get out of fundamental research? For years, the argument has been based on technology– that fundamental research is necessary to understand how to build the technologies of the future, and put a flying car in every garage. This has worked well for a long time, and it's still true in a lot of fields, but I think it's starting to break down in the really big-ticket areas. You can make a decent case that, say, a major neutron diffraction facility will provide materials science information that will allow better understanding of high-temperature superconductors, and make life better for everyone. It's a little harder to make that case for the Higgs boson, and you're sort of left with the Tang and Velcro argument– that working on making the next generation of whopping huge accelerators will lead to spin-off technologies that benefit large numbers of people. It's not clear to me that this is a winning argument– we've gotten some nice things out of CERN, the Web among them, but I don't know that the return on investment really justifies the expense.

And this is where the image problem comes in– I think science suffers in the popular imagination in part because people see vast sums of money being spent for minimal progress on really esoteric topics, and they start to ask whether it's really worth it. And the disinclination of most scientists to really address the question doesn't help.

Of course, it's not like I have a sure-fire argument. Like most scientists, I think that research is inherently worth funding– it's practically axiomatic. Science is, at a fundamental level, what sets us apart from other animals. We don't just accept the world around us as inscrutable and unchangeable, we poke at it until we figure out how it works, and we use that knowledge to our advantage. No matter what poets and musicians say, it's science that makes us human, and that's worth a few bucks to keep going. And if it takes millions or billions of dollars, well, we're a wealthy society, and we can afford it.

We really ought to have a better argument than that, though. As for the appropriate level of funding, I'm not sure I have a concrete number in mind. If we've got half a trillion to piss away on misguided military adventures, though, I think we can throw a few billion to the sciences without demanding anything particular in return.

1. #1 goffredo August 15, 2007
Hi Chad. This is indeed a very interesting thread you started. I will follow it closely. As a physicist I feel I am terribly unprepared to argue in an up-to-date way in favor of science funding. Indeed I've realized that the arguments I traditionally used, and you have mentioned several, are falling on more and more deaf ears. There are instead some serious people – sociologists, historians of science, philosophers of science – that are studying this "post-big-science" period and describe real and great changes in the way science interacts with society and politicians in particular. I am left feeling very "ignorant" and naive. I am curious to read what others will say on your blog. Ciao for now

2. #2 Steinn Sigurdsson August 15, 2007
I think the metric being used is incorrect: it is not "how much" – with problems of inflation adjustment and per capita normalization – it is how many people.
In physics, I certainly don't think there are 1000 times as many researchers working now as there were a century ago (well, maybe within the US itself, but that is an artefact of physics research in the US just having started 100 years ago); worldwide there are maybe 100 times as many people, but some of that is from a larger population and most of the rest is from broader demographic access, both within nations and more nations entering the game. The proportionate effort in physics is up, but only a modest amount. Further, the benefits come precisely from the "stamp collecting" that is done after the breakthroughs, which is why more people are needed, to follow through on the breakthroughs. There may well be 1000 times more bio/med researchers now than a century ago, but I think the pace of progress has also been proportionate, with many breakthroughs and ongoing followups.

3. #3 Uncle Al August 15, 2007
Reappropriate the Head Start annual budget into the NSF. That solves three problems: 1) The national supply of nascent idiots dwindles, 2) physical science funding more than doubles, 3) the morbidly bloated Head Start administrative population can be given toothbrushes and told to scrub down Boulder Dam. (Safety nets to lay on the ground below.)

4. #4 Jonathan Vos Post August 15, 2007
I've been told that Leonardo da Vinci, as a young man, attended a Vatican-sponsored conference on the most important research problems. These included the likes of "what organ of Jesus was pierced by the Roman spear-point?" Late in the conference, a debate broke out on whether rocks in the shapes of shells and fish, found on mountains in Italy, were random, made by God for a good reason relating to Noah but unclear to us, or made by Satan to delude us. da Vinci stood up and said something along the lines of: "Has it occurred to anyone that these are the remains of actual shellfish and bony fish from a time when the ocean level was much higher, or the mountains much shorter, which have gradually turned into stone?" Someone grabbed him and sat him down hard, whispering: "You fool! Not only are you endangering the funding of the right people with such impertinent questions, but you risk trial for heresy!" da Vinci thereafter kept his speculations on orogeny and evolution to himself. I'm not sure that things have changed much in this regard.

5. #5 Drugmonkey August 15, 2007
Another question might be how much money we are wasting on college education in physics which will, for the most part, go entirely unused for the rest of the students' lifetime. …just askin'

6. #6 Jake August 15, 2007
But does counting physicists really get you the right number? Should the tens of thousands of technicians, machinists, and engineers required to build the LHC count as people involved in physics research, even though they aren't researchers themselves?

7. #7 Mike Kozlowski August 15, 2007
Did you just synonymize "fundamental research" and "physics"? I'm in favor of fundamental research — there absolutely should be people doing non-R&D work in biology, where there's just tons of really important stuff to figure out, for instance, but… It sounds to me like physics is basically done. Sure, there's stuff we don't know, but there's no clear road for figuring it out, so everyone's just fucking around doing random experiments that illustrate unimportant things because the big issues are unknowable.
Is there really any reason to suspect that the giant, hyper-expensive colliders are going to help us learn anything genuinely interesting, or is it just more "stamp collecting", and we'd be better off giving the money to biologists and computer scientists who are doing the real fundamental work that'll make the end of the 21st century so different from the beginning?

8. #8 Doug Natelson August 15, 2007
Mike, I disagree very strenuously that physics is "basically done". To pick an example not even from my field, we don't know the basic nature of 94% of the energy content of the universe, and it's far from clear that the answer to this question is "unknowable". From within my own discipline, we still don't understand basic things about the collective responses of electrons – how does high-temperature superconductivity work? It's pretty hard to argue that solving this would be unimportant, from the technological standpoint. You do realize that the computer you're using to read this is based on the results of physics research, some of it comparatively recent, right?

9. #9 CCPhysicist August 15, 2007
Steinn is close to the mark. I included a modified and annotated version of a figure produced by the AIP when I started writing about physics jobs. There is about a factor of 100 (from about 15 to 1500) increase in the number of physics PhDs from the early to late 20th century in the US. But PhD production is not "workers" in fundamental research. In the early century their university research work was funded by industry (GE, for example) and their students went to work in industry. By 1970, research had become an industry due to the government's (short-lived) decision that massive investment in physics would pay off for the US just as it had during WW II. The view that physics is the only fundamental research is highly parochial, however. The vast majority of funding is in life sciences, where the PhD (over)production is spectacular – and a worldwide phenomenon. Those people believe that the genome is pretty fundamental science, and the people funding it agree.

10. #10 John Novak August 15, 2007
Well, as hard as it may be for the scientists to answer the question, it's just about as hard for most everyone else to ask the question without sounding like a philistine. (On the other hand, I have actually said to scientists of your acquaintance, "I have gotten comfortable with my inner philistine– no, you may not have a hundred billion dollars for another supercollider." A billion maybe. Ten, even, but anything approaching one percent of the GNP is a little much to ask, I think….)

The really good question, though, that I rarely see scientists ask, is the one you did ask: what do we as a society expect to get out of it. It's really easy to say, "We get knowledge," but when the facilities are costing billions of dollars, that's hard to justify. Not impossible, just hard. Another easy answer is, "Better technology," the problems with which you rightly point out.

The thing is, not all science is physics. There is plenty of worthwhile basic research in chemistry, biology, medicine, etc., to be done which will be disproportionately beneficial. Hell, even getting rid of high-energy and low-temperature physics, there's still great stuff to be done in physics itself– go make me a broadband negative-index-of-refraction material in the optical band, for instance. (No, seriously. Go do that. I'll even take the prototype in S through Ku band.)
Biology is getting the big research bucks, lately, because it's obvious even to politicians (and, to be fair, has been for about five to ten years) that biotechnology is going to have as radical an effect on the economy and basic quality of life as electronics did in the past.

11. #11 goffredo August 16, 2007
A few thoughts: Good science raises questions and does not only give answers. One could say, somewhat obnoxiously, that science, self-referentially, keeps inventing ways of justifying its own existence, of finding ways of spending more money, of creating new needs. But, on the other ethical and philosophical extreme, the military do the very same. But hey! It happens across the whole cultural spectrum: all groups, cultures, do their best to propagate themselves. Economy works that way! The difficulty is to decide what is ethical and beneficial, and what isn't. "Good" military spending means keeping your country competitive and capable of defending itself. "Bad" spending would mean spending money for heavy tanks when heavy tanks are tactically obsolete. If the people that decide how the money is to be spent are influenced by tank-people then there is a conflict of interest. No tank is perfect, and with unlimited funds a tank-person would wish to go on spending and spending and spending. Science, as a cultural enterprise, does the same thing, and the same risks (conflicts of interest) are at work. There is "good" science and "bad" science.

In my opinion the first thing to do is to avoid thinking that all science is automatically good. Since I find it wise to be realistic, I put that type of mistake on the same level of foolishness as thinking that all military spending is automatically bad. The second is to avoid conflicts of interest. If foul play can happen then it will happen. Good governance should try to make the cross-sections of foul play as small as possible (close channels for it to occur and keep an eye open for spotting new channels opening up). The third is to avoid ideologies that blind or distort views of the natural world and of society. I certainly wouldn't want a government that couldn't neutralize the preconceptions and ideologies of the individuals that make it up.

I feel that one reason there is a mounting skepticism towards science is the realization, by the people, that science is self-referential and that it runs the squalid risks of all social groups. In my opinion, to make distinguos we must go back to philosophy. To try to understand the universe, to discover that there is more out there than what our immediate perceptions and bad habits tell us, is more ethical than spending on weapons for defence, or spending for consumer products.

12. #12 Chad Orzel August 16, 2007
Not intentionally. I talk mostly in terms of physics, because that's what I know about, but "fundamental research" includes a lot of bio and chem and other stuff. I usually remember to include a caveat to that effect, but I was in a hurry yesterday (and today as well), so I got a little sloppy.

13. #13 Grad August 16, 2007
To try and compare the 20th century to the 21st is almost always going to make the 21st look worse. The explosion in almost every field that happened last century isn't about to be repeated– no matter how much money we throw at it. So this isn't the argument you want to use or the metric you want to use when thinking about funding.
In 1 dimension, what is the solution of the Schrödinger equation with the potential $$ V(x) = V_r + i V_i, $$ where $V_r$ and $V_i$ are constants?

1 Answer

The Hamiltonian $$H=T_{\text{kin}}+V_r+iV_i$$ will not be Hermitian, as $$(iV_i)^*=-iV_i.$$ Technically, you can make an ansatz $$\Psi(x,t)=A\int\text{d}k\ \hat\Psi(k)\ \text e^{i(kx-\omega(k)t)},$$ plug it into the differential equation, and find $$\hbar\omega(k)=\frac{\hbar^2 k^2}{2m}+V_r+iV_i,$$ or $$\hbar k=\pm\sqrt{2m(E-iV_i)},$$ where $E$ is some real number/numbers. You also want a boundary condition. (As a vague remark, modelings of complex energies, which necessarily turn a phase like $\text e^{-i\omega t}$ into something like the decaying expression $\text e^{-\omega' t}$, are associated with decay. But again, a particle that vanishes in time like that is probably not what you want to talk about.)

Comment to the answer (v2): It should be stressed in the answer that the wavenumber $k$ is a manifestly real variable, not complex. Therefore the corresponding energies $E_k\equiv \hbar\omega_k$ are not real but complex. – Qmechanic Oct 15 '12 at 13:59

As a comment, I heard that a Schrödinger equation with the potential $V(x) = ix^{3}$ may have ALL the energies real :) – Jose Javier Garcia Oct 15 '12 at 18:30
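The decay the answer alludes to can be checked numerically: for a constant complex potential, the norm of any state evolves as $|\Psi|^2 \propto e^{2V_i t/\hbar}$, so a negative $V_i$ drains probability (an absorbing potential) and a positive one injects it. A minimal split-step sketch with illustrative parameter values (an editorial addition, in units with hbar = m = 1):

```python
import numpy as np

# Norm evolution under H = p^2/2m + Vr + i*Vi via split-step Fourier propagation.
N, L, dt, steps = 256, 40.0, 0.01, 400
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)

Vr, Vi = 1.0, -0.5                       # constant complex potential, Vi < 0
psi = (2 / np.pi)**0.25 * np.exp(-x**2)  # normalized Gaussian wave packet

kin = np.exp(-1j * dt * k**2 / 2)        # unitary kinetic half of the step
pot = np.exp(-1j * dt * (Vr + 1j * Vi))  # non-unitary potential half

norm0 = np.sum(np.abs(psi)**2) * (L / N)
for _ in range(steps):
    psi = np.fft.ifft(kin * np.fft.fft(pot * psi))

norm = np.sum(np.abs(psi)**2) * (L / N)
t = steps * dt
print(norm / norm0, np.exp(2 * Vi * t))  # both ~ e^{2 Vi t} = e^{-4} ~ 0.0183
```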
I am reading a book on quantum field theory, but I have never been trained as a physicist. I found a big gap in language and have trouble understanding what physicists mean by "quantum field". If I understand correctly, after quantizing twice (field and operator), a quantum field should be an operator-valued distribution. Am I right? I would appreciate it if some mathematician who is familiar with physics could kindly explain "quantum field" in terms of mathematics. Thank you in advance.

1 Answer

I am not sure what your math level is, so I'll try to make it as simple as possible. In standard quantum mechanics, we formalize the state of a physical system as a vector in a Hilbert space. A Hilbert space is basically a multidimensional space in which the dimensions need not be discrete (as in 3-d space) but can be continuous. You can intuitively think of any real-valued function f as a vector in Hilbert space, where the value f(x) is the coordinate of the vector f along dimension x. The Schrödinger equation describes the evolution of the "quantum state" (a vector in Hilbert space).

In second quantization you go a further step of abstraction. Now the "quantum field" is a mathematical object whose states belong to Fock space. If it means something to you: technically, the Fock space is (the Hilbert space completion of) the direct sum of the symmetric or antisymmetric tensors in the tensor powers of a single-particle Hilbert space H. The state of the quantum field allows you to calculate the distribution of values for repeated measurements on the system (using additional rules).
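One way to make Fock space concrete is to truncate it: for a single bosonic mode, the occupation-number states |0>, |1>, ..., |N-1> span a finite chunk of Fock space, and the creation and annihilation operators become simple matrices. A minimal sketch (an editorial illustration; the truncation size is an arbitrary choice):

```python
import numpy as np

N = 6  # keep Fock states |0> .. |5>; truncation is an illustrative choice

# Annihilation operator: a|n> = sqrt(n)|n-1>, as an N x N matrix.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T                    # creation operator a^dagger

n_op = adag @ a                      # number operator
print(np.round(np.diag(n_op)))       # -> [0. 1. 2. 3. 4. 5.]

# The canonical commutator [a, a^dagger] = 1 holds everywhere except in the
# last row/column, an artifact of truncating the infinite-dimensional space.
comm = a @ adag - adag @ a
print(np.round(np.diag(comm)))       # -> [1. 1. 1. 1. 1. -5.]
```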
The Nature of Things

Presidential Address given to the British Society for the Philosophy of Science by J.R. Lucas on June 7th, 1993

I stand before you a failed materialist. Like Lucretius, from whom I have borrowed my title, I should have liked to be able to explain the nature of things in terms of ultimate thing-like entities of which everything was made, and in terms of which all phenomena could be explained. I have always been a would-be materialist. I remember, when I was six, telling my brother, who was only two, in the corner of a garden in Guildford, that everything was made of electricity and believing that electrons were hard knobbly sparks, and later pondering whether position was a sort of quality, and deciding that it was, absolutely considered, but that relative positions, that is to say patterns, as seen in the constellations in the sky, were only in the eye of the beholder. I am still impelled to a very thing-like view of reality, and would like to explain electricity in terms of whirring wheels, and subatomic entities as absolutely indivisible point-particles, each always remaining its individual self, and possessed of all the qualities that really signified. I find it painful to be dragged into the twentieth century, and though my rational self is forced to acknowledge that things aren't what they used to be, I find it hard to come to terms with their not being what I instinctively feel they have got to be, and am still liable to scream that the world-view we are being forced to adopt cannot be true, and that somehow it must be fitted back into the Procrustean bed of our inherited prejudices.

But I am not going to ask you to listen to my screams. Rather, I shall share with you my attempts to overcome them, and work out new categories for thinking about the nature of the world, and a correspondingly less rigid paradigm of possible explanation. It has taken me in two different directions. On the one hand reality is much softer and squodgier than I used to think. It is not only that the knobbliness is less impenetrable, as quantum tunnelling takes over, nor that it is fuzzier, without the sharp outlines of yestercentury, but, more difficult to comprehend, the very concept of haecceitas, as Duns Scotus called it, this-i-ness, or transcendental individuality, in Michael Redhead's terminology, 1 has disappeared from the categories of ultimate reality. On the other hand, reason has become much wider and more hospitable to new insights from various disciplines. The two changes are connected. Our concept of a thing, in order to be more truly a thing, has been developed into that of a substance, and substances have come to need to have more and more perfections, and we have therefore come to identify as substances more sophisticated combinations of more recherché features; and with this change in what we regard as a thing has come also a corresponding change in our canons of explanation. It will be my chief aim this evening to show how our changed apprehension of reality has opened up new vistas of rationality, and how the wider concept of rationality we have been led to adopt has in turn altered our view of what constitute real substances.

The corpuscularian philosophy posited the ultimate constituents of the universe as qualitatively identical but numerically distinct, possessing only the properties of spatial position and its time-derivatives, and developing according to some deterministic law. In the beginning, on this view, God created atoms and the void.
The atoms, or corpuscles, or point-particles, were thing-like entities persisting over time, each for ever distinct from every other one, each always remaining the same, each capable of changing its position in space while retaining its individual identity. Spatial position constituted the changeable parameter which explained change without altering the corpuscle's own identity. Space was the realm of mostly unactualised possibility, of changes that might, but mostly did not, occur. But space also performed the logical function of both distinguishing between qualitatively identical corpuscles---two thing-like entities cannot be in the same place at the same time---and providing, in spatio-temporal continuity, a criterion of identity over time. It was thus possible for each point-particle to be like every other one, but to be a different particular individual, and this particularity affected the corpuscularians' ideal of explanation, articulated by Laplace, and much refined in our own time by Hempel. Scientists seek generality, and eschew the contingent and the coincidental. In the Hempelian paradigm, the focus of interest is on the covering law, which is general, and not on the initial conditions, which just happen to be what they are, and can only themselves be explained by the way earlier states of the universe happened to be. Boundary conditions, being the particular positions and velocities of particular point-particles, are too particular to constitute the sort of causes that scientists, in their search for generality, are willing to take seriously as genuinely explanatory. The corpuscularian philosophy had many merits. It reflected our experience of things: stable objects that persist over time, clearly individuated by exclusive space-occupancy, capable of change without losing their identity. As a metaphysical system it had great economy and power. All macroscopic things, all events and phenomena, were to be explained in terms of the positions and movements of these ultimate entities. There was a clear ontology, a clear canon of explanation, and a clear demarcation between physically necessary laws and purely contingent initial conditions. Of course, there were also grave demerits. From my own point of view---though I have failed to persuade Robin Le Poidevin of this 2---time is essentially tensed, and it counts against the corpuscularian scheme that it did not account for the direction of time or the uniqueness of the present: more influential in the history of science was the account of space, and the difficulty in formulating a plausible account of how corpuscles could interact with one another, which in due course led us to replace corpuscularian by field theories, as being better able to account for the propagation of causal influence. The vacuum, though adequate for giving things room to exist and move in, was too thin to let them interact with one another, and Voltaire has had to return from London to Paris. But it was not only space that proved too thin to do its job. The ultimate thing-like entities not only failed to accommodate the things of our actual experience, but have turned out not to be thing-like at all. Although the atoms of modern chemistry and physics are moderately thing-like, subatomic entities are not. We do not obtain predictions borne out by observation if we count as different the case of this electron being here and that there and the case of that being here and this there. 
Instead of thinking of the word `electron' being a substantive referring to a substantial, identifiable thing, we do better to think of it as an adjective, with some sense of `negatively charged hereabouts'. We do not feel tempted to distinguish two pictures, one of which is red here and red there, and the other of which is red there and red here; the qualities referred to by adjectives lack haecceitas, this-i-ness, and are real only in so far as they are instantiated. We are forced to deny this-i-ness to electrons and other sub-atomic entities in order to accommodate empirical observations, but it is not just a brute fact, but rather the reflection of the probabilistic structure of quantum mechanics. The loss of determinateness in our ultimate ontology is the concomitant of our abandoning determinism in our basic scheme of explanation. Probabilities attach naturally not to specific singular propositions, but to general propositional functions, or, as Colin Howson puts it, 3 generic events, or, in Donald Gillies' terminology, 4 repeatable situations. Although you can intelligibly ask what the probability is of my dying in the next twelve months, the answer is nearly always only an estimate, extrapolated from the probabilities of Englishmen, males, oldie academics, non-smokers, non-diabetics, and other relevant general types, not dying within the subsequent year. Calculations of probabilities depend on the law of large numbers, assumptions of equiprobability, or Bayes' Theorem, which all ascribe probabilities to propositional functions dealing with general properties rather than to singular propositions asserting ultimate particularities. If we accept the probabilistic view of the world, we can no longer picture the universe as made up of particular thing-like entities that Newton could have asked Adam to name, but as a featured something, whose underlying propensities could be characterized in quantum-mechanical terms, and whose features calculated up to a point, and found to be borne out in experience. The loss of particularity legitimises a paradigm shift in our canon of explanation. In his Presidential Address, Professor Redhead noted the shift from a Nineteenth Century ideal, in which we could deduce the occurrence of events granted a unified theory together with certain boundary conditions, to a Twentieth Century schema, which, although less demanding, in as much as it is not deterministic, is more demanding, in that it seeks to explain the boundary conditions too. 5 Outside physics that has always been the case---and often within physics too. It is one of the chief objections to the Hempelian canon, an objection expressed by many of those present here tonight---Nancy Cartwright, John Worrall, Peter Lipton---that it fails to accommodate the types of explanation scientists actually put forward. 6 It depends on the science concerned what patterns of law-like association, to use a phrase of David Papineau's, count as causes. 7 Different sciences count different patterns of law-like associations as causes because they ask different questions and therefore need to have different answers explaining differently with different becauses. The fact that different sciences ask different questions is of crucial importance. Once we distinguish questions from answers, we can resolve ancient quarrels between different disciplines. 
8 The biologists have long felt threatened by reductionism, and felt that there was something amiss with the claim that it was all in the Schrödinger equation, or as Francis Crick put it, ``the ultimate aim of the modern movement in biology is in fact to explain all biology in terms of physics and chemistry''. 9 But their claim that there was something else, not in the realm understood by physicists, smacked of vitalism, and was rejected out of hand by all practising physicists. Vitalism made out that answers were in principle unavailable, whereas what is really at issue is not a shortage of answers but an abundance of questions. It was not a case of biologists asking straightforward physicists' questions and claiming to get non-physicists' answers, but of their asking non-physicists' questions, to which the physicists' answers were germane, but could not, in the nature of the enquiry, constitute an exhaustive answer to what was being asked. Biologists differ from physicists in what they are interested in---no hint of vitalism in pointing out that the life sciences investigate the phenomenon of life---and in pursuing their enquiries pick on features which are significant according to their canons of interest, not the physicists'. What is at issue is not whether there is some physical causal process of which the physicists know nothing, but whether there are principles of classification outside the purview of physics. It is a question of concepts rather than causality. My favourite, excessively simpliste example is that of the series of bagatelle balls running down through a set of evenly spaced pins and being collected in separate slots at the bottom: we cannot predict into which slot any particular ball will go, but we can say that after a fair number have run down through the pins, the number of balls in each slot will approximate to a Gaussian distribution. There is nothing vitalist about a Gaussian distribution, but it is a probabilistic concept, unknown to Newtonian mechanics. In order to recognise it, we have to move from strict corpuscularian individualism to a set, an ensemble, or a Kollectiv of similar instances, and consider the properties of the whole lot. More professionally, all the insights of thermodynamics depend on not following through the position and momentum of each molecule, but viewing the ensemble in a more coarse-grained way, and considering only the mean momentum of those molecules impinging on a wall, or the mean kinetic energy of all the molecules in the vessel. Equally the chemist and the biologist are not concerned with the life histories of any particular atoms or molecules, and reckon one hydrogen ion as good as another, and one molecule of oxygen absorbed in the lungs of a blackbird as good as another. 10 The chemist is concerned with the reaction as a whole, the biologist with the organism in relation to its environment and other members of its species. A biologist is not interested in the precise accounting for the exact position and momentum of every atom, even if that were feasible. Such a wealth of information would only be noise, drowning the signal he was anxious to discern, namely the activities and functioning of organisms, and their interactions with one another and with their ecological environment. It is the song of Mr Blackbird as he tries to attract the attention of Mrs Blackbird that concerns the ethologist. 
He is not concerned with exactly which oxygen molecules are in the blackbird's lungs or blood stream, but with the notes that he trills as dawn breaks, and their significance for his potential mate. If he were presented with a complete Laplacian picture, his first task would be to try and discern the relevant patterns of interacting carbon, oxygen, hydrogen and nitrogen atoms that constituted continuing organisms, and to pick out the wood from the trees. In this change of focus the precise detail becomes irrelevant. He is not, in Professor Watkins' terminology, a methodological individualist. What interests him is not the life history of particular molecules of oxygen, but the metabolic state of the organism, which will be the same in either case. Different disciplines, because they concentrate on different questions, abstract from irrelevant detail, in order to adduce the information that is relevant to their concerns. In practice scientists have long recognised that in order to see the wood they must often turn their attention away from the trees. But whereas that shift was to be defended simply as a matter of choice on their part, now it is legitimised by our new understanding of the logical status of the boundary conditions we are interested in. If our ultimate theory of everything can talk only in general terms, and cannot assign positions and velocities to particular atoms, it follows that it is no criticism of other theories that they can talk only in general terms too. Hitherto there has been a sense of information being thrown away, information which was there and ultimately important, so that we were, in some profound way, being given less than the whole truth. There was a Laplacian Theory of Everything which was in principle knowable and in principle held the key to all ologies. Every other discipline was only a partial apprehension of ultimate truth, useful perhaps because more accessible for our imperfect minds, but conveying only imperfect information none the less. Just as we rely on journalists to reduce the welter of information about the Balkans or South America to manageable size, so chemists and biologists seemed to select and distil from total truth to tell us things in a form we were capable of taking in. Compared with the high priests of total truth, they were mere popularisers. I may discern Gaussian patterns in long runs of bagatelle balls, but they are patterns only in the eye of an ill-informed beholder: better informed, I should see why each ball went into the slot that it did, and be aware of the occasions when a non-Gaussian distribution emerged. My Gaussian discernment would seem a rough and ready approximation, like describing France as hexagonal, which is fair enough for some purposes, but falls far short of being fully true. Even though the things we pick on as worthy of note and in need of explanation---the shape of the Gaussian curve, the significance of bird-song---lie outside the compass of the limited concepts and explanation of a Theory of Everything, the possession of perfect information trumps curiosity.
We cannot claim that ultimately there are trees which exist in their own right, whereas the woods are only convenient shorthand indications of there being trees there: we cannot trump the different, admittedly partial, explanations put forward by different disciplines by a paradigm one that claims to be complete, nor can we suppose that there is some bottom line that establishes a final reckoning to which all other explanations must be held accountable. All natural sciences concern themselves with general features of the universe, and there is no reason to discountenance any science because it selects some general features rather than others. Questions about boundary conditions cannot, then, be faulted on grounds of their being general, and not ultimately particular.

The answers, too, are to be assessed differently, once the mirage of a complete Laplacian explanation is dispelled. Not only is it irrelevant to the ethologist's purposes which particular mate the blackbird seeks to attract, or which oxygen molecules are in the blackbird's lungs or blood stream, it is, in its precise detail, causally irrelevant too. The blackbird's song is not addressed to a particular Mrs Blackbird in all her individuality, but to potential Mrs Blackbirds in general, and if one mate proves hard to win, another will do. Much more so at lower levels of existence: if one worm escapes the early bird, another will be equally succulent; if one molecule of oxygen is not absorbed by his haemoglobin, another will be. Explanations are inherently universalisable, and if the physical universe is one of qualitatively identical features that cannot, even in principle, be numerically distinguished, then the explanations offered by other disciplines are ones that cannot, even in principle, be improved upon by a fuller physical explanation. Indistinguishability and indeterminism imply a looseness of fit on the part of physical explanation which takes away its Procrustean character. The new world-view makes room for there being different sciences which are autonomous without invoking any mysterious causal powers beyond the reach of physical investigation.

The autonomy I am arguing for is, in the words of Beckner, 11 theory autonomy rather than process autonomy: we use new concepts to ask new questions, rather than find that old questions have suddenly acquired surprising new answers. But this distinction between questions and answers offers a solution to the problem of reductionism only if there is some further fundamental difference between the concepts involved in framing the questions asked by different sciences. Otherwise, they might still be vulnerable to a take-over bid on the part of physics. A reductionist programme whereby every concept of chemistry and biology is exhaustively defined in terms of physical concepts alone might still be mounted. Thus far I have only cited examples---Gaussian curves, temperature, blackbird song---where reductive analysis seems out of the question. But the unavailability of reductive analyses is much wider than that. Tony Dale bowled me out recently, when I had overlooked the fact that the concept of a finite number cannot be expressed in first-order logic. The very concept of a set, and more generally of a relational structure, is a holistic one. But rather than multiply examples, let me cite an in-principle argument.
Tarski's theorem shows that the concept of truth cannot be defined within a logistic calculus: roughly, although we can teach a computer many tricks, we cannot program it to use the term `true' in just the way we do. It therefore seems reasonable to hold that other concepts, too, are irreducible, and that the failure of the reductionist programme is due not to some mysterious forms of causality but to our endless capacity to form new concepts and in terms of them to ask new questions and seek new types of explanation.

The new world-view we are being forced to adopt not only permits us to concern ourselves, qua scientists, with general features, but impels us to do so. Even the corpuscularian philosophy gave somewhat short shrift to the things of ordinary experience. Most configurations of atoms were transitory. Even rocks were subject to the attrition of time, and the mountains, far from being eternal, were being eroded by the wind and the rain. Processes could in principle withstand the ravages of time, and at first glance Liouville's theorem seemed to suggest that point-particles whose initial conditions were close to one another would end up close still. But although, indeed, there was a one-one correlation between initial and final conditions, the correlation was much less stable than at first sight appeared. True, the volume in phase-space remains constant, but its shape does not, and may become spreadeagled with the elapse of time, so that the very smallest difference in initial conditions can lead to a wide difference in outcome. Poincaré pointed out the logic of the roulette wheel, 12 and we now regularly hear of the damage done by irresponsible butterflies on the other side of the universe destroying the reliability of met office forecasts. No longer can Newton number the ultimate things among the κυμάτων ἀνήριθμον γέλασμα, the innumerable laughter of quantum waves, but, if he wants atoms, must raise his sights to those stable solutions of the Schrödinger time-independent equation which, one way or another, will be realised.

And although some solid objects are likely to remain substantially the same over time, most collocations of atoms are evanescent. If we seek stability amid the flux of molecular movement, we are likely to find it at a higher level of generality, where chaos theory can indicate the recurrence of relatively stable patterns. In the Heraclitean swirl eddies may last long enough to be identified. Flames are processes, but possess the thingly property of subsisting and sometimes of being identified and individuated. So if we want permanence, we shall be led to focus on certain general features, certain types of boundary condition, which can persist over reasonable stretches of time. Just as chemists look to the time-independent Schrödinger equation to show them what stable atomic configurations there are, and would like to be able to work out in detail what molecules are stable too, so at a much higher level, biologists take note of organisms and species of organisms, which are the basic things of their discipline. Organisms are homeostatic, self-perpetuating and self-replicating. They are processes, like flames, but longer lasting and with greater adaptability in the face of adventitious change. They react to adverse changes in the environment so as to keep some variables the same, and it is these variables which together constitute the same organism that survives over time in the face of alterations in the environment.
There is thus an essential difference between organism and environment which differentiates all the life sciences from the physical ones. Thinghood has become modal as well as diachronic. It is not enough to continue to be the same over time: organisms need to be able to change in some respects in order to remain the same in other, more important, respects. Even if I were to alter the environment by watering the garden, moving the bird table, replacing the coconut with peanuts, the flora and fauna, though responding in various ways to the altered situation, would mostly persist as the self-same organisms as if I had made no alterations. This invariance under a limited range of altered circumstances is more like the invariance of operation of natural laws than the continuance of atomic matter, but goes further; laws of nature would operate even if initial conditions were different, but do not characteristically alter their mode of operation so as to restore some antecedent condition, whereas biological organisms typically do, provided the alteration of initial conditions is not too drastic.

Homeostasis is a familiar concept in science---but logically a treacherous one. A homeostatic system tends to maintain the same state, and sameness can easily shift without our noticing it. The simple negative feedback of a flame or an eddy or a thunderstorm results in the process not being interrupted by every adventitious alteration of circumstance, but the persistence is short-lived none the less. Living organisms last longer, and are better able to withstand the attrition of time, because they react to counter the effect of a wider variety of circumstances. The requirement of persistence alters what we count as the substance that persists, and per contra, as the concept of substance develops, so also does our idea of what counts as survival, and more generally of what goals the substance seeks to secure and maintain. We begin to recognise as important explanatory schemata not only the survival of the organism, but the survival of the species, and now, even, the survival of the biosphere. And we begin to see not only the individual's maximising its own advantage as a rational goal, but the value of co-operative action, if we are to escape from the Prisoners' Dilemma and not be driven by individual selfishness into collective sub-optimality. Beyond that, I find it difficult to peer, but still hope dimly to discern the lineaments of what, if I may borrow a suggestive phrase from Nicholas Maxwell, 13 we might describe as an aim-oriented rationality.

The concept of homeostasis is borrowed from control engineering. It leads on naturally into information theory, and information theory provides the key concepts for understanding genetics. As self-perpetuation gives rise to self-replication, there is a greater need for the exact specification of the self, and the chromosome needs to be understood not only biochemically as a complicated molecule of DNA, but as a genetic code specifying what the new organism is to be like. Once again, the change of emphasis from the particular physical configuration to the general boundary condition, and the looseness of fit between the probabilistic explanations of the underlying physics and the quite different explanations of the emergent discipline, allow us to accommodate the new insights without falling into obscurantist obfuscation. 14 Homeostasis also implies sensitivity.
If an organism is to be independent of its environment, it must respond to it so as to counteract the changes which the changes in the environment would otherwise bring about within the organism itself: if I am to maintain a constant body temperature, I must sweat when it is hot outside and shiver when it is cold. Even plants must respond to light and to the earth's gravitational field. The greater the independence and the more marked the distinction between the self and the non-self, the greater the awareness the self needs to have of the non-self, and the more it needs to register, so as to be able to offset, untoward changes in the world around it. We are still in the dark as to what exactly consciousness is or how it evolved, but we can see in outline why it is needed. A windowless monad cannot survive the changes and chances of this fleeting life---sensitivity to clouds on the horizon no bigger than a man's hand is the price of not being destroyed by unanticipated storms.

My interest lies in the end of this line of development. We can give a general characterization of what it is for a system to be able to represent within itself some other system, and so can think of organisms in terms not of biochemistry or evolutionary biology but of information theory and formal logic. And from this point of view we can consider not only consciousness but self-consciousness, and a system that can represent within itself not just some other system but itself as well. There is a whole series of self-reflexive arguments. Popper, a former President of our Society, has devoted much energy to arguing from them to an open universe; in particular, he argues from the impossibility of self-prediction. MacKay argues similarly---other people may predict what I am going to do, but I cannot. 15 Many people, Haldane, Joseph, Malcolm, Mascall, Popper, Price, Wick and others, have been concerned about rationality, and have argued that if determinism or materialism were true, we could not be rationally convinced of it. 16 Reductive metaphysics, which reduces rationality to something else---the movement of physical particles, for example---cannot leave room for the rational arguments which alone could establish its truth.

I myself found these arguments intriguing, and, indeed, compelling, but extraordinarily difficult to formulate in a cast-iron way. Eventually I came up with an argument based on Gödel's theorem, which is indeed a version of these arguments, and is intended to show in one swoop the failure of any reductionist programme as regards reason. I have received much stick for using Gödel's theorem to show that the mind is not a Turing machine, but I am quite impenitent on that score, and believe that the argument goes much further, and shows not only the impossibility of reducing reason to the mere following of rules, but the essential creativity of reason. We can never formalise reason completely or tie it down to any set of canonical forms, for we can always step outside and view all that has been thus far settled from a fresh standpoint. In particular we can find fresh features that seem significant, and seek fresh sorts of explanation of them. It does, I believe, establish the essential openness of the universe, granted only that there is at least one rational agent.
If there be rational agents, since we are rational agents, it follows that the course of events in the universe cannot be reduced to a system of things evolving according to a determinate algorithm, but that there are always new opportunities and further possible exercises of rationality.

The interplay between things and explanations is illuminating. Instead of starting with things, we are able to identify things only at higher levels of organization, and the higher we go the more thingly properties we find. Atoms have stability (usually), but are qualitatively identical with many others. Organisms have more individuality, and are less commonly clones, but still view their environment, if not in terms of chemical similarity, nevertheless in terms of fungibility, readily replacing one food supply by another. Nor is it only the environment that organisms regard fungibly: although some birds are faithfully monogamous, many are not, and if one Mrs Blackbird fails to respond to the musical blandishments of her would-be mate, another will serve his reproductive purposes just as well. Human love likewise is not uniformly faithful to the individual ideal, but with human beings we can see this as a derogation from humanity, and can construct a coherent concept of unique individuality, according to which this person is irreducibly himself, and essentially different from anybody else. 17 Our idea of thinghood leads us from the utterly simple and essentially similar atoms of the corpuscularians to infinitely complex and unique persons, each necessarily different from every other.

The different ideals of thinghood support different paradigms of explanation. Since different sorts of feature characterize things at different levels, and the features that characterize at the higher levels cannot be completely defined in terms of those that play a part in lower-level explanations, the higher-level explanations cannot be reduced to lower-level ones. As we have seen, a Gaussian curve cannot be defined in terms of a Laplacian explanation, for it essentially involves the notion of an ensemble or Kollektiv. Higher-level systems are not derivable from some fundamental system, but are, instead, autonomous. We cannot predict the exact position or velocity of a sub-atomic entity, but by means of the time-independent Schrödinger equation we can say what properties a hydrogen atom would have if it existed, and we can have good reason for supposing that many such atoms will exist, since they are stable configurations of quantum-mechanical systems. The explanations sought by a chemist are in terms of energy levels and the valency bonds they generate: those sought by the biologist are in terms of the maintenance of life and the continuation of the species. And as these explanations differ, so also do the things they are explanations about. Explanations influence what is to count as a thing, and ideas of what it truly is to be a thing influence what questions we ask, and what explanations we seek to discover. 18

We can see this, if we like, as a form of emergent evolutionary development: new levels of being evolve from lower ones---chemical elements from the flux after the Big Bang, then molecules, organisms, consciousness, and self-consciousness, in the fullness of time; but we can also see it in terms of a hierarchy of Platonic forms and explanations, each going beyond the limits of its predecessors, and at the higher levels reaching out to ever new kinds of creative rationality.

To summarise, then.
The new scientific world-view differs from traditional corpuscularianism in not postulating some ultimate thing-like entities whose motions determine completely the state of the world not only at that time but at all subsequent ones too. Instead of there being particular point-particles, there are only general features, and instead of a rigid determinist law, there are only probabilities, which are, indeed, enough to enable us to make reliable predictions about many aspects of the world, but do not foreclose the possibility of other types of explanation being the best available. Other types of explanation are answers to other types of questions, and it is because we ask different questions that the different sciences are different. These different questions pick on different general features, often different types of boundary condition; and once we acknowledge that there is no metaphysical reason to reduce the generic characterization of boundary conditions typical of other sciences to the paradigm physical terms of Laplacian corpuscularianism, we can accept these other sciences as sciences in their own right, since, metaphysics apart, we have good reason to resist reductionism as applied to questions rather than answers. The abolition of ultimate things thus opens the way to our acknowledging the autonomy of the various sciences.

At the same time, the notion of a thing leads us to pick out various types of boundary condition as instantiating, to a greater and greater degree, certain characteristic features of being a thing---permanence, stability, ability to survive adventitious alterations in the environment, and the like. As we follow these through, we find a natural hierarchy of the sciences in which we ask questions about more and more complicated entities, possessing more and more thing-like perfections. Things have gone up market. By an almost Hegelian dialectic our notion of a thing becomes transmuted into that of a substance, and in so far as we remain pluralists at all, we move from the minimal qualitatively identical, though numerically distinct, atoms of the corpuscularians to the infinitely complex, though windowed, monads of a latter-day Leibniz.

Whether Lucretius would have been pleased at this outcome of the complex interplay between ontological intimations of existence and rationalist requirements of explicability, I do not know. But he could hardly complain at my taking this as my theme, here at an address to the British Society for the Philosophy of Science taking place in the London School of Economics, whose motto is taken from Virgil's description of him, and also expresses the common sentiment of all our members:

Felix qui potuit rerum cognoscere causas

Happy he who understands the explanations of things

Notes

1. Michael Redhead, ``A Philosopher Looks at Quantum Field Theory'', in Harvey Brown and Rom Harré, eds., Philosophical Foundations of Quantum Field Theory, Oxford, 1988, p. 10.

2. Robin Le Poidevin, Change, Cause and Contradiction, London, 1991, esp. ch. 8.

3. C. A. Howson and P. Urbach, Scientific Reasoning: the Bayesian Approach, La Salle, Illinois, 1989, p. 19.

4. D. A. Gillies, Objective Probability, Cambridge, 1973, esp. ch. 5.

5. Reprinted in S. French and H. Kamminga, eds., Correspondence, Invariance and Heuristics, Kluwer Academic Publishers, Holland, 1993, p. 329.

6. Nancy Cartwright, How the Laws of Physics Lie, Oxford, 1983, ch. 2, esp. pp. 44-46;
Peter Lipton, Inference to the Best Explanation, London, 1991, esp. ch. 3; John Worrall, ``The Value of a Fixed Methodology'', British Journal for the Philosophy of Science, 39, 1988.

7. David Papineau, British Journal for the Philosophy of Science, 47, 1991, p. 399.

8. I owe this point to H. C. Longuet-Higgins, The Nature of Mind, Edinburgh, 1972, ch. 2, pp. 16-21, esp. p. 19; reprinted in H. C. Longuet-Higgins, Mental Processes, Cambridge, Mass., 1987, ch. 2, pp. 13-18, esp. p. 16. I am also particularly indebted to C. F. A. Pantin, The Relations between the Sciences, Cambridge, 1968; and to A. R. Peacocke, God and the New Biology, London, 1986, and Theology for a Scientific Age, Oxford, 1990. Michael Polanyi emphasized the importance of boundary conditions and their relevance to the different sorts of explanation sought by different disciplines. In his ``Tacit Knowing'', Reviews of Modern Physics, October 1962, pp. 257-259, he cites the example of a steam engine, which, although entirely subject to the laws of chemistry and physics, cannot be explained in terms of those disciplines alone, but must be explained in terms of the function it is capable, in view of its construction, of performing. What is interesting about the steam engine is not the laws of chemistry and physics, but the boundary conditions which, in view of those laws, make it capable of transforming heat into mechanical energy; it is the province of engineering science, not physics. The example of the steam engine is illuminating in that no question of vitalism arises. See also Michael Polanyi, ``Life Transcending Physics and Chemistry'', Chemical and Engineering News, August 21, 1967, pp. 54-66; and ``Life's Irreducible Structure'', Science, 160, 1968, pp. 1308-1312.

9. F. H. C. Crick, Of Molecules and Men, University of Washington Press, Seattle and London, 1966, p. 10.

10. That the biologist is primarily concerned with boundary conditions of a special type is pointed out by Bernd-Olaf Küppers, Information and the Origin of Life, MIT Press, Cambridge, Mass., 1990, p. 163.

11. Compare the distinction drawn by M. Beckner between theory autonomy and process autonomy in his ``Reduction, Hierarchies and Organicism'', in F. J. Ayala and T. Dobzhansky, eds., Studies in the Philosophy of Biology: Reduction and Related Problems, London, 1974, p. 170; cited by A. R. Peacocke, God and the New Biology, London, 1986, p. 9.

12. Henri Poincaré, Science and Method, tr. F. Maitland, London, 1914, p. 68.

13. Nicholas Maxwell, From Knowledge to Wisdom, Oxford, 1984, esp. ch. 4.

15. D. M. MacKay, ``On the Logical Indeterminacy of a Free Choice'', Mind, LXIX, 1960, pp. 31-40.

16. See K. R. Popper, The Open Universe, ed. W. W. Bartley, III, London, 1982, ch. III, §§ 23, 24. Popper traces the argument back to Descartes and St Augustine. A further list is given in J. R. Lucas, The Freedom of the Will, Oxford, 1970, p. 174. Further arguments and fuller references may be found in Behavioral and Brain Sciences, 13, 4, 1990.

17. I argue this in my ``A Mind of One's Own'', Philosophy, October 1993.

18. Compare A. R. Peacocke, Theology for a Scientific Age, Oxford, 1990, p. 41: Because of widely pervasive reductionist presuppositions, there has been a tendency to regard the level of atoms and molecules as alone `real'. However, there are good grounds for not affirming any special priority to the physical and chemical levels of description and for believing that what is real is what the various levels of description actually refer to.
There is no sense in which subatomic particles are to be graded as `more real' than, say, a bacterial cell, a human person or a social fact. Each level has to be regarded as a slice through the totality of reality, in the sense that we have to take account of its mode of operation at that level.
De Donder-Weyl Theory and a Hypercomplex Extension of Quantum Mechanics to Field Theory

Igor V. Kanatchikov
Laboratory of Analytical Mechanics and Field Theory
Institute of Fundamental Technological Research
Polish Academy of Sciences
Świętokrzyska 21, Warszawa PL-00-049, Poland
e-mail:

(Submitted May 1998 — Accepted September 1998)

to appear in Rep. Math. Phys. vol. 42 (1998)

hep-th/xxxxxxx

A quantization of field theory based on the De Donder-Weyl (DW) covariant Hamiltonian formulation is discussed. A hypercomplex extension of quantum mechanics, in which the space-time Clifford algebra replaces that of the complex numbers, appears as a result of quantization of Poisson brackets of differential forms put forward for the DW formulation earlier. The proposed covariant hypercomplex Schrödinger equation is shown to lead in the classical limit to the DW Hamilton-Jacobi equation and to obey the Ehrenfest principle in the sense that the DW canonical field equations are satisfied for the expectation values of properly chosen operators.

1. Introduction

It is commonly believed in theoretical physics that a generalization of the Hamiltonian formalism to field theory requires a distinction between the space and time variables and implies the treatment of fields as infinite dimensional mechanical systems. However, another approach is possible. It treats the space and time coordinates on an equal footing (as analogues of the single time parameter in mechanics) and does not explicitly refer to the idea of a field as a mechanical system evolving in time, treating the field rather as a system varying both in space and in time. The approach has been known as the De Donder-Weyl (DW) theory in the calculus of variations since the thirties [3], although its applications in physics have been rather rare. For recent discussions of mathematical issues of DW theory and further references see [4, 5, 6].

Usually the Hamiltonian formalism serves as a basis for canonical quantization. It is quite natural, therefore, to ask whether the DW formulation, viewed as a field theoretic generalization of the Hamiltonian formalism of mechanics, can lead to a corresponding quantization procedure in field theory. In the present paper we discuss an approach to such a quantization (for earlier discussions see [7, 8]). It is our hope that the study of quantization based on the DW theory can contribute to our understanding of the fundamental issues of quantum field theory and provide us with a new framework of quantization which could be useful in situations where the applicability of the conventional canonical quantization in field theory is in doubt. Note also that the manifest covariance of the approach can make it especially appealing in the context of quantization of gravity and extended objects.

2. De Donder-Weyl theory: a reminder

Let us recall the essence of the DW formulation. Given a Lagrangian density $L = L(y^a, \partial_\mu y^a, x^\mu)$, where $y^a$ are field variables, $\partial_\mu y^a$ denote their space-time derivatives, and $x^\mu$, $\mu = 1, \ldots, n$, are space-time coordinates, we can define the new set of Hamiltonian-like variables

$$p^\mu_a := \frac{\partial L}{\partial(\partial_\mu y^a)},$$

called polymomenta, and

$$H := \partial_\mu y^a\, p^\mu_a - L,$$

called the DW Hamiltonian function, which allow us to write the Euler-Lagrange field equations in an appealing, manifestly covariant first-order form

$$\partial_\mu y^a = \frac{\partial H}{\partial p^\mu_a}, \qquad \partial_\mu p^\mu_a = -\frac{\partial H}{\partial y^a}, \qquad (2.1)$$

referred to as the DW Hamiltonian field equations; a quick symbolic check of (2.1) against the usual second-order form is sketched below.
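To make (2.1) concrete, here is a minimal symbolic check (my own illustration, not part of the original text) that for a single scalar field the two first-order DW equations combine into the familiar second-order Klein-Gordon equation. The conventions are assumptions of the sketch: 1+1 dimensions, metric signature (+, -), and the potential V = m^2 y^2 / 2, anticipating the example of the next paragraph.

import sympy as sp

t, x = sp.symbols('t x')
m = sp.symbols('m', positive=True)
y = sp.Function('y')(t, x)

# DW data for L = (1/2)(y_t^2 - y_x^2) - V(y) in 1+1 dimensions,
# signature (+,-); V = m^2 y^2 / 2 is an illustrative choice.
V = m**2 * y**2 / 2
p_t = sp.diff(y, t)     # p^0 = dL/d(y_t) =  y_t
p_x = -sp.diff(y, x)    # p^1 = dL/d(y_x) = -y_x

# The first DW equation, d_mu y = dH/dp_mu, simply returns the velocities;
# the second, d_mu p^mu = -dH/dy, should be the Klein-Gordon equation.
dw_second = sp.diff(p_t, t) + sp.diff(p_x, x) + sp.diff(V, y)
klein_gordon = sp.diff(y, t, 2) - sp.diff(y, x, 2) + m**2 * y
print(sp.simplify(dw_second - klein_gordon))   # prints 0

The printed zero confirms that the covariant first-order system carries the same content as the usual second-order field equation.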
Similar to the Hamiltonian formulation in mechanics, an analogue of the Hamilton-Jacobi (HJ) theory can be developed for the DW Hamiltonian field equations. It is formulated in terms of HJ functions $S^\mu = S^\mu(y^a, x^\nu)$ on the field configuration space, which fulfill the DW HJ equation

$$\partial_\mu S^\mu + H\left(y^a,\ p^\mu_a = \frac{\partial S^\mu}{\partial y^a},\ x^\nu\right) = 0. \qquad (2.2)$$

The quest of a formulation of a quantum field theory which in the classical limit would give rise to the DW HJ equation has been one of the motivations of the present study.

Let us consider an example of interacting scalar fields $y^a$ described by the Lagrangian density

$$L = \tfrac{1}{2}\,\partial_\mu y^a\,\partial^\mu y_a - V(y). \qquad (2.3)$$

Then the polymomenta and the DW Hamiltonian function are given by

$$p^\mu_a = \partial^\mu y_a, \qquad H = \tfrac{1}{2}\,p^\mu_a\,p^a_\mu + V(y), \qquad (2.4)$$

the DW Hamiltonian field equations take the form

$$\partial_\mu y^a = p^a_\mu, \qquad \partial_\mu p^\mu_a = -\frac{\partial V}{\partial y^a}, \qquad (2.5)$$

and the DW HJ equation reads

$$\partial_\mu S^\mu + \tfrac{1}{2}\,\frac{\partial S^\mu}{\partial y^a}\,\frac{\partial S_\mu}{\partial y_a} + V(y) = 0. \qquad (2.6)$$

3. Poisson bracket of forms: properties and the equations of motion

To develop an analogue of the canonical quantization procedure we need a Poisson bracket possessing appropriate algebraic properties, a notion of canonically conjugate variables, and a representation of the field equations in terms of the Poisson bracket. In previous papers [6, 7, 9] we have shown that the proper analogue of the Poisson bracket for the DW Hamiltonian formulation can be defined on horizontal differential forms $F = \frac{1}{p!}\,F_{\mu_1 \ldots \mu_p}(y, x)\, dx^{\mu_1} \wedge \cdots \wedge dx^{\mu_p}$, which naturally play the role of dynamical variables. The following notations are used throughout:

$$\omega := dx^1 \wedge \cdots \wedge dx^n, \qquad \omega_\mu := \partial_\mu \lrcorner\, \omega.$$

The sign $\lrcorner$ denotes the inner product of a multivector field with a form, such that e.g. $\partial_\mu \lrcorner\, dx^\nu = \delta^\nu_\mu$. The same symbol $\partial_\mu$ denotes either the partial derivative with respect to the variable $x^\mu$ or a tangent vector, according to the context. For details of the construction of the Poisson bracket and its properties we refer to [6, 9].

For us it is most important here that the bracket defined on forms leads to several generalizations of the Poisson algebra of functions in mechanics, and that it also enables us to represent the equations of motion of dynamical variables in terms of the bracket with the DW Hamiltonian function. In particular, on the class of specific forms, called in [6, 7] Hamiltonian, the bracket determines the structure of the so-called Gerstenhaber algebra, a specific graded generalization of the Poisson algebra. By definition, it is a graded commutative algebra equipped with a graded Lie bracket operation which fulfills the graded Leibniz rule with respect to the graded commutative product in the algebra. The grade of an element of the algebra with respect to the product differs by one from its grade with respect to the bracket operation.

The graded commutative (associative) product on Hamiltonian forms is what we called the co-exterior product, denoted $\bullet$. It is defined as follows (Prof. Z. Oziewicz pointed out to the author that this product was introduced much earlier by Plebanski, see [11]):

$$F \bullet G := *^{-1}(*F \wedge *G),$$

where $*$ denotes the Hodge duality operator acting on horizontal forms and $*^{-1}$ is its inverse. As a consequence, $\deg(F \bullet G) = \deg F + \deg G - n$, and a form of degree $p$ has the grade $(n-p)$ with respect to the co-exterior product. The bracket operation on Hamiltonian forms is graded Lie, with the grade of a $p$-form with respect to the bracket being $(n-1-p)$, so that the bracket of a $p$-form with a $q$-form is a form of degree $(p+q-n+1)$. The bracket also fulfills the graded Leibniz rule with respect to the $\bullet$-product. All these properties characterize the space of Hamiltonian forms as a Gerstenhaber algebra. Hamiltonian forms of non-zero degree are polynomials of the forms $\omega_\mu$ with respect to the $\bullet$-product, with the coefficients being arbitrary functions of the field and space-time variables (cf. eq. (2.4) in [9]).
Note that the variables $p^\mu_a \omega_\mu$ can be viewed as canonically conjugate to the field variables $y^a$, since their Poisson bracket is

$$\{[\, p^\mu_a \omega_\mu,\ y^b \,]\} = \delta^b_a.$$

In fact, owing to the implicit graded canonical symmetry in the theory there are other canonical pairs of forms of various degrees corresponding to the field variables and polymomenta (cf. sect. 4.1). The corresponding canonical brackets are of particular interest from the point of view of the canonical quantization. Note that a bracket of any two Hamiltonian forms can be calculated using the canonical brackets and the graded Leibniz property of the bracket, independently of the construction in our previous papers which uses the notion of the polysymplectic form and the related map from forms to multivector fields. However, it is still not clear how the co-exterior product, the space of Hamiltonian forms, and the canonical brackets could be invented or motivated independently of the construction in [6].

The equations of motion can be written in terms of the Poisson bracket of forms. An analogy with mechanics suggests that they are given by the bracket with the DW Hamiltonian function $H$. However, the degree counting above shows that the bracket with the $0$-form $H$ exists only for Hamiltonian forms $F$ of degree $(n-1)$. For these forms the equations of motion can be written in the form

$$\mathbf{d} \bullet F = \sigma\, \{[\, H,\ F \,]\} + d^h \bullet F, \qquad (3.4)$$

where $\mathbf{d}\bullet$ denotes the operation of the "total co-exterior differential", $d^h\bullet$ is a "horizontal co-exterior differential" acting only on the explicit space-time dependence of the coefficients of a form, and $\sigma = +1$ ($\sigma = -1$) for the Euclidean (Minkowskian) signature of the space-time metric. It is evident that co-exterior differentials identically vanish on forms of degree lower than $(n-1)$, sharing this property with the operation of the bracket with $H$.

Note that the form of the equations of motion in (3.4) is different from that presented in our previous papers, in which the left hand side has been written in terms of the total exterior differential $\mathbf{d}$, acting both on the space-time variables and, through the field variables, on the coefficients of a form. In fact, the action of the two operations on $(n-1)$-forms coincides, so that the essence of the equations of motion in both representations remains the same. However, the use of $\mathbf{d}\bullet$ better conforms with the natural product operation on Hamiltonian forms and also with the fact that the bracket with $H$ exists only for forms of degree $(n-1)$. Note also that the Poisson bracket formulation of the equations of motion can be extended (in a weaker sense) to arbitrary horizontal forms. For this purpose one has to make sense of the bracket with the DW Hamiltonian $n$-form $H\omega$. The result is that the bracket with $H\omega$ corresponds to the total exterior differential of a form [6].

4. Elements of the canonical quantization

4.1. Quantization of the canonical brackets

The problem of quantization of the Gerstenhaber algebra of Hamiltonian forms is by itself, independently of its application to field theory, an interesting mathematical problem, which could be approached by different mathematical techniques of quantization, such as deformation quantization or geometric quantization. However, in this paper we shall follow a more naive approach based on extending the rules of the canonical quantization to the present framework. Let us recall that in quantum mechanics it is sufficient to quantize only a small part of the Poisson algebra given by the canonical brackets. Moreover, it is known to be impossible to quantize the whole Poisson algebra, due to the limits imposed by the Groenewold-van Hove theorem (see e.g. [12]). Therefore, to begin with, let us confine ourselves to an appropriate small subalgebra in the algebra of Hamiltonian forms.
From the properties of the graded Poisson bracket discussed in the previous section it follows that the subspace of $0$-forms and $(n-1)$-forms constitutes a Lie subalgebra in the Gerstenhaber algebra of Hamiltonian forms. Let us quantize the canonical brackets in this subalgebra. The nonvanishing brackets are given by [6]

$$\{[\, p^\mu_a \omega_\mu,\ y^b \,]\} = \delta^b_a, \qquad (4.1a)$$

$$\{[\, p^\mu_a \omega_\mu,\ y^b \omega_\nu \,]\} = \delta^b_a\, \omega_\nu, \qquad (4.1b)$$

$$\{[\, p^\mu_a,\ y^b \omega_\nu \,]\} = \delta^b_a\, \delta^\mu_\nu. \qquad (4.1c)$$

As usual, we associate with Poisson brackets the commutators divided by $i\hbar$ and find the operator realizations of the quantities involved on an appropriate Hilbert space. In the Schrödinger $y$-representation, from quantization of (4.1a) it follows that the operator corresponding to the $(n-1)$-form $p^\mu_a \omega_\mu$ can be represented by the partial derivative with respect to the field variables:

$$\widehat{p^\mu_a \omega_\mu} = -i\hbar\, \frac{\partial}{\partial y^a}. \qquad (4.2)$$

Quantization of the bracket in (4.1b) does not add anything new. However, quantization of (4.1c) is nontrivial. Let us write $\hat p^\mu_a$ in the form $\hat p^\mu_a = -i\hbar\, \hat\omega^\mu \circ \partial/\partial y^a$, where the operator $\hat\omega^\mu$ has to be found. Then, from the commutator corresponding to (4.1c) we obtain

$$[\hat p^\mu_a,\ y^b\, \hat\omega_\nu] = -i\hbar\ \delta^b_a\ \hat\omega^\mu \circ \hat\omega_\nu, \qquad (4.3)$$

where $\circ$ denotes a composition law of operators which implies some multiplication law, not known in advance, of the horizontal operators $\hat\omega^\mu$ and $\hat\omega_\nu$. The right hand side of (4.3) will be equal to $-i\hbar\, \delta^b_a\, \delta^\mu_\nu$, as is required by (4.1c), if the following two conditions are fulfilled:

$$\hat\omega^\mu \circ \hat\omega_\nu = \delta^\mu_\nu, \qquad \hat\omega^\mu \circ \hat\omega_\nu = \hat\omega_\nu \circ \hat\omega^\mu. \qquad (4.4)$$

Hence, the composition law $\circ$ is a symmetric operation; it can be realized, together with (4.4), in terms of the Dirac matrices by taking

$$\hat\omega_\mu = \kappa^{-1}\, \gamma_\mu, \qquad \hat p^\mu_a = -i\hbar\kappa\, \gamma^\mu\, \frac{\partial}{\partial y^a}, \qquad (4.5)$$

with $\circ$ the symmetrized product of matrices scaled by $\kappa^2$, where the quantity $\kappa^{-1}$ of the dimension [length]$^{n-1}$ appears in order to account for the physical dimensions of $\omega_\mu$ and $\gamma_\mu$. Since $\omega_\mu$ is essentially an infinitesimal volume element, the absolute value of $\kappa^{-1}$ can be expected to be very small. As a result, the theory under consideration requires the introduction of a certain analogue of the fundamental length from the elementary requirement of matching of the dimensions. Note that the realization (4.5) in terms of Dirac matrices is not uniquely determined by (4.4). In sect. 5.3 we show that this choice is consistent with the Ehrenfest theorem. Still, an open question to be investigated is whether or not other hypercomplex systems can be useful for the realization of the commutation relations following from quantization of the Poisson brackets of forms.

4.2. DW Hamiltonian operator

In order to quantize the simple field theoretic model given by (2.3) we have to construct the operator corresponding to the DW Hamiltonian in (2.4). Note that we cannot just naively multiply operators: for example, for the operator $\widehat{p^\mu_a \omega_\mu}$ a naive composition of $\hat\omega_\mu$ and $\hat p^\mu_a$ would produce a result proportional to $\gamma_\mu \gamma^\mu = n$, whereas the correct answer consistent with the bracket in (4.1a) is (4.2). To find the operator of $H$ in (2.4), let us quantize its bracket with the canonical variables. From the corresponding commutator, using the representations (4.2) and (4.5) and a commutator with the Laplacian operator $\Delta := \partial^2/\partial y^a \partial y_a$ in the field space, we obtain the DW Hamiltonian operator for the system of interacting scalar fields in the form

$$\hat H = -\tfrac{1}{2}\, \hbar^2 \kappa^2\, \frac{\partial^2}{\partial y^a\, \partial y_a} + V(y). \qquad (4.8)$$

For a free scalar field $V(y) = \tfrac{1}{2} m^2 y^2$, and the above expression is similar to the Hamiltonian of the harmonic oscillator in the field space.
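The closing remark is easy to probe numerically: with $V = \tfrac{1}{2} m^2 y^2$ the operator (4.8) is a harmonic oscillator in the field variable, so its spectrum should be $\hbar\kappa m\,(n + 1/2)$. The finite-difference sketch below is my own illustration (the grid size, box width and the units $\hbar\kappa = m = 1$ are arbitrary choices), not a computation from the paper.

import numpy as np

# Free scalar field: H = -(hbar*kappa)^2/2 * d^2/dy^2 + m^2 y^2 / 2,
# i.e. a harmonic oscillator in field space. Units: hbar*kappa = m = 1.
N, L = 2000, 20.0
y, h = np.linspace(-L, L, N, retstep=True)

# Central-difference Laplacian plus the potential on the diagonal.
lap = (np.diag(np.full(N, -2.0)) +
       np.diag(np.ones(N - 1), 1) +
       np.diag(np.ones(N - 1), -1)) / h**2
H = -0.5 * lap + np.diag(0.5 * y**2)

# Evenly spaced oscillator levels, E_n = n + 1/2 in these units.
print(np.linalg.eigvalsh(H)[:5])   # ~ [0.5, 1.5, 2.5, 3.5, 4.5]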
5. Generalized Schrödinger equation

Our next step is to formulate a dynamical law. An analogue of the Schrödinger equation here has to fulfill the following natural requirements:

• the familiar quantum mechanical Schrödinger equation should be reproduced if the number of space-time dimensions $n = 1$;

• the DW HJ equation should arise in the classical limit;

• the classical field equations in the DW canonical form should be fulfilled for the expectation values of the corresponding operators.

We also imply that such basic principles of quantum theory as the superposition principle and the probabilistic interpretation should be inbuilt in the desired generalization.

An additional hint comes from the bracket form of the equations of motion (3.4) and an analogy with quantum mechanics. They suggest that the sought-for Schrödinger equation has the symbolic form "$i\,\mathbf{d} \sim \hat H$", where $i$ and $\mathbf{d}$ denote appropriate analogues of the imaginary unit and the exterior differentiation respectively. The above considerations have led us to the following generalization of the Schrödinger equation:

$$i\hbar\kappa\, \gamma^\mu \partial_\mu \Psi = \hat H\, \Psi, \qquad (5.1)$$

where $\hat H$ is the operator corresponding to the DW Hamiltonian function, the constant $\kappa$ of the dimension [length]$^{-(n-1)}$ appears again on dimensional grounds, and $\Psi = \Psi(y^a, x^\mu)$ is a wave function over the configuration space of the field and space-time variables.

Equation (5.1) leaves us with two options as to the nature of the wave function $\Psi$. The latter can be either a hypercomplex number

$$\Psi = \psi\, \mathbb{1} + \psi_\mu\, \gamma^\mu + \psi_{\mu\nu}\, \gamma^{\mu\nu} + \ldots\,, \qquad (5.2)$$

where $\gamma^{\mu\nu\ldots} := \gamma^{[\mu} \gamma^{\nu]} \ldots$ denote antisymmetrized products of the Dirac matrices, or a Dirac spinor (the choice of the Dirac spinors is based on the fact that they exist in arbitrary space-time dimensions and signatures), which actually can be understood as an element of a minimal left ideal in the Clifford algebra [10, 14]. The choice in favor of spinors is made in sect. 5.2 on the basis of the consideration of the scalar products.

5.1. Quasiclassical limit and DW HJ equation

Let us show that (5.1) leads to the DW HJ equation in the quasiclassical limit. It is natural to consider the following generalization of the quasi-classical ansatz:

$$\Psi = \exp\!\left( \frac{i}{\hbar}\, \big( S\, \mathbb{1} + \kappa^{-1}\, \gamma_\mu S^\mu \big) \right), \qquad (5.3)$$

where $S$ and $S^\mu$ are functions of both the field and space-time variables. The exponent in (5.3) is understood as a series expansion, so that one has the analogue of the Euler formula

$$e^{\theta\gamma} = \cos\theta\ \mathbb{1} + \sin\theta\ \gamma, \qquad \gamma^2 = -\mathbb{1}, \qquad (5.4)$$

where $\theta$ can be both real and imaginary. Thus the nonvanishing components of the quasiclassical wave function (5.3) are its scalar and vector parts, given by the corresponding cosine and sine terms of the expansion (5.5). A wave function of this form is sufficient to close the system of equations which follows from (5.1). Indeed, in this case (5.1) reduces to the pair of its scalar and vector components, eqs. (5.6) and (5.7), and the remaining equation, which follows from the bivector component, is equivalent to the integrability condition of (5.7) under a natural independence assumption on the HJ functions.

Now, let us substitute (5.5) into (5.6) and (5.7) with the DW Hamiltonian operator given by (4.8), and collect together the terms appearing with the cos and sin functions respectively. Then from (5.6) we obtain eqs. (5.8) and (5.9), and (5.7) yields eqs. (5.10) and (5.11). By contracting (5.10) with the gradients of the HJ functions and using (5.9) we obtain (5.12). Similarly, eq. (5.8) and eq. (5.11) contracted in the same manner yield (5.13). With the aid of (5.12) and (5.13), equation (5.8) can be written in the form (5.14). Obviously, in the classical limit (5.14) reduces to the DW HJ equation (2.6). However, besides (5.14) the quasiclassical ansatz (5.3) leads to two supplementary conditions, (5.12) and (5.13), on the HJ functions. These conditions are just trivial identities at $n = 1$. At $n > 1$ they represent a kind of duality between the field theoretical Hamilton-Jacobi formulation in terms of the functions $S^\mu$ and the mechanical-like Hamilton-Jacobi equation (in the space of field variables) for the eikonal function $S$, with the analogue of the time derivative given by the directional derivative along the gradients $\partial S^\mu / \partial y^a$. On the one hand, a possible speculation could be that this is just another manifestation of a quantum duality between the particle and the field (wave) aspects of a quantum field. On the other hand, the appearance of the supplementary conditions alien to the DW HJ theory can be related to the fact that the ansatz (5.3) does not represent the most general hypercomplex number: instead of $2^n$ components we have in (5.3) only $(n+1)$ independent functions.
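Before moving on, note that the Euler-type identity (5.4) can be checked directly for a concrete hypercomplex unit: any matrix gamma with gamma^2 = -1 obeys exp(theta*gamma) = cos(theta)*1 + sin(theta)*gamma, term by term in the series. Below is a minimal numerical verification of my own; the Dirac representation and the signature (+, -, -, -) are conventions assumed for the sketch.

import numpy as np
from scipy.linalg import expm

# gamma^1 in the Dirac representation; with signature (+,-,-,-)
# it squares to minus the identity, like an imaginary unit.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
zero = np.zeros((2, 2), dtype=complex)
g1 = np.block([[zero, s1], [-s1, zero]])
assert np.allclose(g1 @ g1, -np.eye(4))

# Series identity exp(theta*gamma) = cos(theta)*1 + sin(theta)*gamma.
theta = 0.7
lhs = expm(theta * g1)
rhs = np.cos(theta) * np.eye(4) + np.sin(theta) * g1
print(np.allclose(lhs, rhs))   # True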
The most general ansatz, by contrast, would include in the exponent all the $2^n$ independent components of a hypercomplex number. However, its substitution into (5.1) leads to cumbersome expressions which we have been unable to analyze and, moreover, it is not clear whether the antisymmetric quantities $S^{\mu\nu}$, $S^{\mu\nu\lambda}$, etc. can be interpreted within some generalized (maybe Lepagean?) Hamilton-Jacobi theory for fields. Note that all the above conclusions can be extended to the case when the wave function in (5.1) is a spinor. For this purpose the quasiclassical ansatz for the spinor wave function can be taken in the form of the exponential in (5.3) applied to a constant reference spinor $\psi_0$, which allows us to convert a Clifford number to an element of an ideal of the Clifford algebra, i.e. to a spinor. The same extends to the more general ansatz presented above.

5.2. Scalar products: hypercomplex vs. spinor wave functions

Let us return now to the issue of the probabilistic interpretation of the wave function. Note first that if we restrict ourselves to hypercomplex wave functions with only scalar and vector components, $\Psi = \psi\, \mathbb{1} + \psi_\mu\, \gamma^\mu$, then equations (5.6), (5.7) and their complex conjugates lead to a conservation law (5.16), under the assumption that the wave function sufficiently rapidly decays in the field variables and that the operator of the DW Hamiltonian is Hermitian with respect to the scalar product of functions in $y$-space. From (5.16) it follows that the spatial integral (5.17) over a space-like hypersurface is preserved in time (or, equivalently, does not depend on the variation of the hypersurface) and, therefore, could be viewed as a norm of the hypercomplex wave function. As this norm involves the integration over a space-like hypersurface, it could be useful for the calculation of the expectation values of global observables. However, its significant drawback is that it is not necessarily positive definite, as a consequence of the similarity of (5.17) with the scalar product in the Klein-Gordon theory, which is evident from (5.7).

As a matter of fact, for the purposes of the present theory we need rather a scalar product for the calculation of the expectation values of operators representing local quantities. This scalar product should be scalar (so as not to change the tensor behavior of operators under averaging) and involve only the integration over the field space dimensions (to keep the local character of the quantities under averaging). For the hypercomplex wave function of the type above such a scalar product can be written down, but it is not positive definite in general. In fact, the non-existence of an appropriate scalar product for wave functions taking values in algebras different from the real, complex, quaternion and octonion numbers follows from the natural axioms ensuring the availability of the probabilistic interpretation and general algebraic considerations (see e.g. [13]). Moreover, our attempts to use the just mentioned scalar product in order to obtain an analogue of the Ehrenfest theorem have failed.

To avoid, at least partially, the difficulties above, we assume that the wave function in (5.1) is a Dirac spinor. Indeed, in this case the analogue of the global scalar product (5.17), constructed now from the density $\bar\Psi \gamma^0 \Psi = \Psi^\dagger \Psi$, where $\bar\Psi := \Psi^\dagger \gamma^0$ denotes the Dirac conjugate of $\Psi$, is positive definite. We also can write a scalar product for the averaging of local quantities:

$$\langle \Phi, \Psi \rangle := \int dy\ \bar\Phi\, \Psi. \qquad (5.19)$$

Then the expectation values of operators are calculated according to the formula

$$\langle \hat O \rangle (x) = \int dy\ \bar\Psi\, \hat O\, \Psi. \qquad (5.20)$$

However, since the scalar product (5.19) is not positive definite, the validity of the averaging in (5.20) is questionable.
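The sign trouble with (5.19) can be made concrete. In the Dirac representation (a convention of this sketch, not fixed by the paper) $\gamma^0 = \mathrm{diag}(1, 1, -1, -1)$, so the density $\Psi^\dagger \Psi$ entering the global norm is positive definite, while the Lorentz-scalar density $\bar\Psi \Psi = \Psi^\dagger \gamma^0 \Psi$ entering (5.19) changes sign between upper and lower components. A minimal illustration:

import numpy as np

g0 = np.diag([1.0, 1.0, -1.0, -1.0])   # gamma^0, Dirac representation

def densities(psi):
    """Return (psi^dagger psi, psibar psi) for a 4-spinor psi."""
    return np.vdot(psi, psi).real, (psi.conj() @ g0 @ psi).real

up = np.array([1, 0, 0, 0], dtype=complex)    # upper-component spinor
low = np.array([0, 0, 1, 0], dtype=complex)   # lower-component spinor

print(densities(up))    # (1.0,  1.0)
print(densities(low))   # (1.0, -1.0): the local density is indefinite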
Moreover, from the generalized Schrödinger equation (5.1) and its conjugate we derive that the scalar product in (5.19) is in general space and time dependent, a property which makes it an unsatisfactory analogue of the scalar product of wave functions in quantum mechanics. Nevertheless, we show in what follows that the use of the formula (5.20) for the expectation values allows us to obtain an analogue of the Ehrenfest theorem.

5.3. The Ehrenfest theorem

Let us consider the evolution of the expectation values of operators calculated according to (5.20). Using the generalized Schrödinger equation (5.1), for the evolution of the expectation value of the polymomentum operator we obtain

$$\partial_\mu \langle \hat p^\mu_a \rangle = \left\langle -\frac{\partial \hat H}{\partial y^a} \right\rangle,$$

where the Hermiticity of $\hat H$ with respect to the scalar product of functions in $y$-space is used. Similarly, for the operators corresponding to the field variables one reproduces in the mean the first set of the DW equations. By comparing these results with the DW Hamiltonian equations (2.1) we conclude that the latter are fulfilled in average as a consequence of (i) the generalized Schrödinger equation (5.1), (ii) the rules of quantization leading to the realization (4.5) of the operators, and (iii) the prescription (5.19) for the averaging of the operators representing local dynamical variables. Thus we have arrived at the field theoretic counterpart (within the approach under discussion) of the Ehrenfest theorem known in quantum mechanics. Its validity could be seen as a justification and consistency check of the whole approach of the present paper.

However, the situation is not that perfect, because the norm used for the calculation of the expectation values is neither positive definite nor constant over the space-time. This brings about potential problems with the probabilistic interpretation and points to the need for an improvement of the current formulation or for a better understanding of its physical content. Moreover, the analogue of the Ehrenfest theorem can be obtained only for specially chosen operators, and the principle behind this choice is not clear. Note also that the presented proof of the Ehrenfest theorem is not sensitive to the identification of the $\hat\omega_\mu$-s with the Dirac matrices: in principle, the use of other hypercomplex units (which appear in various first-order relativistic wave equations) in the realization of the operators and in the generalized Schrödinger equation can also be consistent with the Ehrenfest theorem, but we lack an appropriate interpretation of this observation.

6. Conclusion

The De Donder-Weyl formulation provides us with an alternative covariant canonical framework for quantization of field theory. On the classical level it possesses the analogues of the appropriate geometric and algebraic structures, such as the Poisson bracket with the corresponding Lie and Poisson algebraic properties, the notion of canonically conjugate variables, and the Poisson bracket formulation of the equations of motion. Within the DW formulation field theory is treated essentially as a multi-parameter, or "multi-time", generalized Hamiltonian system, with the space-time variables entering on an equal footing as generalizations of the time parameter in mechanics. The configuration space is a finite dimensional bundle of field variables over the space-time, of which the field configurations are the sections, instead of the usual infinite dimensional space of the field configurations on a hypersurface of constant time.
The analogue of the canonical formalism for the DW formulation [6] arises as a graded version of the canonical formalism in mechanics, with the role of dynamical variables played by differential forms. Quantization of the canonical brackets leads us to a hypercomplex extension of the quantum mechanical formalism, with the usual complex quantum mechanics recovered in the limiting case of a one-dimensional "field theory", that is, in mechanics. In higher dimensions the Clifford algebra of the corresponding space-time manifold plays the central role. Namely, in the Schrödinger picture considered here the quantum operators are realized as differential operators with hypercomplex coefficients, and the wave functions take values in the spinor space, which is known to be a minimal left ideal in the Clifford algebra [14].

The generalized Schrödinger equation formulated in sect. 5 can also be viewed as a multi-parameter hypercomplex extension of the quantum mechanical Schrödinger equation: the time derivative operator of the latter, $i\hbar\, \partial_t$, is generalized to the Dirac operator $i\hbar\kappa\, \gamma^\mu \partial_\mu$. This equation, with some reservations, is shown to lead in the classical limit to the field theoretic DW Hamilton-Jacobi equation and to give rise to the analogue of the Ehrenfest theorem for the evolution of the expectation values of operators corresponding to field variables and polymomenta. However, a potential problem with the proposed generalized Schrödinger equation is that the scalar product involved in the proof of the Ehrenfest theorem is not positive definite and not constant over the space-time.

Although we have entertained here the point of view that the space-time Clifford algebras play the central role, some of the results, except the derivation in sect. 5.1 of the DW HJ equation in the quasiclassical limit and the quantization of the canonical brackets in sect. 4.1, seem to hold true if the $\gamma$-s are generating elements of other hypercomplex systems used in the first order relativistic wave equations, such as the Duffin-Kemmer ring. How the corresponding non-Clifford hypercomplex extensions of quantum mechanics can be reconciled with quantization based on the DW theory, whether they follow from a quantization similar to that in sect. 4.1, and which extension (Clifford or non-Clifford) can be suitable in physics are the questions we hope to address in our further research.

Though the proposed quantization scheme reproduces essential formal ingredients of quantum theory, the prospects of its physical applications remain obscure. The obstacle is that the conceptual framework of the present approach is different from the usual one, which makes the translation to the conventional language of quantum field theory a difficult task. A possible link could be established with the functional Schrödinger picture in quantum field theory [15]. On the one hand, the Schrödinger wave functional is a probability amplitude of finding the field in the configuration $y = y(\mathbf{x})$ on the hypersurface of constant time $t$. On the other hand, it is natural to interpret our wave function $\Psi(y, x)$ as a probability amplitude of finding the field value $y$ at the space-time point $x$. Hence, the Schrödinger wave functional could appear as a kind of composition of the amplitudes given by our wave functions taken at all points of space. A more technical discussion of this issue in [8] points to a relation between the two at least in the ultra-local approximation of vanishing wave vectors. However, beyond this unphysical approximation the relation remains conjectural and requires further study.
Note added: In the recent preprint [16] M. Navarro considered an approach to quantization in field theory which is similar to the approach of the present paper.

Acknowledgements. The work has been partially supported by the Stefan Banach International Mathematical Center, to which I express my gratitude. I wish to thank M. Flato, Z. Oziewicz, J. Sławianowski and C. Lopez Lacasta for discussions, J. Stasheff for his useful comments on the author's paper [9], and S. Pukas for his helpful suggestions.
We apply our algorithms to experimental sphere packings to shed light on their structural properties. Finally, inspired by periodic structures in packings and materials, we propose and implement an algorithm for periodic Delaunay triangulations to be integrated into the Computational Geometry Algorithms Library (CGAL), and discuss the implications on persistence for periodic data sets.}, author = {Osang, Georg F}, issn = {2663-337X}, pages = {134}, publisher = {IST Austria}, title = {{Multi-cover persistence and Delaunay mosaics}}, doi = {10.15479/AT:ISTA:9056}, year = {2021}, } @article{9073, abstract = {The sensory and cognitive abilities of the mammalian neocortex are underpinned by intricate columnar and laminar circuits formed from an array of diverse neuronal populations. One approach to determining how interactions between these circuit components give rise to complex behavior is to investigate the rules by which cortical circuits are formed and acquire functionality during development. This review summarizes recent research on the development of the neocortex, from genetic determination in neural stem cells through to the dynamic role that specific neuronal populations play in the earliest circuits of neocortex, and how they contribute to emergent function and cognition. While many of these endeavors take advantage of model systems, consideration will also be given to advances in our understanding of activity in nascent human circuits. Such cross-species perspective is imperative when investigating the mechanisms underlying the dysfunction of early neocortical circuits in neurodevelopmental disorders, so that one can identify targets amenable to therapeutic intervention.}, author = {Hanganu-Opatz, Ileana L. and Butt, Simon J. B. and Hippenmeyer, Simon and De Marco García, Natalia V. and Cardin, Jessica A. and Voytek, Bradley and Muotri, Alysson R.}, issn = {0270-6474}, journal = {The Journal of Neuroscience}, keywords = {General Neuroscience}, number = {5}, pages = {813--822}, publisher = {Society for Neuroscience}, title = {{The logic of developing neocortical circuits in health and disease}}, doi = {10.1523/jneurosci.1655-20.2020}, volume = {41}, year = {2021}, } @unpublished{9082, abstract = {Acquired mutations are sufficiently frequent such that the genome of a single cell offers a record of its history of cell divisions. Among more common somatic genomic alterations are loss of heterozygosity (LOH). Large LOH events are potentially detectable in single cell RNA sequencing (scRNA-seq) datasets as tracts of monoallelic expression for constitutionally heterozygous single nucleotide variants (SNVs) located among contiguous genes. We identified runs of monoallelic expression, consistent with LOH, uniquely distributed throughout the genome in single cell brain cortex transcriptomes of F1 hybrids involving different inbred mouse strains. We then phylogenetically reconstructed single cell lineages and simultaneously identified cell types by corresponding gene expression patterns. Our results are consistent with progenitor cells giving rise to multiple cortical cell types through stereotyped expansion and distinct waves of neurogenesis. Compared to engineered recording systems, LOH events accumulate throughout the genome and across the lifetime of an organism, affording tremendous capacity for encoding lineage information and increasing resolution for later cell divisions. 
This approach can conceivably be computationally incorporated into scRNA-seq analysis and may be useful for organisms where genetic engineering is prohibitive, such as humans.}, author = {Anderson, Donovan J. and Pauler, Florian and McKenna, Aaron and Shendure, Jay and Hippenmeyer, Simon and Horwitz, Marshall S.}, booktitle = {bioRxiv}, publisher = {Cold Spring Harbor Laboratory}, title = {{Simultaneous identification of brain cell type and lineage via single cell RNA sequencing}}, doi = {10.1101/2020.12.31.425016}, year = {2021}, } @article{9093, abstract = {We employ the Gross-Pitaevskii equation to study acoustic emission generated in a uniform Bose gas by a static impurity. The impurity excites a sound-wave packet, which propagates through the gas. We calculate the shape of this wave packet in the limit of long wave lengths, and argue that it is possible to extract properties of the impurity by observing this shape. We illustrate here this possibility for a Bose gas with a trapped impurity atom -- an example of a relevant experimental setup. Presented results are general for all one-dimensional systems described by the nonlinear Schrödinger equation and can also be used in nonatomic systems, e.g., to analyze light propagation in nonlinear optical media. Finally, we calculate the shape of the sound-wave packet for a three-dimensional Bose gas assuming a spherically symmetric perturbation.}, author = {Marchukov, Oleksandr and Volosniev, Artem}, issn = {2542-4653}, journal = {SciPost Physics}, number = {2}, publisher = {SciPost Foundation}, title = {{Shape of a sound wave in a weakly-perturbed Bose gas}}, doi = {10.21468/scipostphys.10.2.025}, volume = {10}, year = {2021}, } @article{9097, abstract = {Psoriasis is a chronic inflammatory skin disease clinically characterized by the appearance of red colored, well-demarcated plaques with thickened skin and with silvery scales. Recent studies have established the involvement of a complex signalling network of interactions between cytokines, immune cells and skin cells called keratinocytes. Keratinocytes form the cells of the outermost layer of the skin (epidermis). Visible plaques in psoriasis are developed due to the fast proliferation and unusual differentiation of keratinocyte cells. Despite that, the exact mechanism of the appearance of these plaques in the cytokine-immune cell network is not clear. A mathematical model embodying interactions between key immune cells believed to be involved in psoriasis, keratinocytes and relevant cytokines has been developed. The complex network formed of these interactions poses several challenges. Here, we choose to study subnetworks of this complex network and initially focus on interactions involving TNFα, IL-23/IL-17, and IL-15. These are chosen based on known evidence of their therapeutic efficacy. In addition, we explore the role of IL-15 in the pathogenesis of psoriasis and its potential as a future drug target for a novel treatment option. We perform steady state analyses for these subnetworks and demonstrate that the interactions between cells, driven by cytokines could cause the emergence of a psoriasis state (hyper-proliferation of keratinocytes) when levels of TNFα, IL-23/IL-17 or IL-15 are increased. The model results explain and support the clinical potentiality of anti-cytokine treatments. Interestingly, our results suggest different dynamic scenarios underpin the pathogenesis of psoriasis, depending upon the dominant cytokines of subnetworks. 
We observed that the increase in the level of IL-23/IL-17 and IL-15 could lead to psoriasis via a bistable route, whereas an increase in the level of TNFα would lead to a monotonic and gradual disease progression. Further, we demonstrate how this insight, bistability, could be exploited to improve the current therapies and develop novel treatment strategies for psoriasis.}, author = {Pandey, Rakesh and Al-Nuaimi, Yusur and Mishra, Rajiv Kumar and Spurgeon, Sarah K. and Goodfellow, Marc}, issn = {20452322}, journal = {Scientific Reports}, publisher = {Springer Nature}, title = {{Role of subnetworks mediated by TNF α, IL-23/IL-17 and IL-15 in a network involved in the pathogenesis of psoriasis}}, doi = {10.1038/s41598-020-80507-7}, volume = {11}, year = {2021}, } @article{9098, abstract = {We study properties of the volume of projections of the n-dimensional cross-polytope $\crosp^n = \{ x \in \R^n \mid |x_1| + \dots + |x_n| \leqslant 1\}.$ We prove that the projection of $\crosp^n$ onto a k-dimensional coordinate subspace has the maximum possible volume for k=2 and for k=3. We obtain the exact lower bound on the volume of such a projection onto a two-dimensional plane. Also, we show that there exist local maxima which are not global ones for the volume of a projection of $\crosp^n$ onto a k-dimensional subspace for any n>k⩾2.}, author = {Ivanov, Grigory}, issn = {0012365X}, journal = {Discrete Mathematics}, number = {5}, publisher = {Elsevier}, title = {{On the volume of projections of the cross-polytope}}, doi = {10.1016/j.disc.2021.112312}, volume = {344}, year = {2021}, } @article{9099, abstract = {We show that on an Abelian variety over an algebraically closed field of positive characteristic, the obstruction to lifting an automorphism to a field of characteristic zero as a morphism vanishes if and only if it vanishes for lifting it as a derived autoequivalence. We also compare the deformation space of these two types of deformations.}, author = {Srivastava, Tanya K}, issn = {14208938}, journal = {Archiv der Mathematik}, publisher = {Springer Nature}, title = {{Lifting automorphisms on Abelian varieties as derived autoequivalences}}, doi = {10.1007/s00013-020-01564-y}, year = {2021}, } @article{9100, abstract = {Marine environments are inhabited by a broad representation of the tree of life, yet our understanding of speciation in marine ecosystems is extremely limited compared with terrestrial and freshwater environments. Developing a more comprehensive picture of speciation in marine environments requires that we 'dive under the surface' by studying a wider range of taxa and ecosystems is necessary for a more comprehensive picture of speciation. Although studying marine evolutionary processes is often challenging, recent technological advances in different fields, from maritime engineering to genomics, are making it increasingly possible to study speciation of marine life forms across diverse ecosystems and taxa. 
Motivated by recent research in the field, including the 14 contributions in this issue, we highlight and discuss six axes of research that we think will deepen our understanding of speciation in the marine realm: (a) study a broader range of marine environments and organisms; (b) identify the reproductive barriers driving speciation between marine taxa; (c) understand the role of different genomic architectures underlying reproductive isolation; (d) infer the evolutionary history of divergence using model‐based approaches; (e) study patterns of hybridization and introgression between marine taxa; and (f) implement highly interdisciplinary, collaborative research programmes. In outlining these goals, we hope to inspire researchers to continue filling this critical knowledge gap surrounding the origins of marine biodiversity.}, author = {Faria, Rui and Johannesson, Kerstin and Stankowski, Sean}, issn = {14209101}, journal = {Journal of Evolutionary Biology}, number = {1}, pages = {4--15}, publisher = {Wiley}, title = {{Speciation in marine environments: Diving under the surface}}, doi = {10.1111/jeb.13756}, volume = {34}, year = {2021}, } @article{9101, abstract = {Behavioral predispositions are innate tendencies of animals to behave in a given way without the input of learning. They increase survival chances and, due to environmental and ecological challenges, may vary substantially even between closely related taxa. These differences are likely to be especially pronounced in long-lived species like crocodilians. This order is particularly relevant for comparative cognition due to its phylogenetic proximity to birds. Here we compared early life behavioral predispositions in two Alligatoridae species. We exposed American alligator and spectacled caiman hatchlings to three different novel situations: a novel object, a novel environment that was open and a novel environment with a shelter. This was then repeated a week later. During exposure to the novel environments, alligators moved around more and explored a larger range of the arena than the caimans. When exposed to the novel object, the alligators reduced the mean distance to the novel object in the second phase, while the caimans further increased it, indicating diametrically opposite ontogenetic development in behavioral predispositions. Although all crocodilian hatchlings face comparable challenges, e.g., high predation pressure, the effectiveness of parental protection might explain the observed pattern. American alligators are apex predators capable of protecting their offspring against most dangers, whereas adult spectacled caimans are frequently predated themselves. Their distancing behavior might be related to increased predator avoidance and also explain the success of invasive spectacled caimans in the natural habitats of other crocodilians.}, author = {Reber, Stephan A. and Oh, Jinook and Janisch, Judith and Stevenson, Colin and Foggett, Shaun and Wilkinson, Anna}, issn = {14359456}, journal = {Animal Cognition}, publisher = {Springer Nature}, title = {{Early life differences in behavioral predispositions in two Alligatoridae species}}, doi = {10.1007/s10071-020-01461-5}, year = {2021}, } @article{9119, abstract = {We present DILS, a deployable statistical analysis platform for conducting demographic inferences with linked selection from population genomic data using an Approximate Bayesian Computation framework. 
DILS takes as input single‐population or two‐population data sets (multilocus fasta sequences) and performs three types of analyses in a hierarchical manner, identifying: (a) the best demographic model to study the importance of gene flow and population size change on the genetic patterns of polymorphism and divergence, (b) the best genomic model to determine whether the effective size Ne and migration rate N, m are heterogeneously distributed along the genome (implying linked selection) and (c) loci in genomic regions most associated with barriers to gene flow. Also available via a Web interface, an objective of DILS is to facilitate collaborative research in speciation genomics. Here, we show the performance and limitations of DILS by using simulations and finally apply the method to published data on a divergence continuum composed by 28 pairs of Mytilus mussel populations/species.}, author = {Fraisse, Christelle and Popovic, Iva and Mazoyer, Clément and Spataro, Bruno and Delmotte, Stéphane and Romiguier, Jonathan and Loire, Étienne and Simon, Alexis and Galtier, Nicolas and Duret, Laurent and Bierne, Nicolas and Vekemans, Xavier and Roux, Camille}, issn = {17550998}, journal = {Molecular Ecology Resources}, publisher = {Wiley}, title = {{DILS: Demographic inferences with linked selection by using ABC}}, doi = {10.1111/1755-0998.13323}, year = {2021}, } @article{9158, abstract = {While several tools have been developed to study the ground state of many-body quantum spin systems, the limitations of existing techniques call for the exploration of new approaches. In this manuscript we develop an alternative analytical and numerical framework for many-body quantum spin ground states, based on the disentanglement formalism. In this approach, observables are exactly expressed as Gaussian-weighted functional integrals over scalar fields. We identify the leading contribution to these integrals, given by the saddle point of a suitable effective action. Analytically, we develop a field-theoretical expansion of the functional integrals, performed by means of appropriate Feynman rules. The expansion can be truncated to a desired order to obtain analytical approximations to observables. Numerically, we show that the disentanglement approach can be used to compute ground state expectation values from classical stochastic processes. While the associated fluctuations grow exponentially with imaginary time and the system size, this growth can be mitigated by means of an importance sampling scheme based on knowledge of the saddle point configuration. We illustrate the advantages and limitations of our methods by considering the quantum Ising model in 1, 2 and 3 spatial dimensions. Our analytical and numerical approaches are applicable to a broad class of systems, bridging concepts from quantum lattice models, continuum field theory, and classical stochastic processes.}, author = {De Nicola, Stefano}, issn = {1742-5468}, journal = {Journal of Statistical Mechanics: Theory and Experiment}, keywords = {Statistics, Probability and Uncertainty, Statistics and Probability, Statistical and Nonlinear Physics}, number = {1}, publisher = {IOP Publishing}, title = {{Disentanglement approach to quantum spin ground states: Field theory and stochastic simulation}}, doi = {10.1088/1742-5468/abc7c7}, volume = {2021}, year = {2021}, }
Quantum Mechanics: The Death of Local Realism

From Pantheism to Monotheism

Charlie had covered a lot of ground by this point. He'd started in the age of mythology, at the dawn of civilization, looking at the cultural and socio-political forces that underpinned and supported the local mythology and priesthood classes of the ancient civilizations, and at some of the broader themes of creation mythology that crossed different cultures, particularly in the Mediterranean region, which drove the development of Western civilization. There was certainly much more rich academic, historical (and psychological) material to cover on the mythology of the ancients, but Charlie thought he had at least covered a portion of it and hit the major themes: at the very least he had shown how this cultural and mythological melting pot led to the predominance of the Abrahamic religions in the West, and the proliferation of Buddhism, Hinduism and Taoism in the East.

As civilizations and empires began to emerge in the West, there was a need, a vacuum if you will, for a theological or even religious binding force to keep these vast empires together. You saw it initially with the pantheon of Egyptian/Greek/Roman gods that dominated the Mediterranean in the first millennium BCE, gods who were synthesized and brought together as the civilizations from which they originated coalesced through trade and warfare. Also in the first millennium BCE we encounter the first vast empires, first the Persian and then the Greek, both of which not only facilitated trade throughout the region but drove cultural assimilation as well.

In no small measure as a reaction to what were considered dated or ignorant belief systems, belief systems that merely reinforced the ruling class and were not designed to provide real insight and liberation for the individual, the philosophical systems of the Greeks emerged, reflecting a deep-seated dissatisfaction with the religious and mythological systems of the time, and even with the political systems that were very much integrated with these religious structures, to the detriment of society at large from the philosophers' perspective. The life and times of Socrates probably best characterize the forces at work during this period, from which emerged the likes of Plato and Aristotle, who guided the development of the Western mind for the next 2500 years, give or take a century.

Jesus's life in many respects runs parallel to that of Socrates, manifesting and reacting to the same set of forces that Socrates reacted to, except slightly further to the East and within the context of Roman (Jewish) rule rather than Greek rule, but still reflecting the same rebellion against the forces that supported power and authority. Jesus's message was lost however, and survives down to us through translation and interpretation that undoubtedly dilutes his true teaching; only the core survives. The works of Plato and Aristotle do survive down to us though, so we can analyze and digest their complete metaphysical systems, systems that touch on all aspects of thought and intellectual development: the scope of Aristotle's epistêmai.
In the Common Era (CE), the year of the Lord so to speak (AD), monotheism takes root in the West, coalescing and providing the driving force first for the Roman Empire and then for the Byzantine Empire that followed it, and then providing the basis of the Islamic conquests and their subsequent empire, the Muslims attesting to the same Abrahamic traditions and roots as the Christians and the Jews (of which Jesus was of course one, a fact Christians sometimes forget). Although monotheism undoubtedly borrowed and integrated from the philosophical traditions that preceded it, mainly to justify and solidify its theological foundations for the intellectually minded, with the advent of the authority of the Church, which "interpreted" the Christian tradition for the good of the masses, you find a trend of suppression of rational or logical thinking that was in any way inconsistent with the Bible, the Word of God, or that in any way challenged the power of the Church. In many respects, with the rise in power and authority of the Church we see an abandonment of the powers of the mind, the intellect, which were held so fast and dear by Plato and Aristotle. Reason was abandoned for faith as it were, blind faith in God. The Dark Ages came and went.

The Scientific Revolution

Then another revolution takes place, one that unfolds in Western Europe over centuries and covers first the Renaissance, then the Scientific Revolution and the Age of Enlightenment, where printing and publishing start to make many ancient texts, and their interpretations and commentary, available to a broader public outside of the monasteries. This intellectual groundswell provided the spark that ended up burning down blind faith in the Bible, and in the Church that held its literal interpretation so dear. Educational systems akin to colleges, along with a core curriculum of sorts (scholasticism), start to crop up in Western Europe in the Renaissance and the Age of Enlightenment, providing access to many of the classic texts and rational frameworks to more and more learned men; ideas and thoughts that expanded upon mankind's notion of reason and its limits, and its relationship to theology and society, begin to be exchanged via letters and published works in a way that was not possible before. This era of intellectual growth culminates in the destruction of the geocentric model of the universe, striking the crucial blow to the foundations of all of the Abrahamic religions and laying the foundation for the predominance of science (natural philosophy) and reason that marked the centuries that followed and that underpins Western civilization to this day.

Then came Copernicus, Kepler, Galileo and Newton, with many great thinkers in between of course, alongside the philosophical and metaphysical advancements from the likes of Descartes and Kant among others, establishing without question empiricism, deduction and the scientific method as the guiding principles upon which knowledge and reality must be based, and providing the philosophical basis for the political revolutions that marked the end of the 18th century in France, England and America.

The geometry and astronomy of the Greeks, as it turned out, Euclid and Ptolemy in particular, provided the mathematical framework within which the advancements of the Scientific Revolution were made. Ptolemy's geocentric model was upended no doubt, but his was the model that was refuted in the new system put forth by Copernicus some 15 centuries later; it was the reference point.
And Euclid's geometry was superseded, expanded really, by Descartes's model, i.e. the Cartesian coordinate system, which provided the basis for analytic geometry and calculus, the mathematical foundations of modern physics that are still with us today.

The twentieth century saw even more rapid developments in science, and in physics in particular, with the expansion of Newtonian physics through Einstein's Theory of Relativity in the early 20th century, and then with the subsequent advancement of Quantum Theory, which followed close behind and which provides the theoretical foundation for the digital world we live in today[1].

But the Scientific Revolution of the 17th, 18th and 19th centuries did not correspond to the complete abandonment of the notion of an anthropomorphic God. The advancements of this period of Western history provided more of an extension of monotheism, a broader theoretical and metaphysical framework within which God was to be viewed, rendering the holy texts not obsolete per se but relegating them more to the realm of allegory and mythology, and most certainly challenging the literal interpretations of the Bible and Qur'an that had prevailed for centuries.

The twentieth century was different though. Although you see some scattered references to God (Einstein's famous quotation "God does not play dice" for example), the split between religion and science is cemented in the twentieth century. The analytic papers and studies that are done, primarily by physicists and scientists, although in some cases having a metaphysical bent or at least some form of metaphysical interpretation (i.e. what do the theories imply about the underlying reality they intend to explain), leave the notion of God out altogether, a marked contrast to the philosophers and scientists of the Scientific Revolution a century or two prior, for whom the notion of God, as perceived by the Church, continued to play a central role, if only in terms of the underlying faith of the authors.

The shift in the twentieth century however, which can really only be described as radical even though its implications are only inferred and rarely spoken of directly, is the change of faith from an underlying anthropomorphic entity/deity that represents the guiding force of the universe and mankind in particular, to a faith in the idea that the laws of the universe can be discovered, i.e. that they exist eternally, and that these laws themselves are paramount relative to religion or theology, which does not rest on an empirical foundation. Some Enlightenment philosophers would of course take issue with this claim, but twentieth century science was about what could be proven experimentally in the physical world, not about what could be the result of reason or logical constructs.

This faith, this transformation of faith from religion toward science as it were, is implicit in all the scientific developments of the twentieth century, particularly in the physics community, where it is fair to say that any statement on the role of God in science reflected ignorance: ignorance of the underlying framework of laws that clearly governed the behavior of "things", things which were real and which could be described in terms of qualities such as mass, energy, momentum, velocity, trajectory, etc. These constructs were much more sound and real than the fluff of the philosophers and metaphysicians, for whom mind and reason, and in fact perception, were on par with the physical world to at least some extent.
Or were they?

In this century of great scientific advancement, advancement which fundamentally transformed the world within which we live, facilitating the development of nuclear energy, atomic bombs, and digital computer technology to name but a few of what can only be described as revolutionary developments, and which in turn drove tremendous economic progress and prosperity throughout the modern industrialized world post World War II, it is science, driven at its core by advanced mathematics, which emerges as the underlying truth within which the universe and reality are to be perceived. Mathematical theories and their associated formulas predicted the data and behavior not only of the forces that prevail on our planet, but also of grand cosmological forces: laws which describe the creation and motion of the universe and galaxies, the motion of the planets and the stars, laws that describe the inner workings of planetary and galaxy formation, of stars and black holes.

And then, to top things off, in the very same century we find that in the subatomic realm the world is governed by a seemingly very different set of laws, laws which appear fundamentally incompatible with the laws that govern the "classical world". With the discovery of quantum theory and the ability to experimentally verify its predictions, we begin to understand the behavior of the subatomic realm, a fantastic, mysterious and extraordinary (and seemingly random) world which truly defies imagination. A world where the notion of continuous existence itself is called into question. The Ancient Greek philosophers could never have foreseen wave-particle duality; no scientist before the twentieth century could. The fabric of reality was in fact much more mysterious than anyone could have imagined.

From Charlie's standpoint, something was lost, though, as these advancements and "discoveries" were made. He believed in progress no doubt, the notion that civilization progresses ever forward and that an "evolution" of sorts had taken place with humanity over the last several thousand years, but he did believe that some social and theological intellectual rift had been created in the twentieth century, and that some sort of replacement was needed. Without religion, the moral and ethical framework of society is left governed only by the rule of law, a powerful force no doubt, and perhaps grounded in an underlying sense of morality and ethics, but the personal foundation of morality and ethics had been crushed with the advent of science from Charlie's perspective, plunging the world into conflict and materialism despite the economic progress and greater access to resources for mankind at large. It wasn't science's fault per se, but from Charlie's view it was left to the intellectual community at large to find a replacement for that which had been lost. There was no longer any self-governing force of "do good to thy neighbor" that permeated society, no fellowship of the common man; what was left to shape our world seemed to be a "what's in it for me" and a "let's see what I can get away with" attitude, one that flooded the court systems of the West and fueled radical religious groups and terrorism itself, leading to more warfare and strife rather than the peace and prosperity that was supposed to be the promise of science, wasn't it?
With the loss of God, his complete removal from the intellectual framework of Western society, there was a break in the knowledge of, and belief in, the interconnectedness of humanity and societies at large, and Quantum Mechanics called this loss of faith in interconnectedness directly into question from Charlie's perspective. If everything was connected, entangled, at the subatomic level, if this was a proven and scientifically verified fact, how could we not take the next logical step and ask what that meant for our world-view? "That's a philosophical problem" did not seem to be good enough for Charlie.

Abandonment of religion for something more profound was a good thing no doubt, but what was it that people really believed in nowadays, in the Digital Era? That things and people were fundamentally separate, that they were operated on by forces that determined their behavior, that the notion of God was for the ignorant and the weak, and that eventually all of the underlying behavior and reality could be described within the context of the same science which discovered Relativity and Quantum Mechanics? Or worse, that these questions themselves were not of concern, that our main concern is the betterment of ourselves and our individual families even if that meant those next to us would need to suffer for our gain? Well, where did that leave us? Where do ethics and morals fit into a world driven by greed and self-promotion?

To be fair, Charlie did see some movement toward a more refined theological perspective toward the end of the twentieth century and into the 21st century, as Yoga started to become more popular and some of the Eastern theo-philosophical traditions such as Tai Chi and Buddhism began to gain a foothold in the West, looked at perhaps as more rational belief systems than the religions of the West, which have been and remain such a source of conflict and disagreement throughout the world. But the driving force for this adoption of Yoga in the West seemed to be more aligned with materialism and self-gain than with spiritual advancement and enlightenment, and Charlie didn't see this Eastern perspective permeating into broader society; it wasn't being taught in schools, the next generation, the Digital Generation, looked to be more materialistic than its predecessors, and theology was relegated to the domain of religion, which in the West wasn't even fair game to teach in schools anymore.

The gap between science and religion that emerged as a byproduct of the Scientific Revolution remained significant; the last thing you were going to find were scientists messing around with the domain of religion, or even theology for that matter. Metaphysics maybe, in terms of what the developments of science said about reality, but most certainly not theology and definitely not God. And so our creation myth was bereft of a creator: the Big Bang had no actors, simply primal nuclear and subatomic forces at work on particles that expanded and formed gases and planets and ultimately led to us, the thinking, rational human mind capable of contemplating and discovering the laws of the universe and questioning our place in them, all a byproduct of natural selection, the guiding forces apparently random chance, time, and the genetic encoding of the will to survive as a species.

Quantum Physics

Perhaps quantum theory, quantum mechanics, could provide that bridge.
There are some very strange behaviors that have been witnessed and modeled (and proven by experiment) at the quantum scale, principles that defy the notions of space and time that were cemented at the beginning of the twentieth century by Einstein and others. So Charlie dove into quantum mechanics to see what he could find and where it led. For if there were gods or heroes in our culture today, they were the Einsteins, Bohrs, Heisenbergs and Hawkings of our time that defined our reality and determined what the next generation of minds were taught, those that broke open the mysteries of the universe with their minds and helped us better understand the world we live in. Or did they?

From Charlie's standpoint, Relativity Theory could be grasped intellectually by the educated, intelligent mind. You didn't need advanced degrees or a deep understanding of complex mathematics to understand that, at a very basic level, Relativity Theory implied that mass and energy were equivalent, related by the speed of light, which moves at a fixed speed no matter what your frame of reference; that space and time were not in fact separate and distinct concepts; that our ideas of three-dimensional Cartesian space were inadequate for describing the world around us at the cosmic scale; that space and time were correlated concepts, more accurately grouped together in the notion of spacetime, which describes the motion and behavior of everything in the universe more accurately than the theorems devised by Newton at least.

Relativity says that even gravity's effect is subject to the same principles that play out at the cosmic scale, i.e. that spacetime "bends" around points of singularity (black holes for example), bends to the extent that light itself is affected by the severe gravitational forces at these powerful places in the universe. And indeed our measurements of time and space are "relative", relative to the speed and frame of reference from which these measurements are made; the observer is in fact a key element in the process of measurement. Although Relativity represented a major step in metaphysical and even scientific approach, expanding our notions of how the universe around us could be described, it still left us with a deterministic and realist model of the universe.

But at their basic, core level, these concepts could be understood, grasped as it were, by the vast majority of the public, even if they had very little if any bearing on their daily lives and didn't fundamentally change or shift their underlying religious or theological beliefs, or even their moral or ethical principles. Relativity was accepted in the modern age; it just didn't really affect the subjective frame of reference, the mental or intellectual frame of reference, within which the majority of humanity perceived the world around them. It was relegated to the realm of physics and a problem for someone else to consider, at best a problem which needed to be understood to pass a physics or science exam in high school or college, to be buried in your consciousness in lieu of more pressing daily and life pursuits, be they family, career and money, or other forms of self-preservation in the modern, Digital era; an era most notably marked by materialism, self-promotion and greed.

Quantum Theory was different though. Its laws were more subtle and complex than the world described by classical physics, the world described with painstaking mathematical precision by Newton, Einstein and others.
And after a lot of studying and research, the only conclusion that Charlie could definitively come to was that in order to understand quantum theory, or at least try to come to terms with it, a wholesale different perspective on what reality truly was, or at the very least on how reality was to be defined, was required. In other words, in order to understand what quantum theory actually means, in order to grasp the underlying intellectual context within which the behaviors of the underlying particles/fields that quantum theory describes are to be understood, a new framework of understanding, a new description of reality, must be adopted. What was real, as understood by the classical physics which had dominated the minds of humankind for centuries, needed to be abandoned, or at the very least significantly modified, in order for quantum theory to be comprehended by any mind, or at least any mind that had spent time struggling with quantum theory and trying to grasp it. Things would never be the same from a physics perspective, this much was clear; whether or not the daily lives of the bulk of those who struggle to survive in the civilized world would evolve in concert with these developments remained to be seen.

Quantum Mechanics, also known as quantum physics or simply Quantum Theory, is the branch of physics that deals with the behavior of particles and matter in the atomic and subatomic realms, the quantum realm so called given the quantized nature of "things" at this scale (more on this later). So you have some sense of scale: an atom is about 10^-8 cm across, give or take, and the nucleus, or center of an atom, which is made up of what we now call protons and neutrons, is approximately 10^-12 cm across. An electron or a photon, the name we give to a "particle" of light, cannot truly be "measured" from a size perspective in terms of classical physics, for many of the reasons we'll get into below as we explore the boundaries of the quantum world, but suffice it to say that at present our best estimate of the size of an electron is in the range of 10^-18 cm or so[2].

Whether or not electrons, or photons for that matter, really exist as particles whose physical size and/or momentum can actually be "measured" is not as straightforward a question as it might appear, and it gets at some level to the heart of the problem we encounter when we attempt to apply the principles of "existence" or "reality" to the subatomic, or quantum, realm within the semantic and intellectual framework established by the classical physics that has evolved over the last three hundred years or so; namely, reality as defined by independently existing, deterministic and quantifiable measurements of size, location, momentum, mass or velocity.

The word quantum comes from the Latin quantus, meaning "how much", and it is used in this context to identify the behavior of subatomic things that move from and between discrete states rather than over a continuum of values or states as is presumed in classical physics. The term itself had taken on meanings in several contexts within a broad range of scientific disciplines in the 19th and early 20th centuries, but it was formalized and refined as a specific field of study, "quantum mechanics", by Max Planck at the turn of the 20th century, and quantization remains the prevailing and distinguishing characteristic of reality at this scale.
Newtonian physics, and even the extension of Newtonian physics as "discovered" by Einstein with Relativity Theory at the beginning of the twentieth century (a theory whose accuracy is well established via experimentation at this point), assumes that particles, things made up of mass, energy and momentum, exist independently of the observer and their instruments of observation, are presumed to exist in continuous form, move along specific trajectories, and have properties (mass, velocity, etc.) that can only be changed by the action of some force upon them. This is the essence of Newtonian mechanics, upon which the majority of modern day physics, or at least the laws of physics that affect us here at a human scale, is defined, and it falls philosophically into the realm of realism and determinism.

The only caveat to this view put forth by Einstein is that these measurements themselves, of the speed or even the mass or energy content of a specific object, can only be said to be universally defined according to these physical laws within the specific frame of reference of an observer. Their underlying reality is not questioned; these things clearly exist independent of observation or measurement (or so it seems), but the values, the properties, of these things are relative to the frame of reference of the observer. This is what Relativity tells us. So the velocity of a massive body, and even the measurement of time itself, which is a function of distance and speed, is a function of the relative speed and position of the observer performing the measurement. For the most part, the effects of Relativity can be ignored when we are referring to objects on Earth that move at speeds minimal with respect to the speed of light and are far less massive than, say, black holes. But as we measure things at the cosmic scale, where distances are measured in light years and where black holes and other massive phenomena bend spacetime, aka singularities, the effects of Relativity cannot be ignored.[3]

Leaving aside the field of cosmology for the moment and getting back to the history of the development of quantum mechanics (which arguably is integrally related to cosmology at a basic level): at the end of the 19th century Planck was commissioned by electric companies to create light bulbs that used less energy, and in this context he was trying to understand how the intensity of the electromagnetic radiation emitted by a black body (an object that absorbs all electromagnetic radiation regardless of frequency or angle of incidence) depended on the frequency of the radiation, i.e. the color of the light. In this work, and after several iterations of hypotheses that failed to have predictive value, he fell upon the theory that energy is only absorbed or released in quantized form, i.e. in discrete packets of energy he referred to as "bundles" or "energy elements", the so-called Planck postulate. And so the field of quantum mechanics was born.[4]
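In modern notation (supplied here for convenience; the essay states the postulate only in words), Planck's postulate says that radiation of frequency \nu exchanges energy only in whole multiples of a fixed quantum:

E_n = n h \nu, \qquad n = 1, 2, 3, \ldots

where h \approx 6.626 \times 10^{-34} joule-seconds is the Planck constant. The extreme smallness of h is exactly why this graininess of energy goes unnoticed at everyday, human scales.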
Despite the fact that Einstein is best known for his mathematical models and theories for the description of the forces of gravity and light at the cosmic scale, i.e. Relativity, his work was also instrumental in the advancement of quantum mechanics. For example, in his work on the effect of radiation on metallic matter and non-metallic solids and liquids, he discovered that electrons are emitted from matter as a consequence of their absorption of energy from electromagnetic radiation of a very short wavelength, such as visible or ultraviolet radiation. Einstein established that in certain experiments light appeared to behave like a stream of tiny particles, which he called photons, not just like a wave, lending more credence and authority to the particle theories describing the quantum realm. He therefore hypothesized the existence of light quanta, or photons, as a result of these experiments, laying the groundwork for subsequent wave-particle duality discoveries and reinforcing the discoveries of Planck with respect to black body radiation and its quantized behavior.[5]

Wave-Particle Duality and Wavefunction Collapse

Prior to the establishment of light's properties as waves, and then in turn the establishment of the wave-like characteristics of subatomic elements like photons and electrons by Louis de Broglie in the 1920s, it had been fairly well established that these subatomic particles, or electrons and photons as they were later called, behaved like particles. The debate and study of the nature of light and subatomic matter, however, went all the way back to the 17th century, where competing theories of the nature of light were proposed by Isaac Newton, who viewed light as a system of particles, and Christiaan Huygens, who postulated that light behaved like a wave. It was not until the work of Einstein, Planck, de Broglie and other physicists of the twentieth century that the nature of these subatomic particles, both light and electrons, was shown to be both particle-like and wave-like, the result depending upon the experiment and the context of the system being observed. This paradoxical principle, known as wave-particle duality, is one of the cornerstones, and underlying mysteries, of the reality described by Quantum Theory.

As part of the discoveries of subatomic wave-like behavior, what Planck found in his study of black body radiation (and Einstein as well within the context of his study of light and photons) was that the measurements or states of a given particle such as a photon or an electron had to take on values that were multiples of very small and discrete quantities, i.e. non-continuous values, the relation of which is set by the constant now known as the Planck constant[6]. In the quantum realm, then, there is not a continuum of values and states of matter as was assumed in physics up until that time; there are bursts of energy and changes of state that are ultimately discrete, and yet fixed, where certain states and certain values cannot in fact occur at all, representing a dramatic departure from the way most of us think about movement and change in the "real world" and most certainly a significant departure from the Newtonian mechanics upon which Relativity was based.[7]
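Einstein's photon hypothesis described above can be stated quantitatively: a photon of frequency \nu carries energy h\nu, and an electron escapes a metal only if that single-photon energy exceeds the metal's work function \phi. The snippet below is a minimal sketch of that relation; the work function value (roughly that of zinc) and the function name are illustrative assumptions, not anything taken from the essay.

```python
# Minimal sketch of Einstein's photoelectric relation: E_kinetic = h*nu - phi.
# The work function value (~4.3 eV, roughly zinc) is an illustrative assumption.
PLANCK_H_EV = 4.136e-15  # Planck constant in eV*s

def ejected_electron_energy_ev(frequency_hz, work_function_ev=4.3):
    """Kinetic energy (eV) of an ejected electron, or None if no emission.

    No matter how intense the light, no electron escapes unless a single
    photon carries enough energy: the quantized behavior Einstein explained.
    """
    photon_energy = PLANCK_H_EV * frequency_hz
    if photon_energy < work_function_ev:
        return None  # light too low-frequency: no photoelectrons at all
    return photon_energy - work_function_ev

print(ejected_electron_energy_ev(5.0e14))   # visible light: None (no emission)
print(ejected_electron_energy_ev(1.5e15))   # ultraviolet: ~1.9 eV electrons
```

The point of the sketch is the threshold: below a certain frequency nothing is emitted at all, which a continuous wave picture of light cannot explain.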
The classic demonstration of light's behavior as a wave, and perhaps one of the most astonishing experiments of all time, is what is called the double-slit experiment[8]. In the basic version of this experiment, a light source such as a laser beam is shone at a thin plate that is pierced by two parallel slits. The light in turn passes through the slits and displays on a screen behind the plate. The image that is displayed on the screen is not one of two constant bands of light passing through the two slits, as you might expect if light were simply a particle or a set of particles; what is displayed on the screen behind the double-slitted plate is a pattern of light and dark bands, indicating that the light is behaving like a wave and is subject to interference, the strength of the light on the screen cancelling itself out or becoming stronger depending upon how the individual waves interfere with each other. This behavior is exactly akin to fundamental wave-like behavior in water, where the waves reinforce if they synchronize correctly (peak meeting peak) and cancel each other out if not (peak meeting trough).

What is even more interesting, and was most certainly unexpected, is what happened once equipment was developed that could reliably send a single particle (electron or photon, for example; the behavior is the same) through a double-slitted plate. Each particle did end up at a single location on the screen after passing through one of the slits, as expected, but the location on the screen, as well as which slit the particle appeared to pass through (in later versions of the experiment, which slit "it" passed through could in fact be detected), seemed to be somewhat random. What researchers found was that, as more and more of these subatomic particles were sent through the plate one at a time, the same wave-like interference pattern emerged that showed up when the experiment was run with a full beam of light, as had been done by Young some 100 years prior.

So hold on for a second. Charlie had gone over this again and again, and all the literature he read on quantum theory and quantum mechanics pretty much said the same thing, namely that the heart of the mystery of quantum mechanics could be seen in this very simple experiment. And yet it was really hard, perhaps impossible, to understand what was actually going on, or at least to understand without abandoning some of the very foundational principles of physics, like for example the notion that these things called subatomic particles actually existed, because they seemed to behave like waves. Or did they?

What was clear was that this subatomic particle, corpuscle or whatever you wanted to call it, did not have a linear and fully deterministic trajectory in the classical physics sense; this much was very clear from the fact that the distribution against the screen appeared to be random when the particles were sent through the double slits individually. But what was more odd was that when the experiment was run one corpuscle at a time, not only was the final location on the screen seemingly random for each individual run, but the same pattern emerged after many, many single-particle runs as when a full wave, or set of these corpuscles, was sent through the double slits. So not only did the individual photon seem to be aware of the final wave-like pattern of its parent wave, but the corpuscle appeared to be interfering with itself as it went through the two slits individually. What? What the heck was going on here?
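This "individually random, collectively patterned" behavior can be mimicked numerically. The sketch below is an illustration under simplifying assumptions (ideal far-field two-slit interference with no diffraction envelope; the slit separation, wavelength and distances are made-up example values, not figures from the essay): it draws single "detection" positions at random from the standard two-slit intensity pattern, and the fringes appear only in the aggregate histogram.

```python
import math
import random

# Idealized two-slit fringe intensity on the screen: I(x) ~ cos^2(pi*d*x/(L*lam)).
# d = slit separation, lam = wavelength, L = plate-to-screen distance (meters).
D_SLIT, WAVELEN, L_SCREEN = 20e-6, 650e-9, 1.0  # fringe spacing ~3.3 cm

def intensity(x):
    """Relative probability of detecting a single particle at position x."""
    return math.cos(math.pi * D_SLIT * x / (L_SCREEN * WAVELEN)) ** 2

def detect_one(half_width=0.05):
    """Draw one random detection position by rejection sampling from I(x)."""
    while True:
        x = random.uniform(-half_width, half_width)
        if random.random() < intensity(x):  # intensity is already <= 1
            return x

# Each detection is random; only the histogram of many reveals the fringes.
hits = [detect_one() for _ in range(20000)]
bins = [0] * 21
for x in hits:
    bins[min(20, int((x + 0.05) / 0.005))] += 1
for i, count in enumerate(bins):
    print(f"{-0.05 + i * 0.005:+.3f} m | {'#' * (count // 40)}")
```

No single call to detect_one tells you anything about the pattern; run it 20,000 times and the light and dark bands are unmistakable, which is precisely the puzzle the next paragraphs turn to.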
there was a fundamental element of uncertainty or randomness at the individual corpuscle level that could not be escaped, the final locations of these particles measured in toto, after many runs of the experiment, exhibited a statistical distribution that could be modeled quite precisely from a mathematical statistics and probability perspective. That is to say, the sum total distribution of the final locations of all the particles after passing through the slit(s) could be established stochastically, i.e. in terms of a well-defined probability distribution consistent with probability theory and the mathematics that governs statistical behavior. So in total you could predict, in some sense, what the behavior would look like over a large set of runs even if you could not predict what the outcome would be for any individual corpuscle.

The mathematics behind this particle distribution is what is known as the wave function, typically denoted by the Greek letter psi, ψ, or its capital equivalent Ψ. The wave function predicts what the probability distribution of these "particles" will look like on the screen behind the slate over a given period of time, after many individual experiments are run; in quantum theoretical terms, it describes the quantum state of a particle throughout a fixed spacetime interval. The equation governing its evolution was discovered by the Austrian physicist Erwin Schrödinger in 1925 and published in 1926, and is commonly referred to in the scientific literature as the Schrödinger equation, analogous in the field of quantum mechanics to Newton's second law of motion in classical physics.

With the discovery of the wave function, or wavefunction, it became possible to predict the potential locations or states of motion of these subatomic particles, an extremely potent theoretical model that has led to all sorts of inventions and technological advancements in the twentieth century and beyond. The wavefunction represents a probability distribution of potential states or outcomes describing the quantum state of a particle, and it predicts with a great degree of accuracy the probability of finding that particle at a given location or in a given state of motion.

Again, this implied that individual corpuscles were interfering with themselves when passing through the two slits on the slate, which was very odd indeed. In other words, the individual particles were exhibiting wavelike characteristics even when they were sent through the double-slitted slate one at a time. This phenomenon was shown to occur with atoms as well as electrons and photons, confirming that all of these subatomic so-called particles exhibit wavelike properties alongside their particle-like qualities, the behavior observed depending upon the type of experiment, or measurement as it were, to which the "thing" is subjected. It was Louis de Broglie who bridged the theoretical gap between the study of corpuscles (particles, matter or atoms) and waves by establishing the symmetric relation between momentum and wavelength which had at its core Planck's constant, i.e.
the De Broglie equation, \lambda = h / p, where \lambda is the wavelength associated with a particle of momentum p and h is Planck's constant, a strikingly simple expression of this mysterious and somewhat counterintuitive relationship between wavelike and particle-like behavior.

So by the 1920s, then, you have a fairly well established mathematical theory governing the behavior of subatomic particles, backed by a large body of empirical and experimental evidence, indicating quite clearly that what we would call "matter" (or particles or corpuscles) in the classical sense behaves very differently, or at least has very different fundamental characteristics, in the subatomic realm. It exhibits the properties of a particle, a thing or object, as well as those of a wave, depending upon the type of experiment that is run. So the concept of matter itself, as we had been accustomed to dealing with, discussing and measuring it for centuries, at least as far back as the time of Newton (1642-1727), had to be reexamined within the context of quantum mechanics. For in Newtonian physics, and indeed in the geometric and mathematical framework within which it was developed and conceived, a framework that went back to ancient times (Euclid, c. 300 BCE), matter was presumed to be either a particle or a wave, but most certainly not both.

What further complicated matters was that matter itself, again as defined by Newtonian mechanics and its extension via Relativity Theory, together commonly referred to as classical physics, was presumed to have very definite, well-defined, fixed and real properties. Properties like mass, location or position in space, and velocity or trajectory were all presumed to have a real existence independent of whether or not they were measured or observed, even if the actual values were relative to the frame of reference of the observer. All of this hinged, of course, upon the notion that the speed of light was fixed no matter what the frame of reference of the observer; this was a fixed absolute, and nothing could move faster than the speed of light. Yet even this seemingly self-evident notion, or postulate one might call it, ran into problems as scientists continued to explore the quantum realm.

So by the 1920s, the way scientists viewed matter as we would classically consider it, within the context of Newton's postulates as extended into the notion of spacetime put forth by Einstein, was encountering significant difficulties when applied to the behavior of elements in the subatomic, quantum, world. Furthermore, there was extensive empirical and scientific evidence lending significant credibility to quantum theory, illustrating irrefutably that these subatomic elements behaved not only like waves, exhibiting characteristics such as interference and diffraction, but also like particles in the classic Newtonian sense, with measurable, well-defined characteristics that could be quantified within the context of an experiment.
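The equation Schrödinger published in 1926 governs how this probability-carrying wave evolves in time; in modern notation it reads i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\psi, with \hat{H} the Hamiltonian (energy) operator of the system. As for the statistical reading of the wave function described above, a minimal numerical sketch can make it concrete. The code below is illustrative only: it assumes an idealized far-field two-slit intensity pattern (the function name `intensity` and all parameter values are hypothetical choices, not taken from any particular experiment) and samples single-particle detections from it, showing how individually random hits accumulate into interference fringes.

```python
import numpy as np

# Idealized far-field two-slit pattern: an interference (cosine-squared) term
# modulated by a single-slit diffraction envelope. Units are arbitrary; the
# point here is the statistics, not the optics.
def intensity(x, slit_separation=5.0, slit_width=1.0):
    interference = np.cos(np.pi * slit_separation * x) ** 2
    envelope = np.sinc(slit_width * x) ** 2
    return interference * envelope

# Discretize the screen and normalize the intensity into a probability
# distribution (the Born rule: detection probability goes as |psi|^2).
screen = np.linspace(-2.0, 2.0, 2001)
p = intensity(screen)
p /= p.sum()

rng = np.random.default_rng(42)

# Each run detects one particle at a single random screen position. The
# individual hits look random; their histogram converges on the fringe
# pattern as more and more single-particle runs accumulate.
for n in (10, 1_000, 100_000):
    hits = rng.choice(screen, size=n, p=p)
    counts, _ = np.histogram(hits, bins=50, range=(-2.0, 2.0))
    print(f"{n:>7} particles: brightest bin {counts.max()}, darkest bin {counts.min()}")
```

With ten particles the screen looks like noise; with a hundred thousand, the dark bins (destructive interference) stay nearly empty while the bright bins fill up, which is precisely the behavior the single-particle experiments revealed.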
In his Nobel Lecture in 1929, Louis de Broglie summed up the challenge for physicists of his day, and to a large extent physicists of modern times, given the discoveries of quantum mechanics, as follows:

The necessity of assuming for light two contradictory theories – that of waves and that of corpuscles – and the inability to understand why, among the infinity of motions which an electron ought to be able to have in the atom according to classical concepts, only certain ones were possible: such were the enigmas confronting physicists at the time…[10]

Uncertainty, Entanglement, and the Cat in a Box

The other major tenet of quantum theory that sits alongside wave-particle duality, and that adds even more complexity when trying to wrap our minds around what is actually going on in the subatomic realm, is what is referred to as the uncertainty principle, or the Heisenberg uncertainty principle, named after the German theoretical physicist Werner Heisenberg, who first put forth the principle governing what can be known about the position and momentum of these subatomic particles in experiments like the double-slit experiment previously described, even though the wave function itself was the discovery of Schrödinger.

The uncertainty principle states that there is a fundamental limit on the accuracy with which certain pairs of physical properties of atomic particles, position and momentum being the classic pair, can be known at any given time. Physical quantities come in conjugate pairs, and only one member of a given pair can be known precisely at any given time; when one quantity in a conjugate pair is measured and becomes determined, its complementary partner becomes indeterminate. What Heisenberg discovered, and proved, was that the more precisely one attempts to measure one of these complementary properties of a subatomic particle, the less precisely the other can be determined or known.

Published by Heisenberg in 1927, the uncertainty principle asserts that there are fundamental, conceptual limits of observation in the quantum realm, another radical departure from the principles of Newtonian mechanics, which held that all attributes of a thing were measurable at any given time, i.e. existed or were real. The uncertainty principle is a statement about a fundamental property of quantum systems as they are mathematically and theoretically modeled and defined, and of course empirically validated by experimental results, not a statement about the technology and method of the observational systems themselves. This is an important point. This was not a problem with the state of the instrumentation being used for measurement; it was a characteristic of the domain itself.
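Stated in its modern textbook form, for the canonical position-momentum pair, the principle puts a hard floor under the product of the two statistical spreads, where \hbar is the reduced Planck constant:

\Delta x \, \Delta p \geq \frac{\hbar}{2}

No refinement of instrumentation can push the product of these uncertainties below the bound; only the trade-off between the two can be shifted.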
Max Born, who won the Nobel Prize in Physics in 1954 for his work in quantum mechanics, specifically for his statistical interpretation of the wave function[11], gave this other seemingly mysterious attribute of the quantum realm its canonical reading: the wave function is not itself the particle, but its squared magnitude gives the probability of finding the particle at a given place and time (the specific language he uses reveals at some level his interpretation of the quantum theory; more on interpretations later).

Whereas classical physics, the physics prior to the introduction of relativity and quantum theory, distinguished between the study of particles and the study of waves, the introduction of quantum theory and wave-particle duality established that this classic intellectual bifurcation of physics at the macroscopic scale was wholly inadequate for describing and predicting the behavior of these "things" that exist in the subatomic realm, all of which take on the characteristics of both waves and particles depending upon the experiment and the context of the system being observed. Furthermore, the precision with which the state of a "thing" in the subatomic world could be defined was conceptually limited, another divergence from classical physics. And on top of this came the requirement of the mathematical machinery of statistics and probability theory, as well as significant extensions to the underlying geometry, to describe and model behavior at this scale, all of it calling into question the classical materialistic notions and beliefs we had held so dear for centuries.

Even after the continued refinement of, and accumulating experimental evidence for, quantum theory, however, there arose significant resistance to the completeness of the theory itself, or at least questions as to its true implications with respect to Relativity and Newtonian mechanics. The most notable of these criticisms came from Einstein himself, most famously encapsulated in a paper he co-authored with two of his colleagues, Boris Podolsky and Nathan Rosen, published in 1935, which came to be known simply as the EPR paper, or the EPR paradox, and which called attention to what they saw as underlying inconsistencies of the theory that still required explanation. In this paper they extended some of the quantum theoretical models to thought experiments and scenarios that yielded what they considered to be at the very least improbable, if not impossible, conclusions.

They postulated that given the formulas and mathematical models that described the current state of quantum theory, i.e. the description of a wave function encoding the probabilistic outcomes for a given subatomic system, if such a system were transformed into two systems, split apart if you will, by definition both systems would then be governed by the same wave function, and their subsequent behavior and states would remain related no matter what their separation in spacetime, violating one of the core tenets of classical physics, namely that no signal can travel faster than the speed of light. This was held to be mathematically true and consistent with quantum theory, although at the time it could not be validated via experiment.
They went on to show that if this is true, it implies that if you have a single-particle system that is split into two separate particles and subsequently measured, these two now separate and distinct particles would still be governed by the same wave function, and in turn would be subject to the same uncertainty principles outlined by Heisenberg; namely, a definite measurement of a quantity on the particle in system A fixes, or "correlates" with, the corresponding value for the particle in system B, even if the two systems have had no classical physical contact with each other and are light years apart.

But hold on a second, how could this be possible? How could you have two separate "systems", governed by the same wave function, or behavioral equation so to speak, such that no matter how far apart they were, and no matter how much time elapsed between measurements, a measurement in one system fundamentally correlated with (or anti-correlated with, the argument is the same) a measurement in the other system from which it is separate? They basically took the wave function theory, which governs the behavior of quantized particles, and its corresponding implication of uncertainty as outlined most notably by Heisenberg, and extended it to multiple, associated and related subatomic systems, related by and governed by the same wave function despite their separation in space (and time), yielding a very awkward and somewhat unexplainable result, at least unexplainable in terms of classical physics.

The question they raised boiled down to this: how could you have two unconnected, distant systems whose measurements or underlying structure depended upon each other in a well-defined, mathematically and (theoretically at the time, but subsequently verified via experiment) empirically measurable way? Does that imply that these systems are communicating in some way, either explicitly or implicitly? If so, that would seem to call into question the principle of the fixed speed of light that was core to Relativity Theory. The other alternative seemed to be that the theory was incomplete in some way, which was Einstein's view. Were there "hidden", yet to be discovered variables that governed the behavior of quantum systems, what came to be known in the literature as hidden variable theories?

If it were true, and in the past half century or so many experiments have verified that it is, it is at the very least extremely odd behavior, or perhaps better put reflects very odd characteristics, certainly inconsistent with the prevailing theories of physics, or at least with the characteristics we had grown accustomed to expect in our descriptions of "reality". Are these two subsystems, once correlated, communicating with each other? Is there some information being passed between them that violates the speed-of-light boundary that forms the cornerstone of modern, classical physics? This seems unlikely, and it is most certainly something Einstein felt uncomfortable with. This "spooky action at a distance", as Einstein referred to it, seemed literally to defy the laws of physics. But the alternative appeared to be that the notion of what we consider to be "real", at least as it was classically defined, would need to be modified in some way to take into account this correlated behavior between particles or systems physically separated beyond classical boundaries.
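A standard textbook example makes the structure of the EPR argument concrete: a pair of spin-½ particles prepared together in the so-called singlet state and then separated. In the ket notation used elsewhere in this document, the joint state is

\left|\Psi\right\rangle = \frac{1}{\sqrt{2}}\left(\left|\uparrow\right\rangle_{A}\left|\downarrow\right\rangle_{B} - \left|\downarrow\right\rangle_{A}\left|\uparrow\right\rangle_{B}\right)

Neither particle has a definite spin of its own, yet a measurement finding particle A "up" guarantees that particle B will be found "down" along the same axis, however far apart the two have traveled; this is exactly the kind of correlation the EPR authors found so troubling.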
From Einstein's perspective, two possible explanations for this behavior presented themselves: 1) either there existed some model of the behavior of the interacting systems/particles that was still undiscovered, so-called hidden variables, or 2) the notion of locality, or perhaps more aptly the tenet of local determinism (which Einstein and others associated directly and unequivocally with reality), which underpinned all of classical physics, had to be drastically modified if not completely abandoned.

The language Einstein seemed to prefer for the first alternative, however, was not that there were hidden variables per se, but that quantum theory as it stood in the first half of the twentieth century was incomplete. That is to say, some variable, coefficient or hidden force was missing from quantum theory, and this missing piece was the driving force behind the correlated behavior of the attributes of these physically separate particles, particles separated beyond any classical means of communication. It was this incompleteness option that Einstein preferred, unwilling as he was to consider the idea that the notion of locality was not absolute. Ironically enough, hindsight being twenty-twenty and all, Einstein had just postulated with Relativity Theory that there was no such thing as absolute truth, or absolute reality, on the macroscopic and cosmic physical plane, so one might think he would have been more open to relaxing this requirement in the quantum realm; apparently not, which speaks to the complexity and subtlety of the implications of quantum theory even for some of the greatest minds of the time.

Probably the most widely known metaphor illustrating Einstein's and others' criticism of quantum theory is the thought experiment, or paradox as it is sometimes called, known as Schrödinger's cat, or the Schrödinger's cat paradox.[12] In this thought experiment, which according to tradition emerged out of discussions between Schrödinger and Einstein just after the EPR paper was published, a cat is placed in a fully sealed and enclosed box with a radioactive source whose rate of decay is measurable and quantifiable, and small enough that in the course of the experiment perhaps a single atom decays, perhaps none. In the box with the cat is an internal monitor which detects any radioactive decay in the box, and a flask of poison that is shattered, killing the cat, if the monitor is triggered. According to quantum theory, which governs the rate of decay with a random probability distribution over time, it is impossible to say at any given moment, until the box is opened in fact, whether the cat is dead or alive. But how could this be? The cat is in an undefined state until the box is opened? There is nothing definitive we can say about the state of the cat independent of actually opening the box? This calls into question, bringing the analogy up to the macroscopic level, whether or not, according to quantum theory, reality can be defined independent of observation (or measurement), here within the context of the cat, the box, the radioactive particle and its associated monitor.

In the course of developing this thought experiment, Schrödinger coined the term entanglement[13], one of the great still-unresolved mysteries, or perhaps better put paradoxes, of quantum theory and quantum mechanics to this day.
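In the same ket notation as before, the state of the sealed box just before it is opened can be written as an entangled superposition of the atom and the cat:

\left|\Psi\right\rangle = \frac{1}{\sqrt{2}}\left(\left|undecayed\right\rangle\left|alive\right\rangle + \left|decayed\right\rangle\left|dead\right\rangle\right)

Neither term describes the cat by itself; the fate of the cat and the state of the atom can no longer be written independently of one another, which is precisely the condition Schrödinger's new term was coined to name.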
Mysterious not in the sense of whether the principle actually exists; entanglement has been verified in a variety of physical experiments along the lines outlined in the EPR paper and illustrated in the cat paradox, and is accepted as scientific fact in the physics community. It is a mystery in the sense of how it can be possible at all, given that it seems, at least on the face of it, to fly in the face of classical Newtonian mechanics, almost of determinism itself. Schrödinger himself is probably the best person to turn to in order to understand quantum entanglement, and he describes it as follows:

When two systems, of which we know the states by their respective representatives, enter into temporary physical interaction due to known forces between them, and when after a time of mutual influence the systems separate again, then they can no longer be described in the same way as before, viz. by endowing each of them with a representative of its own. I would not call that one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought.[14]

The principle of entanglement calls into question what is known as local realism: "local" in the sense that all the behaviors and data of a given system are determined by the qualities or attributes of the objects within that system, bounded by spacetime as defined by Newtonian mechanics and Relativity, or by some force acting upon said system; and "real" in the sense that the system itself exists independent of observation or of the apparatus/elements of observation.

Taking the non-local explanation to the extreme, something which has prompted quite a bit of what can reasonably be called hysterical reaction in some academic and pseudo-academic communities even to this day, the proven correlation of pairs of entities separated far enough in spacetime that the speed-of-light boundary cannot be crossed, this notion of entanglement or "action at a distance" as it is sometimes called, would seem to call all of classical physics into question. Einstein specifically called out these "spooky action at a distance" theories as defunct, so firmly did he believe in the invariable tenets of Relativity, and it is hard to argue with his position quite frankly, because correlation does not necessarily imply communication. But if local realism and its underlying tenets of determinism are to be held fast to, then where does that leave quantum theory?

This problem became more crystallized, or well defined, in 1964, when the physicist John Stewart Bell (1928-1990), in his seminal paper entitled "On the Einstein Podolsky Rosen Paradox", took the EPR argument one step further and proved mathematically that no hidden parameter or variable theory that reproduces all of the predictions of quantum mechanics can also be consistent with locality[15]. In other words, Bell asserted that the hidden variable hypothesis, or at the very least a broad category of hidden variable hypotheses, was incompatible with quantum theory itself unless the notion of locality was abandoned, or at least relaxed to some extent. In his own words:

In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant.[16]

This assertion is called Bell's theorem, and it posits that quantum mechanics and the concept of locality, which again states that an object is influenced directly only by its immediate surroundings and is a cornerstone of the theories of Newton and Einstein regarding the behavior of matter and the objective world, are mathematically incompatible and inconsistent with each other, providing further impetus, as it were, for the view that this classical notion of locality was in need of closer inspection, modification, or perhaps even abandonment entirely.
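To see in numbers what Bell's inequality does, consider the CHSH form of the argument (a later refinement due to Clauser, Horne, Shimony and Holt): any local hidden variable model constrains a particular combination S of four correlation measurements to |S| ≤ 2, while quantum mechanics, using the standard singlet-state correlation E(a, b) = −cos(a − b), predicts values up to 2√2. The short sketch below simply evaluates the quantum prediction; the analyzer angles are the conventional textbook choices that maximize the violation, not drawn from any particular experiment.

```python
import numpy as np

# Quantum-mechanical correlation for spin measurements on a singlet pair
# with analyzers set at angles a and b: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# Two measurement settings per side, at the angles that maximize the
# quantum violation of the CHSH bound.
a1, a2 = 0.0, np.pi / 2            # first observer's settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # second observer's settings

# CHSH combination: every local hidden-variable model satisfies |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"|S| = {abs(S):.4f} (local bound 2, quantum maximum 2*sqrt(2) = {2*np.sqrt(2):.4f})")
```

The computed |S| of about 2.83 exceeds the local bound of 2; this is the mathematical content of the claim that no local hidden variable theory can reproduce all of quantum mechanics' predictions, and experiments such as Aspect's, discussed further below, measure essentially this quantity.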
Although there still exists some debate among physicists as to whether there is enough experimental evidence to prove out Bell's theorem beyond a shadow of a doubt, it is broadly accepted in the scientific community that this property of entanglement exists beyond classical physical boundaries. However, the question of whether all types of hidden variable theories are ruled out by Bell's theorem appears to be a legitimate one and is still up for debate, and perhaps this loophole more than any other is the path which Bohm and Hiley take with their Causal, or Ontological, Interpretation of quantum theory (more below). Criticisms of Bell's theorem and the related experiments aside, however, if you believe quantum theory, and you would be hard pressed not to at this point, you must conclude that the theory is inconsistent with Relativity in some way, a rather disconcerting and problematic conclusion for the twentieth-century physicist, to say the least, and a problem which plagues, and motivates, many theoretical physicists to this day.

Quantum theory then, as expressed in Bell's theorem, Heisenberg's uncertainty principle and this idea of entanglement, asserts that there exists a level of interconnectedness between physically disparate systems that defies, at some level, the classical physics notion of deterministic locality, pointing either to the incompleteness of quantum theory or to the requirement of some non-trivial modification of the concept of local realism which has underpinned classical physics for the last few centuries if not longer. In other words, the implication of quantum theory, a theory with very strong predictive and experimental evidence backing the soundness of the underlying mathematics, is that something else is at work connecting the states of particles or things at the subatomic scale, something that cannot as yet be altogether described, pinpointed, or explained. Einstein himself still struggled with this notion late in his life, writing in 1948:

…The following idea characterizes the relative independence of objects far apart in space, A and B: external influence on A has no direct influence on B; this is known as the Principle of Local Action, which is used consistently only in field theory. If this axiom were to be completely abolished, the idea of the existence of quasi enclosed systems, and thereby the postulation of laws which can be checked empirically in the accepted sense, would become impossible….[17]

Interpretations of Quantum Theory: Back to First Philosophy

There is no question as to the soundness of the mathematics behind quantum theory, and there is now a very large body of experimental evidence supporting it, including empirical evidence not only of the particle behavior it intends to describe (as in the two-slit experiment, for example), but also experimental evidence bearing on Bell's theorem and the EPR paradox. What is somewhat less clear, however, and what arguably belongs more to the world of metaphysics and philosophy than to physics, is how quantum theory is to be interpreted as a representation of reality, given the state of affairs it introduces. What does quantum theory tell us about the world we live in, irrespective of the soundness of its predictive power?
This is a question that physicists, philosophers and even theologians have struggled with ever since the theory gained wide acceptance and prominence in the scientific community in the 1930s. There are many interpretations of quantum theory, but there are three in particular that Charlie thought deserved attention, due primarily to a) their prevalence or acceptance in the academic community, and/or b) their impact on scientific or philosophical inquiry into the limits of quantum theory. These are the standard, orthodox interpretation, most commonly referred to as the Copenhagen Interpretation, which confines the theoretical boundaries of interpretation to the experiment itself; the Many-Worlds (or Many-Minds) interpretation, which explores the boundaries of the nature of reality, proposing in some extreme variants the simultaneous existence of multiple universes/realities; and the Causal Interpretation, also sometimes called de Broglie-Bohm theory or Bohmian mechanics, which extends the theory to include the notion of quantum potential and abandons the classical notion of locality while still preserving objective realism and determinism.[18]

The most well established and most commonly accepted interpretation of quantum theory, the one most often taught in schools and textbooks and the one against which most alternative interpretations are compared, is the Copenhagen Interpretation[19]. The Copenhagen interpretation holds that the theories of quantum mechanics do not yield a description of an objective reality, but deal only with sets of probabilistic outcomes of experimental values, borne from experiments observing or measuring various aspects of energy quanta, entities that do not fit neatly into classical interpretations of mechanics. The underlying tenet here is that the act of measurement itself, the observer (or by extension the apparatus of observation), causes the set of probabilistic outcomes to converge on a single outcome, a feature of quantum mechanics commonly referred to as wavefunction collapse, and that any additional interpretation of what might actually be going on, i.e. the underlying reality, defies explanation; indeed, the pursuit of such an interpretation is inconsistent with the fundamental mathematical tenets of the theory itself.

In this interpretation of quantum theory, reality (used here in the classical sense of the term, as something existing independent of the observer) is a function of the experiment: it is defined as a result of the act of observation and has no meaning independent of measurement. In other words, reality in the quantum world from this point of view does not exist independent of observation; or, put somewhat differently, the manifestation of what we think of or define as "real" is intrinsically tied to the act of observation of the system itself.

Niels Bohr was one of the strongest proponents of this interpretation, an interpretation which refuses to associate any metaphysical implications with the underlying physics. He held that given this proven interdependence between that which is being observed and the act of observation, no metaphysical interpretation can in fact be extrapolated from the theory; it is, and can only be, a tool to describe and measure the states and particle/wave behavior in the subatomic realm that arise as the result of some well-defined experiment, i.e.
that attempting to make some determination as to what quantum theory actually meant violated the fundamental tenets of the theory itself. From Bohr's perspective, the inability to draw conclusions beyond the results of the experiments the theory covers was a necessary consequence of the theory's basic tenets, and that was the end of the matter. This view can be seen as the logical conclusion of the notion of complementarity, one of the fundamental and intrinsic features of quantum mechanics that makes it so mysterious and so hard to describe or understand in classical terms.

Complementarity, which is closely tied to the Copenhagen interpretation, expresses the notion that in the quantum domain the results of experiments, the values yielded (or observables), are fundamentally tied to the act of measurement itself, and that in order to obtain a complete picture of the state of any given system, bounded as it is by the uncertainty principle, one needs to run multiple experiments across the system, each result in turn rounding out the notion of the state, or reality, of said system. These combined features of the theory say something profound about the underlying uncertainty of the theory itself. Perhaps complementarity can be viewed as the twin of uncertainty, or its inverse postulate. Bohr summarized this very subtle and yet very profound notion of complementarity in 1949 as follows:

…however far the [quantum physical] phenomena transcend the scope of classical physical explanation, the account of all evidence must be expressed in classical terms. The argument is simply that by the word "experiment" we refer to a situation where we can tell others what we have learned and that, therefore, the account of the experimental arrangements and of the results of the observations must be expressed in unambiguous language with suitable application of the terminology of classical physics. This crucial point…implies the impossibility of any sharp separation between the behavior of atomic objects and the interaction with the measuring instruments which serve to define the conditions under which the phenomena appear…. Consequently, evidence obtained under different experimental conditions cannot be comprehended within a single picture, but must be regarded as complementary in the sense that only the totality of the phenomena exhausts the possible information about the objects.[20]

Complementarity was in fact the core principle driving the existence of the uncertainty principle from Bohr's perspective; it was the underlying characteristic and property of the quantum world that captured, at some level, its very essence. And complementarity, taken to its logical and theoretical limits, did not allow or provide any framework for describing or defining the real world outside of the domain with which it dealt, namely the measurement values or results, the measurement instruments themselves, and the act of measurement itself.

Another possible response to the uncertainty implicit in quantum theory was that perhaps all possible outcomes described in the wave function do in some respect manifest, even if they cannot all be seen or perceived in our objective reality. This premise underlies an interpretation of quantum theory that has gained some prominence in the last few decades, especially within the computer science and computational complexity fields, and has come to be known as the Many-Worlds interpretation.
The original formulation of this theory was laid out by Hugh Everett in his PhD thesis in 1957, in a paper entitled The Theory of the Universal Wave Function, wherein he referred to the interpretation not as the many-worlds interpretation but as the Relative-State formulation of quantum mechanics (more on this distinction below); the theory was subsequently developed and expanded upon by several authors, and the term many-worlds stuck.[21]

In his original exposition of the theory, Everett begins by calling out some of the problems with the original, or classic, interpretation of quantum mechanics: specifically, what he and other members of the physics community believed to be the artificial creation of the wavefunction collapse construct to explain the transition from quantum, uncertain behavior to deterministic behavior, as well as the difficulty this interpretation had in dealing with systems consisting of more than one observer. These were the main drivers for an alternative viewpoint on the interpretation of quantum theory, or what he referred to as a metatheory, given that the standard interpretation could be derived from it.

Although Bohr, and presumably Heisenberg and von Neumann, whose collective views on the interpretation of quantum theory make up what is now commonly referred to as the Copenhagen Interpretation, would no doubt explain away these seemingly contradictory and inconsistent problems as out of scope of the theory itself (i.e. quantum theory is a theory that is intellectually and epistemologically bound by the experimental apparatus and its results, which provide the scope of the underlying mechanics), Everett found this view lacking, as it fundamentally prevents us from any true explanation of what the theory says about "reality", the real world as it were: a world considered to be governed by the laws of classical physics, where things and objects exist independent of observers and have real, static, measurable and definable qualities, a world fundamentally incompatible with the stochastic and uncertain characteristics that govern the behavior of "things" in the subatomic or quantum realm.

The aim is not to deny or contradict the conventional formulation of quantum theory, which has demonstrated its usefulness in an overwhelming variety of problems, but rather to supply a new, more general and complete formulation, from which the conventional interpretation can be deduced.[22]

Everett starts by making the following basic assumptions, from which he derives his somewhat counterintuitive but now relatively widely accepted formulation of quantum theory: 1) all physical systems, large or small, can be described as states within Hilbert space, the fundamental geometric framework upon which quantum mechanics is constructed; 2) the concept of an observer can be abstracted to a machine-like entity with access to unlimited memory, which stores a history of previous states, or previous observations, and has the ability to make deductions, or associations, regarding actions and behavior based solely upon this memory and this simple deductive process, thereby incorporating observers and acts of observation (i.e.
measurement) completely into the model; and 3) with assumptions 1 and 2, the entire state of the universe, which includes the observers within it, can be described in a consistent, coherent and fully deterministic fashion, without need of the notion of wavefunction collapse, or of any additional assumptions for that matter.

Everett makes what he calls a simplifying assumption to quantum theory, i.e. removing the need for the notion of wavefunction collapse, and assumes the existence of a universal wave function which accounts for and describes the behavior of all physical systems and their interactions in the universe, absorbing the observer and the act of observation into the model, observers being simply another form of quantum state that interacts with the environment. Once these assumptions are made, he can then abstract the concept of measurement as just an interaction between quantum systems, all governed by this same universal wave function. In Everett's metatheory, the notion of what an observer means and how observers fit into the overall model is fully defined, and the challenge stemming from the seemingly arbitrary notion of wavefunction collapse is resolved.

In Everett's view, there exists a universal wavefunction which corresponds to an objective, deterministic reality, and the notion of wavefunction collapse as put forth by von Neumann (and reflected in the standard interpretation of quantum mechanics) represents not a collapse so to speak, but the manifest realization of one possible outcome of measurement in our "reality".

From Everett's perspective, if you take what can be described as a literal interpretation of the wavefunction as the overarching description of reality, this implies that the rest of the possible states reflected in the wave function of a system do not cease to exist with the act of observation, with the collapse of the quantum mechanical wave that describes said system's state in Copenhagen nomenclature; rather, these other states have some existence that persists but is simply not perceived by us. In his own words, and this is a subtle yet important distinction between Everett's view and that of subsequent proponents of the many-worlds interpretation, they remain uncorrelated with the observer and therefore do not exist in our manifest reality.

We now consider the question of measurement in quantum mechanics, which we desire to treat as a natural process within the theory of pure wave mechanics. From our point of view there is no fundamental distinction between "measuring apparata" and other physical systems. For us, therefore, a measurement is simply a special case of interaction between physical systems – an interaction which has the property of correlating a quantity in one subsystem with a quantity in another.[23]

This implies, of course, that these unperceived states have some semblance of reality, that they do in fact exist as possible realities, realities thought to have varying levels of "existence" depending upon which version of the many-worlds interpretation you adhere to.
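Everett's notion of measurement-as-correlation has a compact expression in the notation of the theory: after a system and an observer interact, the universal wave function is a superposition of branches, each pairing one system state with the observer state that has recorded it:

\left|\Psi\right\rangle = \sum_{i} c_{i} \left|s_{i}\right\rangle \left|O_{i}\right\rangle

Each branch is internally consistent, no branch is singled out as "the" outcome, and nothing in the evolution ever removes the other terms; the labels s_i and O_i here are generic placeholders for system and observer states, following the standard textbook presentation rather than Everett's own notation.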
With DeWitt and Deutsch, for example, a more literal, or "actual" you might say, interpretation of Everett's original theory is taken, wherein these other states, these other realities or multiverses, do in fact physically exist even though they cannot be perceived or validated by experiment.[24] This is a more literal interpretation of Everett's thesis, however, because nowhere does Everett explicitly state that these other universes actually exist; what he does say on the matter seems to imply the existence of "possible" or potential universes reflecting non-measured or non-actualized states of physical systems, not that these unrealized outcomes actually exist in some physical universe:

In reply to a preprint of this article some correspondents have raised the question of the "transition from possible to actual," arguing that in "reality" there is—as our experience testifies—no such splitting of observer states, so that only one branch can ever actually exist. Since this point may occur to other readers the following is offered in explanation. The whole issue of the transition from "possible" to "actual" is taken care of in the theory in a very simple way—there is no such transition, nor is such a transition necessary for the theory to be in accord with our experience. From the viewpoint of the theory all elements of a superposition (all "branches") are "actual," none any more "real" than the rest. It is unnecessary to suppose that all but one are somehow destroyed, since all the separate elements of a superposition individually obey the wave equation with complete indifference to the presence or absence ("actuality" or not) of any other elements. This total lack of effect of one branch on another also implies that no observer will ever be aware of any "splitting" process. Arguments that the world picture presented by this theory is contradicted by experience, because we are unaware of any branching process, are like the criticism of the Copernican theory that the mobility of the earth as a real physical fact is incompatible with the common sense interpretation of nature because we feel no such motion. In both cases the argument fails when it is shown that the theory itself predicts that our experience will be what it in fact is. (In the Copernican case the addition of Newtonian physics was required to be able to show that the earth's inhabitants would be unaware of any motion of the earth.)[25]

According to this view, the act of measurement of a quantum system, with its associated principles of uncertainty and entanglement, is simply the reflection of this splitting off of the observable universe from a higher-order multiverse in which all possible outcomes and alternate histories have the potential to exist. The radical form of the many-worlds view is that these potential, unmanifest realities do in fact exist, whereas Everett seems to go only so far as to imply that they "could" exist and that conceptually their existence should not be ignored.

As hard as the multiverse interpretation of quantum mechanics might be to wrap your head around, it does represent an elegant solution to some of the challenges raised by the broader physics community against quantum theory, most notably the EPR paradox and its extension to more everyday examples as illustrated in the infamous Schrödinger's cat paradox.
It does, however, raise some significant questions as to Everett's theory of mind and subjective experience, a notion he glosses over somewhat by abstracting observers into simple machines of sorts, but one that nonetheless serves as a primary building block of his metatheory[26].

Another interpretation of these strange and perplexing findings of quantum mechanics in the early twentieth century is Bohmian Mechanics, sometimes also referred to as de Broglie-Bohm theory, pilot-wave theory, or the Causal Interpretation of quantum theory. The major contributors to the interpretation were initially Louis de Broglie, who originally developed pilot-wave theory in the early part of the twentieth century but dropped the work after he got stuck on how to extend it to multi-body systems, and, most prominently, David Bohm, who fully developed the theory in the second half of the twentieth century together with the British physicist Basil Hiley. Bohmian mechanics is most fully developed in Bohm and Hiley's book The Undivided Universe, first published in 1993, although much of its contents and the underlying theory had been thought out and published in papers on the topic since the 1950s. In their book they refer to their interpretation not as the Causal Interpretation, or even as de Broglie-Bohm theory, but as the Ontological Interpretation of quantum theory, given that from their perspective it gives the only complete causal and deterministic model of quantum theory.

David Bohm was an American-born British physicist of the twentieth century who made a variety of contributions to theoretical physics, but who also invested much time and thought into the metaphysical implications of quantum mechanics, and into metaphysics and philosophy in general, a topic most theoretical physicists have steered away from, presumably due to the adverse effects it could have on their academic pursuits in physics proper, effects which Bohm himself encountered to some extent throughout his career. In this respect Bohm was a bit of a rebel relative to his peers in the academic community, because he extended the hard science of theoretical physics into the more abstract realm of descriptions of reality as a whole, incorporating first philosophy back into the discussion so to speak, but doing so with the tools of hard mathematics, making his interpretation very hard to ignore, or at least making it impossible to ignore its implications from a theoretical physics perspective.

Bohm, like many other physicists (Everett, for example), was dissatisfied with the mainstream interpretation of quantum mechanics as represented by the Copenhagen school of thought, and in 1952 published an alternative theory which extended the pilot-wave theory de Broglie had published some twenty-five years prior, applying its basic principles to multi-body quantum systems and developing the more robust mathematical foundation that pilot-wave theory had previously lacked. He then, along with Hiley, further extended the underlying mathematics of quantum theory to include a concept called quantum potential, a principle that provided a deterministic pillar within the probabilistic and stochastic nature of the standard interpretation, the actual positions and momenta of the underlying particles in question being the so-called hidden variables.
De Broglie's pilot-wave theory of 1927 affirms the existence of subatomic particles, or corpuscles as they were then called, but views these particles not as independently existing entities but as integrated into an undercurrent, or wave, which gives the particles their wavelike characteristics of diffraction and interference while still explaining their particle-like behavior as illustrated in certain experimental results. This represented a significant divergence from the standard interpretations of quantum theory and was not well received, hence the silence on the advancement of the theory by the physics community for the next twenty-five years or so. In his 1927 paper on the topic, de Broglie describes pilot-wave theory as follows:

One will assume the existence, as distinct realities, of the material point and of the continuous wave represented by the [wave function], and one will take it as a postulate that the motion of the point is determined as a function of the phase of the wave by the equation. One then conceives the continuous wave as guiding the motion of the particle. It is a pilot wave.[27]

De Broglie's pilot-wave theory was dismissed by the broader academic community when it was presented, however, mainly because its implications were understood to describe only single-body systems, and no doubt also because the common interpretation of quantum mechanics postulated that nothing could be said about the "existence" of a subatomic particle until it was measured; the matter was not further pursued until Bohm picked the theory back up some twenty-five years later. Bohm expanded the theory to apply to multi-body systems, giving it a more solid scientific grounding and providing a fully developed framework for further consideration by the broader physics community.

Bohmian Mechanics, the more mature form into which pilot-wave theory later evolved, provides a mathematical and metaphysical framework within which subatomic reality can indeed be thought of as actually existing independent of any observer or act of measurement, a significant departure from the standard interpretations of the theory that were prevalent for most of the twentieth century (in philosophic terms, it is a fully realist interpretation). The theory is consistent with Bell's theorem because it abandons the notion of locality, and it is also fully deterministic, positing that once the values of its hidden variables are known, all future states, and even past states, can be calculated and known as well, consistent in this sense with classical physics.[28]

Bohmian Mechanics falls into the category of hidden variable theories: it lays out a description of reality in the quantum realm in which the wave function is supplemented by the actual configuration of the particles themselves. In other words, it states that there are in fact hidden variables which dictate the actual position, momentum and so on of the particles of the subatomic world, and it outlines a factor it refers to as quantum potential which governs or guides the behavior and description of a quantum system and determines its future and past states, irrespective of whether or not the quantum system is observed or measured.
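In the standard presentation of the theory, the mathematics of the quantum potential follows from rewriting Schrödinger's wave function in polar form, \psi = R\, e^{iS/\hbar}, with R the amplitude and S the phase. The particle then follows a perfectly definite trajectory guided by the phase, v = \nabla S / m, and the classical equation of motion picks up one extra term, the quantum potential:

Q = -\frac{\hbar^{2}}{2m}\frac{\nabla^{2}R}{R}

It is this Q term, which depends on the form of the wave rather than its intensity, that carries the distinctly non-classical, and non-local, character of the theory.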
Along with being fully deterministic, the theory also explains away the notion of wavefunction collapse as put forth by von Neumann, positing that the pilot wave behaves according to the stochastic principles of Schrödinger's wave function but that there is some element of intelligence, or active information, involved in the behavior of the underlying wave/particle. In other words, from their perspective the wave/particle knows about its environment and behaves in a pseudo-intelligent manner (they stay away from the word intelligence, but Charlie couldn't see any other way to describe what it is that they meant to say). In two-slit experiment parlance, it knows whether one or both of the slits are open, and it moves, so to speak, with this knowledge in mind.

According to Bohm, one of the motivations for exploring the possibility of a fully deterministic/causal extension of quantum theory was not necessarily that he believed it to be the right interpretation, the correct one, but to show the very possibility of such theories, the existence of which had been cast into serious doubt (most famously by von Neumann's purported impossibility proof of the 1930s):

… it should be kept in mind that before this proposal was made there had existed the widespread impression that no conceptions of hidden variables at all, not even if they were abstract, and hypothetical, could possibly be consistent with the quantum theory.[30]

Bohmian mechanics is consistent with Bell's theorem, which rules out hidden variables only in theories that assume local realism, i.e. that all objects or things are governed by, and behave according to, the principles of classical physics, bound by the constraints of Relativity and the fixed speed of light, an assumption that has been shown not to hold in quantum mechanics, causing of course much consternation in the physics community and calling classical realism in general into question.[31]

Bohmian Mechanics (or the Ontological Interpretation of quantum theory, the terminology Bohm and Hiley adopt to describe their hypothesis of what is actually happening in the quantum realm) agrees with all of the predictions and models of quantum mechanics as developed by Bohr, Heisenberg and von Neumann (the orthodox Copenhagen Interpretation), but extends the model with the notion of quantum potential, develops a metaphysical notion of active information which guides the subatomic particle(s), and makes nonlocality explicit, abandoning something which Einstein held to be absolute and immovable. With respect to the importance of the development of Bohmian mechanics, at least from a theoretical and mathematical perspective, even if you do not want to believe the interpretation, Bell himself (1987) had this to say:

But in 1952 I saw the impossible done. It was in papers by David Bohm. Bohm showed explicitly how parameters could indeed be introduced, into nonrelativistic wave mechanics, with the help of which the indeterministic description could be transformed into a deterministic one. More importantly, in my opinion, the subjectivity of the orthodox version, the necessary reference to the "observer," could be eliminated.

Bohmian Mechanics' uniqueness lies not only in its yielding to the presumption of non-locality (which was and is consistent with experimental results showing that there is in fact a strong degree of correlation between physically separated, once-integrated quantum systems, i.e. systems that are entangled, what Einstein perhaps misappropriately referred to as "spooky action at a distance"), but also in its proof that hidden variable type theories are in fact mathematically possible and still consistent with the basic tenets of quantum mechanics, the latter point having been seriously called into question.
In other words, what Bohmian mechanics calls our attention to quite directly is that there are metaphysical assumptions about reality in general, fundamentally non-classical in nature, that must be accounted for when interpreting quantum theory: the existence of what Bell refers to as a "multidimensional configuration space" underlying the correlation of entangled particles/systems. That is the only way to explain how once-integrated but subsequently separated quantum systems can be correlated in such a mathematically consistent and predictable way, behavior initially described by EPR as a natural theoretical extension of quantum theory in the first half of the twentieth century and subsequently proven experimentally in the latter part of the century by Aspect[33], among others.

And it was these same quantum systems, whose behavior was modeled so successfully by quantum mechanics, that in some shape or form constituted the basic building blocks of the entire "classically" physical world. This latter fact could not be denied, and yet the laws and theorems developed to describe this behavior were (and still are, for that matter) fundamentally incompatible with classical physics and its underlying assumptions about what is "real" and about how the objects of reality behave and relate to each other.[34] Although the orthodox interpretation of quantum theory would have us believe that we can draw no metaphysical conclusions from what quantum mechanics tells us, that it is simply a tool for deriving values or observables from experimental results, Bohmian Mechanics shows us that this interpretation, albeit consistent and fully coherent, is lacking in many respects, and that a new perspective is required even if the Bohmian view is rejected.

Bohmian Mechanics, and to an extent Everett's Relative State formulation of quantum mechanics as well, both extend well beyond the laws of classical physics to round out or complete their theories, both explicitly drawing on notions of metaphysics and on the existence of some sort of underlying reality in the subatomic realm, and this is where they depart significantly from the standard Copenhagen interpretation and the view most rigorously defended by Bohr. The Copenhagen view holds that quantum theory tells us about the measurement of observables within the context of the quantum world; it is an empirical measuring tool and nothing more, and, further, that is all that can be extrapolated from it, by definition. There is no metaphysics explicit or implicit in the theory, and any epistemological interpretation is ruled out. Bohmian Mechanics and Everett's Relative State formulation (and by association the various Many-Worlds interpretations that stemmed from it through DeWitt, Deutsch and others) attempt to explain what is really happening in the quantum realm in a manner consistent with the underlying model of behavior and the prediction of experimental results; some venture into metaphysics, Aristotle's first philosophy, is required in order to do this, given that some departure from the assumptions of classical physics is required.
In the Relative State formulation, the wave function of Schrödinger is postulated to be a true representation of all of reality, abstracted to include observers at all levels, observers roughly corresponding to machines that can store the results of measurements (quantum states) and apply some level of deductive reasoning to correlate states and make subsequent observations. From this perspective, the wave function represents perspectives (this is not the term that Everett uses, but the one Charlie preferred) on a correlated reality that comes into existence between one or many quantum system states/observers, all definable within the geometry of Hilbert space rather than the Cartesian space used in Newtonian mechanics (with an extra dimension of time in Relativity). Bohm (and Hiley) lay out an extension to the quantum theoretical mathematical model which is not only fully deterministic but also "real", not yielding to the Copenhagen view that reality in the quantum world exists only upon measurement; that is, a reality existing independent of any observation, albeit a fundamentally non-local reality, completely consistent with Bell's theorem. Both interpretations, however, and others in similar categories, like Bell's theorem itself, call into serious question the notion of local realism that sits at the center of the Newtonian mechanics which has driven scientific development for the last three hundred years.

One can put it quite succinctly by observing that no matter what school of interpretation you adhere to, at the very least the classical notion of local realism must be abandoned; one would be hard pressed to find someone with a good understanding of quantum theory who would dispute this. In other words, regardless of which interpretation is more attractive, or which one you adhere to, what cannot be ignored is that the classical conception of reality, as something with intrinsic properties that exist independent of observation and can be precisely measured in a fully deterministic and predictive way, the assumption that drove the developments of the Scientific Revolution and provided the underlying metaphysical framework for Newton, Einstein and others, was in need of serious revision.

[1] Without quantum mechanics we wouldn't have transistors, which are the cornerstone of modern computing.

[2] Our current ability to measure the size of these subatomic particles goes down to approximately 10^{-16} cm with currently available instrumentation, so at the very least we can say that measuring anything in the subatomic realm, most certainly the realm of the general constituents of basic atomic elements such as quarks or gluons for example, is very challenging to say the least. Even the measurement of the estimated size of an atom is not so straightforward, as the measurement is dictated by the circumference of the atom, a measurement that relies specifically on the size or radius of the "orbit" of the electrons of said atom, "particles" whose actual "location" cannot be "measured" in tandem with their momentum, per the standard tenets of quantum mechanics, both of which constitute what we consider measurement in the classic Newtonian sense.
[3] In some respects, even at the cosmic scale, there is still significant reason to believe that Relativity has room for improvement, as evidenced by what physicists call Dark Matter and Dark Energy, constructs created by theoretical physicists to describe matter and energy that they believe should exist according to Relativity Theory but the evidence for whose existence is as yet "undiscovered". For more on Dark Matter see http://en.wikipedia.org/wiki/Dark_matter and on Dark Energy see http://en.wikipedia.org/wiki/Dark_energy, both of which remain mysteries and lines of active research for modern-day cosmology.

[4] Quantum theory has its roots in this initial hypothesis by Planck, and in this sense he is considered by some to be the father of quantum theory and quantum mechanics. It is for this work on the discovery of "energy quanta" that Max Planck received the Nobel Prize in Physics in 1918, some 15 or so years after publishing.

[5] Einstein termed this behavior the photoelectric effect, and it is for this work that he won the Nobel Prize in Physics in 1921.

[6] The Planck constant was first described as the proportionality constant between the energy (E) of a photon and the frequency (ν) of its associated electromagnetic wave. This relation between the energy and frequency is called the Planck relation or the Planck-Einstein equation: E = hν.

[7] It is interesting to note that Planck and Einstein had a very symbiotic relationship toward the middle and end of their careers, and much of their work complemented and built off of each other's. For example, Planck is said to have contributed to the establishment and acceptance of Einstein's revolutionary concept of Relativity within the scientific community after it was introduced by Einstein in 1905, the theory of course representing a radical departure from the classical physical and mechanical models that had held up for centuries prior. It was through the collaborative work and studies of Planck and Einstein, in some sense, that the field of quantum mechanics and quantum theory is shaped how it is today: Planck defined the term quanta with respect to the behavior of elements in the realms of matter, electricity, gas and heat, and Einstein used the term to describe the discrete emissions of light, or photons.

[8] The double-slit experiment was first devised and used by Thomas Young in the early nineteenth century to display the wave-like characteristics of light. It wasn't until the technology was available to send a single "particle" (a photon or electron for example) through the apparatus that the wave-like and stochastically distributed nature of the underlying "particles" was discovered as well. http://en.wikipedia.org/wiki/Young%27s_interference_experiment

[9] Louis de Broglie, "The wave nature of the electron", Nobel Lecture, Dec 12th, 1929

[11] Max Born, "The statistical interpretation of quantum mechanics", Nobel Lecture, December 11, 1954.

[12] Erwin Schrödinger made many of the fundamental discoveries in the foundation of quantum mechanics, most notably the wave function which describes the behavior of subatomic particles. He shared some of the same concerns about standard interpretations of quantum mechanics with Einstein, as illustrated in the cat paradox for which he is so well known.

[13] Actually Verschränkung in German.

[14] Schrödinger, E. (1935) Discussion of Probability Relations Between Separated Systems. Proceedings of the Cambridge Philosophical Society, 31: pg. 555
[15] As later analysis and criticism has pointed out, Bell's theorem rules out hidden variable theories of a given genre rather than all hidden variable theories in toto.

[16] Bell, John (1964). "On the Einstein Podolsky Rosen Paradox". Physics 1 (3): 195–200.

[17] Albert Einstein, Quantum Mechanics and Reality ("Quanten-Mechanik und Wirklichkeit", Dialectica 2:320–324, 1948)

[18] For a more complete review of the multitude of interpretations of quantum theory, going well beyond this analysis, see http://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics.

[19] This mode of thought was formulated primarily by Niels Bohr and Werner Heisenberg, stemming from their collaboration in Copenhagen in 1927; hence the name. The term was further crystallized in writings by Heisenberg in the 1950s when addressing contradictory interpretations of quantum theory, and it still represents the most widely accepted, and most widely taught, interpretation of quantum mechanics in physics today.

[20] Niels Bohr (1949), "Discussions with Einstein on Epistemological Problems in Atomic Physics". In P. Schilpp. Albert Einstein: Philosopher-Scientist. Open Court.

[21] Everett was a graduate student at Princeton at the time he authored The Theory of the Universal Wave Function, and his advisor was John Wheeler, one of the most respected theoretical physicists of the latter half of the twentieth century. Incidentally, Everett did not continue in academia, and therefore subsequent interpretations and expansions of his theory were left to later authors and researchers, most notably Bryce DeWitt, who coined the term "many-worlds" in 1973, and later physicists such as David Deutsch among others. DeWitt's book on the subject, The Many-Worlds Interpretation of Quantum Mechanics, included several different viewpoints and research papers, along with a reprint of Everett's thesis. Deutsch's seminal work on the topic is probably his book The Fabric of Reality, published in 1997, in which he expands and extends the many-worlds interpretation to disciplines outside of physics such as philosophy and epistemology, computer science and quantum computing, and even biology and theories of evolution.

[22] From the Introduction of Everett's 1957 thesis, "Relative State" Formulation of Quantum Mechanics.

[23] Hugh Everett, III. Theory of the Universal Wave Function, 1957. Pg 53.

[24] Deutsch actually posits that proof of the "existence" of these other multi-verses is given by the wave interference pattern displayed in even the single-slit version of the classic double-slit experiment, as well as by some of the running-time enhancements driven by quantum computing, namely Shor's algorithm, which finds the prime factors of a given number exponentially faster on quantum computers than the best known algorithms on classical, 1-or-0 bit based machines. This claim is controversial to say the least, or at least remains an open point of contention among the broader physics community. See http://daviddeutsch.physics.ox.ac.uk/Articles/Frontiers.html for a summary of his views on the matter.

[25] Everett's 1957 thesis, "Relative State" Formulation of Quantum Mechanics, note on page 15, presumably in response to criticisms he received upon circulating the draft of his thesis to various distinguished members of the physics community, one of whom was Niels Bohr.
[26] Deutsch actually posits that proof of the "existence" of these other multi-verses is given by the wave interference pattern displayed in even the single-slit version of the classic double-slit experiment, as well as by some of the running-time enhancements driven by quantum computing, namely Shor's algorithm, which finds the prime factors of a given number exponentially faster on quantum computers than the best known algorithms on classical, 1-or-0 bit based machines. This claim is controversial to say the least, or at least remains an open point of contention among the broader physics community. See http://daviddeutsch.physics.ox.ac.uk/Articles/Frontiers.html for a summary of Deutsch's views on the matter, and Bohm and Hiley's chapter on Many-Worlds in their 1993 book The Undivided Universe: An Ontological Interpretation of Quantum Theory for a good overview of the strengths and weaknesses, mathematical and otherwise, of Everett's and DeWitt's different perspectives on the Many-Worlds approach.

[27] Louis de Broglie, "Wave mechanics and the atomic structure of matter and of radiation", Le Journal de Physique et le Radium, 8, 225 (1927)

[28] These features are why it is sometimes referred to as the Causal Interpretation: it outlines a fully causal description of the universe and its contents.

[29] From the Stanford Encyclopedia entry on Bohmian Mechanics by Sheldon Goldstein, quote from Bell, Speakable and Unspeakable in Quantum Mechanics, Cambridge: Cambridge University Press; 1987, p. 115.

[30] David Bohm, Wholeness and the Implicate Order, London: Routledge, 1980, pg. 81.

[31] In fact, it was Bohm's extension of de Broglie's work on pilot-wave theory that provided, at least to some degree, the motivation for Bell to come up with his theorem to begin with; see Bell's paper On the Einstein Podolsky Rosen Paradox from 1964, published some 12 years after Bohm published his adaptation of de Broglie's pilot-wave theory.

[32] From the Stanford Encyclopedia entry on Bohmian Mechanics, 2001, by Sheldon Goldstein; taken from Bell 1987, Speakable and Unspeakable in Quantum Mechanics, Cambridge University Press.

[33] Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell's Inequalities; Aspect, Grangier, and Roger, July 1982.

[34] There has been significant progress in the last decade or two in reconciling quantum theory and classical mechanics, most notably with respect to Newtonian trajectory behavior, what is described in the literature as accounting for the classical limit. For a good review of the topic see the article The Emergence of Classical Dynamics in a Quantum World by Tanmoy Bhattacharya, Salman Habib, and Kurt Jacobs, published in Los Alamos Science in 2002.
Delayed-choice quantum eraser

The delayed-choice quantum eraser experiment investigates a paradox. If a photon manifests itself as though it had come by a single path to the detector, then "common sense" (which Wheeler and others challenge) says that it must have entered the double-slit device as a particle. If a photon manifests itself as though it had come by two indistinguishable paths, then it must have entered the double-slit device as a wave. If the experimental apparatus is changed while the photon is in mid-flight, then the photon should reverse its original "decision" as to whether to be a wave or a particle. Wheeler pointed out that when these assumptions are applied to a device of interstellar dimensions, a last-minute decision made on Earth on how to observe a photon could alter a decision made millions or even billions of years ago. While delayed-choice experiments have confirmed the seeming ability of measurements made on photons in the present to alter events occurring in the past, this requires a non-standard view of quantum mechanics. If a photon in flight is instead interpreted as being in a so-called "superposition of states", i.e. as something that has the potentiality to manifest as a particle or wave but during its time in flight is neither, then there is no time paradox. This is the standard view, and recent experiments have supported it. [2] [3]

In the basic double-slit experiment, a beam of light (usually from a laser) is directed perpendicularly towards a wall pierced by two parallel slit apertures. If a detection screen (anything from a sheet of white paper to a CCD) is put on the other side of the double-slit wall (far enough for light from both slits to overlap), a pattern of light and dark fringes will be observed, a pattern that is called an interference pattern. Other atomic-scale entities such as electrons are found to exhibit the same behavior when fired toward a double slit. [4] By decreasing the brightness of the source sufficiently, the individual particles that form the interference pattern are detectable. [5] The emergence of an interference pattern suggests that each particle passing through the slits interferes with itself, and that therefore in some sense the particles are going through both slits at once. [6] :110 This is an idea that contradicts our everyday experience of discrete objects.

A well-known thought experiment, which played a vital role in the history of quantum mechanics (for example, see the discussion on Einstein's version of this experiment), demonstrated that if particle detectors are positioned at the slits, showing through which slit a photon goes, the interference pattern will disappear. [4] This which-way experiment illustrates the complementarity principle that photons can behave as either particles or waves, but not both at the same time. [7] [8] [9] However, technically feasible realizations of this experiment were not proposed until the 1970s. [10] Which-path information and the visibility of interference fringes are hence complementary quantities. In the double-slit experiment, conventional wisdom held that observing the particles inevitably disturbed them enough to destroy the interference pattern as a result of the Heisenberg uncertainty principle. However, in 1982, Scully and Drühl found a loophole around this interpretation.
[11] They proposed a "quantum eraser" to obtain which-path information without scattering the particles or otherwise introducing uncontrolled phase factors to them. Rather than attempting to observe which photon was entering each slit (thus disturbing them), they proposed to "mark" them with information that, in principle at least, would allow the photons to be distinguished after passing through the slits. Lest there be any misunderstanding, the interference pattern does disappear when the photons are so marked. However, the interference pattern reappears if the which-path information is further manipulated after the marked photons have passed through the double slits to obscure the which-path markings. Since 1982, multiple experiments have demonstrated the validity of the so-called quantum "eraser". [12] [13] [14]

A simple quantum-eraser experiment

A simple version of the quantum eraser can be described as follows: Rather than splitting one photon or its probability wave between two slits, the photon is subjected to a beam splitter. If one thinks in terms of a stream of photons being randomly directed by such a beam splitter to go down two paths that are kept from interaction, it would seem that no photon can then interfere with any other or with itself. However, if the rate of photon production is reduced so that only one photon is entering the apparatus at any one time, it becomes impossible to understand the photon as only moving through one path, because when the path outputs are redirected so that they coincide on a common detector or detectors, interference phenomena appear. This is similar to envisioning one photon in a two-slit apparatus: even though it is one photon, it still somehow interacts with both slits.

Figure 1. Experiment that shows delayed determination of photon path.

In the two diagrams in Fig. 1, photons are emitted one at a time from a laser symbolized by a yellow star. They pass through a 50% beam splitter (green block) that reflects or transmits 1/2 of the photons. The reflected or transmitted photons travel along two possible paths depicted by the red or blue lines. In the top diagram, it seems as though the trajectories of the photons are known: If a photon emerges from the top of the apparatus, it seems as though it had to have come by way of the blue path, and if it emerges from the side of the apparatus, it seems as though it had to have come by way of the red path. However, it is important to keep in mind that the photon is in a superposition of the paths until it is detected. The assumption above, that it 'had to have come by way of' either path, is a form of the 'separation fallacy'.

In the bottom diagram, a second beam splitter is introduced at the top right. It recombines the beams corresponding to the red and blue paths. By introducing the second beam splitter, the usual way of thinking is that the path information has been "erased"; however, we have to be careful, because the photon cannot be assumed to have 'really' gone along one or the other path. Recombining the beams results in interference phenomena at detection screens positioned just beyond each exit port. What issues to the right side displays reinforcement, and what issues toward the top displays cancellation. It is important to keep in mind, however, that the illustrated interferometer effects apply only to a single photon in a pure state.
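The interferometer arithmetic behind this description can be made concrete with a few lines of linear algebra. The following is a minimal sketch (the symmetric beam-splitter convention and the phase value are illustrative assumptions, not details taken from the text):

```python
import numpy as np

# A 50/50 beam splitter as a 2x2 unitary acting on the two path amplitudes
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

photon = np.array([1, 0], dtype=complex)   # photon enters through one input port

after_one = BS @ photon                    # first splitter only
print(np.abs(after_one)**2)                # [0.5 0.5]: each exit fires half the time

after_two = BS @ BS @ photon               # paths recombined at a second splitter
print(np.abs(after_two)**2)                # [0. 1.]: complete interference, one port only

# A phase delta inserted in one arm shifts the interference fringes:
delta = np.pi / 3                          # arbitrary illustrative phase
phase = np.diag([1, np.exp(1j * delta)])
out = BS @ phase @ BS @ photon
print(np.abs(out)**2)                      # [sin^2(delta/2), cos^2(delta/2)] = [0.25, 0.75]
```

With a single beam splitter the two detectors fire at random; with the paths recombined, the outcome becomes deterministic, which is only explicable if the single photon's amplitude traverses both arms.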
When dealing with a pair of entangled photons, the photon encountering the interferometer will be in a mixed state, and there will be no visible interference pattern without coincidence counting to select appropriate subsets of the data. [15]

Delayed choice

Elementary precursors to current quantum-eraser experiments such as the "simple quantum eraser" described above have straightforward classical-wave explanations. Indeed, it could be argued that there is nothing particularly quantum about this experiment. [16] Nevertheless, Jordan has argued on the basis of the correspondence principle that, despite the existence of classical explanations, first-order interference experiments such as the above can be interpreted as true quantum erasers. [17] These precursors use single-photon interference. Versions of the quantum eraser using entangled photons, however, are intrinsically non-classical. Because of that, in order to avoid any possible ambiguity concerning the quantum versus classical interpretation, most experimenters have opted to use nonclassical entangled-photon light sources to demonstrate quantum erasers with no classical analog. Furthermore, use of entangled photons enables the design and implementation of versions of the quantum eraser that are impossible to achieve with single-photon interference, such as the delayed-choice quantum eraser, which is the topic of this article.

The experiment of Kim et al. (1999)

Figure 2. Setup of the delayed-choice quantum-eraser experiment of Kim et al.; detector D0 is movable.

The experimental setup, described in detail in Kim et al., [1] is illustrated in Fig. 2. An argon laser generates individual 351.1 nm photons that pass through a double-slit apparatus (vertical black line in the upper left corner of the diagram). An individual photon goes through one (or both) of the two slits. In the illustration, the photon paths are color-coded as red or light blue lines to indicate which slit the photon came through (red indicates slit A, light blue indicates slit B). So far, the experiment is like a conventional two-slit experiment. However, after the slits, spontaneous parametric down-conversion (SPDC) is used to prepare an entangled two-photon state. This is done by a nonlinear optical crystal BBO (beta barium borate) that converts the photon (from either slit) into two identical, orthogonally polarized entangled photons with 1/2 the frequency of the original photon. The paths followed by these orthogonally polarized photons are caused to diverge by the Glan-Thompson prism.

One of these 702.2 nm photons, referred to as the "signal" photon (look at the red and light-blue lines going upwards from the Glan-Thompson prism), continues to the target detector called D0. During an experiment, detector D0 is scanned along its x axis, its motions controlled by a step motor. A plot of "signal" photon counts detected by D0 versus x can be examined to discover whether the cumulative signal forms an interference pattern. The other entangled photon, referred to as the "idler" photon (look at the red and light-blue lines going downwards from the Glan-Thompson prism), is deflected by prism PS, which sends it along divergent paths depending on whether it came from slit A or slit B. Somewhat beyond the path split, the idler photons encounter beam splitters BSa, BSb, and BSc that each have a 50% chance of allowing the idler photon to pass through and a 50% chance of causing it to be reflected. Ma and Mb are mirrors.

Figure 3. x axis: position of D0; y axis: joint detection rates between D0 and D1, D2, D3, D4 (R01, R02, R03, R04). R04 is not provided in the Kim article and is supplied according to their verbal description.

Figure 4. Simulated recordings of photons jointly detected between D0 and D1, D2, D3, D4 (R01, R02, R03, R04).

The beam splitters and mirrors direct the idler photons towards detectors labeled D1, D2, D3 and D4. Note that: Detection of the idler photon by D3 or D4 provides delayed "which-path information" indicating whether the signal photon with which it is entangled had gone through slit A or B. On the other hand, detection of the idler photon by D1 or D2 provides a delayed indication that such information is not available for its entangled signal photon. Insofar as which-path information had earlier potentially been available from the idler photon, it is said that the information has been subjected to a "delayed erasure". By using a coincidence counter, the experimenters were able to isolate the entangled signal from photo-noise, recording only events where both signal and idler photons were detected (after compensating for the 8 ns delay). Refer to Figs. 3 and 4. This result is similar to that of the double-slit experiment, since interference is observed when it is not known from which slit the photon originates, while no interference is observed when the path is known.

Figure 5. The distribution of signal photons at D0 can be compared with the distribution of bulbs on a digital billboard. When all the bulbs are lit, the billboard does not reveal any image; the image can be "recovered" only by switching off some bulbs. Likewise, the interference (or no-interference) pattern among the signal photons at D0 can be recovered only by "switching off" (ignoring) some signal photons, and which signal photons should be ignored can only be determined by looking at the corresponding entangled idler photons at detectors D1 to D4.

Detection of signal photons at D0 does not directly yield any which-path information. Detection of idler photons at D3 or D4, which provide which-path information, means that no interference pattern can be observed in the jointly detected subset of signal photons at D0. Likewise, detection of idler photons at D1 or D2, which do not provide which-path information, means that interference patterns can be observed in the jointly detected subset of signal photons at D0. In other words, even though an idler photon is not observed until long after its entangled signal photon arrives at D0, due to the shorter optical path for the latter, interference at D0 is determined by whether a signal photon's entangled idler photon is detected at a detector that preserves its which-path information (D3 or D4), or at a detector that erases its which-path information (D1 or D2). Some have interpreted this result to mean that the delayed choice to observe or not observe the path of the idler photon changes the outcome of an event in the past. [18] [19] Note in particular that an interference pattern may only be pulled out for observation after the idlers have been detected (i.e., at D1 or D2). The total pattern of all signal photons at D0, whose entangled idlers went to multiple different detectors, will never show interference regardless of what happens to the idler photons. [20]
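This sorting logic can be illustrated with a toy model of the four joint detection rates (a sketch, not the Kim data; the Gaussian envelope and the fringe spacing are assumed purely for illustration):

```python
import numpy as np

x = np.linspace(-3, 3, 601)          # position of detector D0 (arbitrary units)
envelope = np.exp(-x**2)             # assumed diffraction envelope, common to all subsets
phase = 2 * np.pi * x                # assumed relative phase of the two slit paths at x

R01 = 0.25 * envelope * (1 + np.cos(phase))   # idler at D1: which-path erased -> fringes
R02 = 0.25 * envelope * (1 - np.cos(phase))   # idler at D2: fringes shifted by pi
R03 = 0.25 * envelope                         # idler at D3: which-path known -> no fringes
R04 = 0.25 * envelope                         # idler at D4: likewise no fringes

total = R01 + R02 + R03 + R04
# The fringes of R01 and R02 cancel pairwise, so the un-sorted signal photons
# show only the structureless envelope, i.e. no interference at all:
assert np.allclose(total, envelope)
```

The π phase shift between R01 and R02 is what guarantees that no pattern survives in the unsorted total.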
One can get an idea of how this works by looking at the graphs of R01, R02, R03, and R04, and observing that the peaks of R01 line up with the troughs of R02 (i.e. a π phase shift exists between the two interference fringes). R03 shows a single maximum, and R04, which is experimentally identical to R03, shows equivalent results. The entangled photons, as filtered with the help of the coincidence counter, are simulated in Fig. 5 to give a visual impression of the evidence available from the experiment. In D0, the sum of all the correlated counts will not show interference. If all the photons that arrive at D0 were plotted on one graph, one would see only a bright central band.

Delayed-choice experiments raise questions about time and time sequences, and thereby bring the usual ideas of time and causal sequence into question. [note 1] If events at D1, D2, D3, D4 determine outcomes at D0, then effect seems to precede cause. If the idler light paths were greatly extended so that a year went by before a photon showed up at D1, D2, D3, or D4, then when a photon showed up in one of these detectors, it would cause a signal photon to have shown up in a certain mode a year earlier. Alternatively, knowledge of the future fate of the idler photon would determine the activity of the signal photon in its own present. Neither of these ideas conforms to the usual human expectation of causality. However, knowledge of the future, which would be a hidden variable, has been refuted in experiments. [21] Experiments that involve entanglement exhibit phenomena that may make some people doubt their ordinary ideas about causal sequence. In the delayed-choice quantum eraser, an interference pattern will form on D0 even if which-path data pertinent to the photons that form it are only erased later in time than the signal photons that hit the primary detector. That is not the only puzzling feature of the experiment: D0 can, in principle at least, be on one side of the universe, and the other four detectors can be "on the other side of the universe" to each other. [22] :197f

Consensus: no retrocausality

However, the interference pattern can only be seen retroactively once the idler photons have been detected and the experimenter has had information about them available, with the interference pattern being seen when the experimenter looks at particular subsets of signal photons that were matched with idlers that went to particular detectors. [22] :197 Moreover, the apparent retroactive action vanishes if the effects of observations on the state of the entangled signal and idler photons are considered in their historic order. Specifically, in the case when detection/deletion of which-way information happens before the detection at D0, the standard simplistic explanation says: "The detector Di, at which the idler photon is detected, determines the probability distribution at D0 for the signal photon." Similarly, in the case when D0 precedes detection of the idler photon, the following description is just as accurate: "The position at D0 of the detected signal photon determines the probabilities for the idler photon to hit either of D1, D2, D3 or D4." These are just equivalent ways of formulating the correlations of entangled photons' observables in an intuitive causal way, so one may choose either of them (in particular, the one where the cause precedes the consequence and no retrograde action appears in the explanation). The total pattern of signal photons at the primary detector never shows interference (see Fig. 5).
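Why this sorting can never be exploited to send a message can also be seen in a small density-matrix computation. The sketch below uses a standard Bell pair and textbook measurement bases (not the specific states of the Kim apparatus): however the idler is measured, the unconditioned state of the signal photon is unchanged.

```python
import numpy as np

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())                           # joint signal-idler state

def reduced_signal(rho):
    """Trace out the idler (second) qubit of a two-qubit density matrix."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Projectors for two incompatible idler measurements: the Z and the X basis
Z = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
X = [np.array([[1, 1], [1, 1]]) / 2.0, np.array([[1, -1], [-1, 1]]) / 2.0]

for basis in (Z, X):
    # Unconditional post-measurement state: sum over unrecorded outcomes
    after = sum(np.kron(np.eye(2), P) @ rho @ np.kron(np.eye(2), P) for P in basis)
    print(reduced_signal(after).real)   # always [[0.5, 0], [0, 0.5]], the maximally mixed state
```

Only when the idler outcomes are used to post-select subsets of the signal data do the fringes of Figs. 3 and 4 appear.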
It is therefore not possible to deduce what will happen to the idler photons by observing the signal photons alone. The delayed-choice quantum eraser does not communicate information in a retro-causal manner because it takes another signal, one which must arrive by a process that can go no faster than the speed of light, to sort the superimposed data in the signal photons into four streams that reflect the states of the idler photons at their four distinct detection screens. [note 2] [note 3] In fact, a theorem proved by Phillippe Eberhard shows that if the accepted equations of relativistic quantum field theory are correct, it should never be possible to experimentally violate causality using quantum effects. [23] (See reference [24] for a treatment emphasizing the role of conditional probabilities.)

In addition to challenging our common-sense ideas of temporal sequence in cause and effect relationships, this experiment is among those that strongly attack our ideas about locality, the idea that things cannot interact unless they are in contact, if not by being in direct physical contact then at least by interaction through magnetic or other such field phenomena. [22] :199

Against consensus

Despite Eberhard's proof, some physicists have speculated that these experiments might be changed in a way that would be consistent with previous experiments, yet which could allow for experimental causality violations. [25] [26] [27]

Other delayed-choice quantum-eraser experiments

Many refinements and extensions of the Kim et al. delayed-choice quantum eraser have been performed or proposed. Only a small sampling of reports and proposals is given here:

Scarcelli et al. (2007) reported on a delayed-choice quantum-eraser experiment based on a two-photon imaging scheme. After detecting a photon passed through a double slit, a random delayed choice was made to erase or not erase the which-path information by the measurement of its distant entangled twin; the particle-like and wave-like behavior of the photon were then recorded simultaneously and respectively by only one set of joint detectors. [28]

Peruzzo et al. (2012) have reported on a quantum delayed-choice experiment based on a quantum-controlled beam splitter, in which particle and wave behaviors were investigated simultaneously. The quantum nature of the photon's behavior was tested with a Bell inequality, which replaced the delayed choice of the observer. [29]

Rezai et al. (2018) have combined the Hong-Ou-Mandel interference with a delayed-choice quantum eraser. They superimpose two incompatible photons onto a beam splitter, such that no interference pattern can be observed. When the output ports are monitored in an integrated fashion (i.e. counting all the clicks), no interference occurs. Only when the outcoming photons are polarization-analysed and the right subset is selected does quantum interference in the form of a Hong-Ou-Mandel dip occur. [30]

The construction of solid-state electronic Mach-Zehnder interferometers (MZI) has led to proposals to use them in electronic versions of quantum-eraser experiments. This would be achieved by Coulomb coupling to a second electronic MZI acting as a detector. [31]

Entangled pairs of neutral kaons have also been examined and found suitable for investigations using quantum marking and quantum-erasure techniques. [32]

A quantum eraser has been proposed using a modified Stern-Gerlach setup.
In this proposal, no coincidence counting is required, and quantum erasure is accomplished by applying an additional Stern-Gerlach magnetic field. [33]

Notes

2. "... the future measurements do not in any way change the data you collected today. But the future measurements do influence the kinds of details you can invoke when you subsequently describe what happened today. Before you have the results of the idler photon measurements, you really can't say anything at all about the which-path history of any given signal photon. However, once you have the results, you conclude that signal photons whose idler partners were successfully used to ascertain which-path information can be described as having ... traveled either left or right. You also conclude that signal photons whose idler partners had their which-path information erased cannot be described as having ... definitely gone one way or the other (a conclusion you can convincingly confirm by using the newly acquired idler photon data to expose the previously hidden interference pattern among this latter class of signal photons). We thus see that the future helps shape the story you tell of the past." — Brian Greene, The Fabric of the Cosmos, pp. 198–199

3. The Kim paper says: P. 1f: The experiment is designed in such a way that L0, the optical distance between atoms A, B and detector D0, is much shorter than Li, which is the optical distance between atoms A, B and detectors D1, D2, D3, and D4, respectively. So that D0 will be triggered much earlier by photon 1. After the registration of photon 1, we look at these "delayed" detection events of D1, D2, D3, and D4 which have constant time delays, ti ≃ (Li − L0)/c, relative to the triggering time of D0. P. 2: In this experiment the optical delay (Li − L0) is chosen to be ≃ 2.5 m, where L0 is the optical distance between the output surface of BBO and detector D0, and Li is the optical distance between the output surface of the BBO and detectors D1, D2, D3, and D4, respectively. This means that any information one can learn from photon 2 must be at least 8 ns later than what one has learned from the registration of photon 1. Compared to the 1 ns response time of the detectors, a 2.5 m delay is good enough for a "delayed erasure". P. 3: The which-path or both-path information of a quantum can be erased or marked by its entangled twin even after the registration of the quantum. P. 2: After the registration of photon 1, we look at these "delayed" detection events of D1, D2, D3, and D4 which have constant time delays, ti ≃ (Li − L0)/c, relative to the triggering time of D0. It is easy to see these "joint detection" events must have resulted from the same photon pair. (Emphasis added. This is the point at which what is going on at D0 can be figured out.)
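As a quick arithmetic check of the figures quoted above (a sketch; the speed of light is the only input):

```python
# A 2.5 m optical delay corresponds to roughly 8 ns of travel time at the
# speed of light, matching the "at least 8 ns later" figure in the Kim paper.
c = 299_792_458                      # speed of light in m/s
delay_ns = 2.5 / c * 1e9
print(f"{delay_ns:.2f} ns")          # ~8.34 ns
```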
References

1. Kim, Yoon-Ho; R. Yu; S. P. Kulik; Y. H. Shih; Marlan Scully (2000). "A Delayed "Choice" Quantum Eraser". Physical Review Letters. 84 (1): 1–5. arXiv: quant-ph/9903047. Bibcode: 2000PhRvL..84....1K. doi: 10.1103/PhysRevLett.84.1. PMID 11015820. S2CID 5099293.
2. Ma, Xiao-Song; Kofler, Johannes; Qarry, Angie; Tetik, Nuray; Scheidl, Thomas; Ursin, Rupert; Ramelow, Sven; Herbst, Thomas; Ratschbacher, Lothar; Fedrizzi, Alessandro; Jennewein, Thomas; Zeilinger, Anton (2013). "Quantum erasure with causally disconnected choice". Proceedings of the National Academy of Sciences. 110 (4): 1221–1226. arXiv: 1206.6578. Bibcode: 2013PNAS..110.1221M. doi: 10.1073/pnas.1213201110. PMC 3557028. PMID 23288900. "Our results demonstrate that the viewpoint that the system photon behaves either definitely as a wave or definitely as a particle would require faster-than-light communication. Because this would be in strong tension with the special theory of relativity, we believe that such a viewpoint should be given up entirely."

3. Peruzzo, A.; Shadbolt, P.; Brunner, N.; Popescu, S.; O'Brien, J. L. (2012). "A Quantum Delayed-Choice Experiment". Science. 338 (6107): 634–637. arXiv: 1205.4926. Bibcode: 2012Sci...338..634P. doi: 10.1126/science.1226719. PMID 23118183. S2CID 3725159. "This experiment uses Bell inequalities to replace the delayed choice devices, but it achieves the same experimental purpose in an elegant and convincing way."

4. Feynman, Richard P.; Robert B. Leighton; Matthew Sands (1965). The Feynman Lectures on Physics, Vol. 3. US: Addison-Wesley. pp. 1.1–1.8. ISBN 978-0-201-02118-9.

5. Donati, O; Missiroli, G F; Pozzi, G (1973). "An Experiment on Electron Interference". American Journal of Physics. 41 (5): 639–644. Bibcode: 1973AmJPh..41..639D. doi: 10.1119/1.1987321.

6. Greene, Brian (2003). The Elegant Universe. Random House, Inc. ISBN 978-0-375-70811-4.

10. Bartell, L. (1980). "Complementarity in the double-slit experiment: On simple realizable systems for observing intermediate particle-wave behavior". Physical Review D. 21 (6): 1698–1699. Bibcode: 1980PhRvD..21.1698B. doi: 10.1103/PhysRevD.21.1698.

11. Scully, Marlan O.; Kai Drühl (1982). "Quantum eraser: A proposed photon correlation experiment concerning observation and "delayed choice" in quantum mechanics". Physical Review A. 25 (4): 2208–2213. Bibcode: 1982PhRvA..25.2208S. doi: 10.1103/PhysRevA.25.2208.

12. Zajonc, A. G.; Wang, L. J.; Zou, X. Y.; Mandel, L. (1991). "Quantum eraser". Nature. 353 (6344): 507–508. Bibcode: 1991Natur.353..507Z. doi: 10.1038/353507b0. S2CID 4265543.

13. Herzog, T. J.; Kwiat, P. G.; Weinfurter, H.; Zeilinger, A. (1995). "Complementarity and the quantum eraser" (PDF). Physical Review Letters. 75 (17): 3034–3037. Bibcode: 1995PhRvL..75.3034H. doi: 10.1103/PhysRevLett.75.3034. PMID 10059478. Archived from the original (PDF) on 24 December 2013. Retrieved 13 February 2014.

14. Walborn, S. P.; et al. (2002). "Double-Slit Quantum Eraser". Phys. Rev. A. 65 (3): 033818. arXiv: quant-ph/0106078. Bibcode: 2002PhRvA..65c3818W. doi: 10.1103/PhysRevA.65.033818. S2CID 55122015.

15. Jacques, Vincent; Wu, E; Grosshans, Frédéric; Treussart, François; Grangier, Philippe; Aspect, Alain; Roch, Jean-François (2007). "Experimental Realization of Wheeler's Delayed-Choice Gedanken Experiment". Science. 315 (5814): 966–968. arXiv: quant-ph/0610241. Bibcode: 2007Sci...315..966J. doi: 10.1126/science.1136303. PMID 17303748. S2CID 6086068.

16. Chiao, R. Y.; P. G. Kwiat; Steinberg, A. M. (1995). "Quantum non-locality in two-photon experiments at Berkeley". Quantum and Semiclassical Optics: Journal of the European Optical Society Part B. 7 (3): 259–278. arXiv: quant-ph/9501016. Bibcode: 1995QuSOp...7..259C. doi: 10.1088/1355-5111/7/3/006. S2CID 118987962.

17. Jordan, T. F. (1993).
"Disappearance and reappearance of macroscopic quantum interference". Physical Review A. 48 (3): 2449–2450. Bibcode:1993PhRvA..48.2449J. doi:10.1103/PhysRevA.48.2449. PMID   9909872. 18. Ionicioiu, R.; Terno, D. R. (2011). "Proposal for a quantum delayed-choice experiment". Phys. Rev. Lett. 107 (23): 230406. arXiv: 1103.0117 . Bibcode:2011PhRvL.107w0406I. doi:10.1103/physrevlett.107.230406. PMID   22182073. S2CID   44297197.[ better source needed ] 19. J.A. Wheeler, Quantum Theory and Measurement, Princeton University Press p.192-213 20. Greene, Brian (2004). The Fabric of the Cosmos: Space, Time, and the Texture of Reality. Alfred A. Knopf. p.  198. ISBN   978-0-375-41288-2. 21. Peruzzo, Alberto; Shadbolt, Peter J.; Brunner, Nicolas; Popescu, Sandu; O'Brien, Jeremy L. (2012). "A quantum delayed choice experiment". Science. 338 (6107): 634–637. arXiv: 1205.4926 . Bibcode:2012Sci...338..634P. doi:10.1126/science.1226719. PMID   23118183. S2CID   3725159. 22. 1 2 3 Greene, Brian (2004). The Fabric of the Cosmos . Alfred A. Knopf. ISBN   978-0-375-41288-2. 23. Eberhard, Phillippe H.; Ronald R. Ross (1989). "Quantum field theory cannot provide faster-than-light communication". Foundations of Physics Letters. 2 (2): 127–149. Bibcode:1989FoPhL...2..127E. doi:10.1007/BF00696109. S2CID   123217211. 24. Gaasbeek, Bram (2010). "Demystifying the Delayed Choice Experiments". arXiv: 1007.3977 [quant-ph]. 26. Werbos, Paul J.; Dolmatova, Ludmila (2000). "The Backwards-Time Interpretation of Quantum Mechanics - Revisited with Experiment". arXiv: quant-ph/0008036 . 27. John Cramer, "An Experimental Test of Signaling using Quantum Nonlocality" has links to several reports from the University of Washington researchers in his group. See: 28. Scarcelli, G.; Zhou, Y.; Shih, Y. (2007). "Random delayed-choice quantum eraser via two-photon imaging". The European Physical Journal D. 44 (1): 167–173. arXiv: quant-ph/0512207 . Bibcode:2007EPJD...44..167S. doi:10.1140/epjd/e2007-00164-y. S2CID   10267634. 29. Peruzzo, A.; Shadbolt, P.; Brunner, N.; Popescu, S.; O'Brien, J. L. (2012). "A quantum delayed-choice experiment". Science. 338 (6107): 634–637. arXiv: 1205.4926 . Bibcode:2012Sci...338..634P. doi:10.1126/science.1226719. PMID   23118183. S2CID   3725159. 30. Rezai, M.; Wrachtrup, J.; Gerhardt, I. (2018). "Coherence Properties of Molecular Single Photons for Quantum Networks". Physical Review X. 8 (3): 031026. Bibcode:2018PhRvX...8c1026R. doi: 10.1103/PhysRevX.8.031026 . 31. Dressel, J.; Choi, Y.; Jordan, A. N. (2012). "Measuring which-path information with coupled electronic Mach-Zehnder interferometers". Physical Review B. 85 (4): 045320. arXiv: 1105.2587 . doi:10.1103/physrevb.85.045320. S2CID   110142737. 32. Bramon, A.; Garbarino, G.; Hiesmayr, B. C. (2004). "Quantum marking and quantum erasure for neutral kaons". Physical Review Letters. 92 (2): 020405. arXiv: quant-ph/0306114 . Bibcode:2004PhRvL..92b0405B. doi:10.1103/physrevlett.92.020405. PMID   14753924. S2CID   36478919. 33. Qureshi, T.; Rahman, Z. (2012). "Quantum eraser using a modified Stern-Gerlach setup". Progress of Theoretical Physics. 127 (1): 71–78. arXiv: quant-ph/0501010 . Bibcode:2012PThPh.127...71Q. doi:10.1143/PTP.127.71. S2CID   59470770.
Theory of Electron, Phonon and Spin Transport in Nanoscale Quantum Devices

Hatef Sadeghi

At the level of fundamental science, it was recently demonstrated that molecular wires can mediate long-range phase-coherent tunnelling with remarkably low attenuation over a few nanometres, even at room temperature. Furthermore, a large mean free path has been observed in graphene and other graphene-like two-dimensional materials. These create the possibility of using quantum and phonon interference to engineer electron and phonon transport through nanoscale junctions for a wide range of applications such as molecular switches, sensors, piezoelectricity, thermoelectricity and thermal management. To understand the transport properties of such devices, it is crucial to calculate the electron transmission coefficient (Te) and the phonon transmission coefficient (Tph) through them. The aim of this tutorial article is to outline the basic theoretical concepts and review the state-of-the-art theoretical and mathematical techniques to treat electron, phonon and spin transport in nanoscale molecular junctions. This helps not only to explain new phenomena observed experimentally but also provides a vital design tool to develop novel nanoscale quantum devices.

Index Terms: Molecular electronics, Nanoelectronics, Theory and modelling, Quantum interference, Phonon interference

A molecular junction consists of a bipyridine molecule connected to gold electrodes. Bottom right: electron transmission coefficient (Te). Bottom left: phonon transmission coefficient (Tph). The electron (phonon) transmission coefficient describes the transmission probability of electrons (phonons) with energy E (ω) from the left to the right electrode through the molecule.

I. Introduction: Molecular electronics

The silicon-based semiconductor industry is facing a grave problem because of the performance limits imposed on semiconductor devices when miniaturized to the nanoscale [1]. To counter this problem, new nanoscale materials such as carbon nanotubes, graphene and transition metal dichalcogenide monolayers have been proposed [2]. Alternatively, the idea of using single molecules as building blocks to design and fabricate molecular electronic components has been around for more than 40 years [3], but only recently has it attracted huge scientific interest to explore their unique properties and opportunities. Molecular electronics, including self-assembled monolayers [4] and single-molecule junctions [5], is of interest not only for its potential to deliver logic gates [6], sensors [7], and memories [8] with ultra-low power requirements and sub-10-nm device footprints, but also for its ability to probe room-temperature quantum properties at the molecular scale such as quantum interference [9] and thermoelectricity [10, 11]. There are five main areas of research in molecular-scale electronics [5], namely: molecular mechanics, molecular optoelectronics, molecular electronics, molecular spintronics and molecular thermoelectrics. By studying electron and phonon transport across a junction consisting of two or more electrodes connected to a single molecule or a few hundred molecules, one could study the electrical and mechanical properties of nanoscale junctions such as molecular electronic building blocks, sensors, molecular spintronic, thermoelectric, piezoelectric and optoelectronic devices.
For example, when a single molecule is attached to metallic electrodes, the de Broglie waves of electrons entering the molecule from one electrode and leaving from the other form a complex interference pattern inside the molecule. These patterns can be utilized to optimize single-molecule device performance [6]. In addition, the potential of such junctions for removing heat from nanoelectronic devices (thermal management) and for thermoelectrically converting waste heat into electricity has recently been recognized [10]. Indeed, electrons passing through single molecules have been demonstrated to remain phase-coherent, even at room temperature. In this tutorial, the aim is to outline the basic theoretical concepts and to review the theoretical and mathematical techniques used to model electron, phonon and spin transport in nanoscale molecular junctions. This helps not only to understand experimental observations, but also provides a vital design tool for developing strategies for molecular electronic building blocks, thermoelectric devices and sensors.

Transport at the molecular scale: Any nanoscale device consists of two or more electrodes (leads) connected to a scattering region (figure 1). The electrodes are perfect wave-guides in which electrons and phonons can propagate without any scattering. The main scattering occurs either at the interfaces with the leads or inside the scattering region. The goal is to understand the electrical and vibrational properties of nanoscale and molecular junctions in which a nanoscale scatterer or a molecule is bonded to the electrodes with strong or weak coupling, in the absence or presence of surroundings such as an electric field (e.g. gate and bias voltages or local charges), a magnetic field, a laser beam or a molecular environment (e.g. water, gases, biological species, donors and acceptors). There are different approaches to studying the electronic and vibrational properties of molecular junctions, such as semi-classical methods [12-14], the kinetic theory of quantum transport [15], scattering theory [16] and the master equation approach [17]. In this paper, our focus is mostly on scattering theory based on the Green's function formalism and on the master equation approach.

Fig. 1. A scattering region is connected to reservoirs through ballistic leads. The reservoirs have slightly different electrochemical potentials to drive electrons from the left to the right lead. All inelastic relaxation processes take place in the reservoirs, and transport in the leads is ballistic.

Here, we begin with the Schrödinger equation and relate it to the physical description of matter at the nano and molecular scale. We then discuss the definition of the current using the time-dependent Schrödinger equation, and introduce density functional theory (DFT) and a tight-binding description of quantum systems. Scattering theory and the non-equilibrium Green's function method are discussed, and different transport regimes (on and off resonance) are considered. A one-dimensional system and then a more general multi-channel method are derived to calculate the transmission coefficient T(E) for electrons (phonons) with energy E (ω) traversing from one electrode to the other through a scattering region. We then briefly discuss the master equation method used to model transport in the Coulomb and Franck-Condon blockade regimes.
We follow with a discussion of the physical interpretation of quantum systems, including charge, spin and thermal currents, phonon thermal conductance, electron-phonon interactions, piezoelectric response, the inclusion of a gauge field, and superconducting systems. Furthermore, environmental effects and the different techniques used to model experiments are discussed.

II. Schrödinger equation

The most general Schrödinger equation [18] describes the evolution of the physical properties of a system in time. It was proposed by the Austrian physicist Erwin Schrödinger in 1926 as

i\hbar \frac{\partial}{\partial t}\Psi(\mathbf{r},t) = \hat{H}\Psi(\mathbf{r},t) \qquad (1)

where i = \sqrt{-1}, \hbar is the reduced Planck constant (h/2\pi), \Psi is the wave function of the quantum system, \mathbf{r} and t are the position vector and time, respectively, and \hat{H} is the Hamiltonian operator, which characterizes the total energy of any given wave function. For a single particle moving in an electric field, the non-relativistic Schrödinger equation reads

i\hbar \frac{\partial}{\partial t}\Psi(\mathbf{r},t) = \Big[\frac{-\hbar^2}{2m}\nabla^2 + V(\mathbf{r},t)\Big]\Psi(\mathbf{r},t)

where m is the particle's reduced mass, V is its potential energy and \nabla^2 is the Laplacian. If we assume that the Hamiltonian is time-independent and write the wavefunction as a product of spatial and temporal terms, \Psi(\mathbf{r},t) = \psi(\mathbf{r})\theta(t), the Schrödinger equation becomes two ordinary differential equations:

\frac{1}{\theta(t)}\frac{d\theta(t)}{dt} = -\frac{iE}{\hbar} \qquad (3)

\hat{H}\psi(\mathbf{r}) = E\psi(\mathbf{r}) \qquad (4)

where \hat{H} = \frac{-\hbar^2}{2m}\nabla^2 + V(\mathbf{r}). Note that this is not a solution if the Hamiltonian is time-dependent, e.g. when a laser illuminates the system or a high-frequency AC voltage is applied so that V(\mathbf{r}) varies with time; in that case, time-dependent DFT should be considered [19]. The solution of equation 3 is

\theta(t) = e^{-iEt/\hbar} \qquad (5)

The amplitude of \theta(t) does not change with time, so the solutions \theta(t) are purely oscillatory. The total wave function \Psi(\mathbf{r},t) = \psi(\mathbf{r})e^{-iEt/\hbar} differs from \psi(\mathbf{r}) only by a phase factor of constant magnitude, and the probability density |\Psi(\mathbf{r},t)|^2 is time-independent. Of course, equation 5 is a particular solution of the time-dependent Schrödinger equation; the most general solution is a linear combination of these particular solutions:

\Psi(\mathbf{r},t) = \sum_i \phi_i\, e^{-iE_it/\hbar}\,\psi_i(\mathbf{r}) \qquad (6)

In time-independent problems, only the spatial part needs to be solved, since the time-dependent phase factor in equation 5 is always the same. Equation 4 is called the time-independent Schrödinger equation; it is an eigenvalue problem where the E's are eigenvalues of the Hamiltonian \hat{H}. Since the Hamiltonian is a Hermitian operator, the eigenvalues E are real. \psi(\mathbf{r}) describes the standing-wave solutions of the time-dependent equation, which are states with definite energies, called "stationary states" or "energy eigenstates" in physics and "atomic orbitals" or "molecular orbitals" in chemistry. The Schrödinger equation must be solved subject to appropriate boundary conditions. Since electrons are fermions, the solution must satisfy the Pauli exclusion principle, and the wavefunctions \psi must be well behaved everywhere. The Schrödinger equation can be solved analytically for a few small systems, e.g. the hydrogen atom. However, in most cases it is too complex to be solved even with the best supercomputers available today, so some approximations are needed [20], such as the Born-Oppenheimer approximation to decouple the movement of electrons and nuclei, density functional theory (DFT) to describe electron-electron interactions, and pseudopotentials to treat nuclei and core electrons except those in the valence band.
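Before turning to those approximations, the eigenvalue problem of equation 4 can be made concrete with a few lines of code. The following Python sketch (not part of the original tutorial) discretises a one-dimensional Hamiltonian on a real-space grid with a finite-difference Laplacian and diagonalises it; the harmonic potential and all numerical parameters are illustrative assumptions, and units are chosen so that ℏ = m = 1.

    import numpy as np

    # Minimal sketch: solve the 1D time-independent Schrodinger equation
    # H psi = E psi on a grid by finite differences (hbar = m = 1).
    N, L = 500, 10.0
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]
    V = 0.5 * x**2                        # illustrative harmonic potential
    # Kinetic term -(1/2) d^2/dx^2 discretised as a tridiagonal matrix
    H = (np.diag(1.0 / dx**2 + V)
         + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
         + np.diag(np.full(N - 1, -0.5 / dx**2), -1))
    E, psi = np.linalg.eigh(H)            # eigenvalues and eigenstates
    print(E[:4])                          # ~[0.5, 1.5, 2.5, 3.5] for this potential

The printed energies approach the analytic oscillator spectrum (n + 1/2) as the grid is refined, illustrating that equation 4 reduces to an ordinary matrix eigenvalue problem once a basis (here, grid points) is chosen.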
We will briefly discuss these approximations in the next section; a more detailed discussion can be found elsewhere [20]. To describe transport through molecules or nanoscale structures, one either builds a simple tight-binding Hamiltonian using Hückel parameters or uses DFT to construct a material-specific mean-field Hamiltonian. To reduce the size of the Hamiltonian, it is appropriate to define basis functions such that

\Psi(\mathbf{r}) = \sum_i \phi_i\,\psi_i(\mathbf{r})

The wavefunction can then be represented by a column vector |\phi\rangle consisting of the expansion coefficients \phi_i, and the time-independent Schrödinger equation can be written as a matrix equation:

[H]\,|\phi\rangle = E\,[S]\,|\phi\rangle \qquad (8)

where

S_{ij} = \langle i|j\rangle = \int d\mathbf{r}\; \psi_i^*(\mathbf{r})\,\psi_j(\mathbf{r})

H_{ij} = \langle i|H|j\rangle = \int d\mathbf{r}\; \psi_i^*(\mathbf{r})\,H\,\psi_j(\mathbf{r})

The evaluation of these integrals is the most time-consuming step, but once [H] and [S] are obtained, the eigenvalues E_n and eigenvectors \phi_n are easily calculated. If \langle i| and |j\rangle are orthonormal, then S_{ij} = \delta_{ij}, where \delta_{ij} is the Kronecker delta (\delta_{ij} = 1 if i = j and \delta_{ij} = 0 if i \neq j). Note that a system with Hamiltonian H and overlap matrix S obtained in a non-orthogonal basis can be transformed to a new Hamiltonian H' = S^{-1/2} H S^{-1/2} in an orthogonal basis (S' = I, where I is the identity matrix).

A. Density functional theory (DFT)

In order to understand the behavior of molecular electronic devices, it is necessary to possess a reliable source of structural and electronic information. A solution to the many-body problem has been sought by many generations of physicists. The task is to find the eigenvalues and eigenstates of the full Hamiltonian operator of a system consisting of nuclei and electrons, as shown in figure 2. Since this is not practically possible for systems bigger than a few particles, some approximations are needed. The atomic masses are roughly three orders of magnitude bigger than the electron mass, hence the Born-Oppenheimer approximation [20] can be employed to decouple the electronic wave function from the motion of the nuclei. In other words, we solve the Schrödinger equation for the electronic degrees of freedom only. Once we know the electronic structure of a system, we can calculate the classical forces on the nuclei and minimize these forces to find the ground-state geometry (figure 2a). Once the Schrödinger equation is solved, the wavefunction is known and all physical quantities of interest can be calculated. Although the Born-Oppenheimer approximation decouples the electronic wave function from the motion of the nuclei, the electronic part remains a problem of many interacting particles, whose diagonalization is practically impossible even for modest system sizes, i.e. a couple of atoms, even on a modern supercomputer. The virtue of density functional theory (DFT) [20, 21] is that it expresses the physical quantities in terms of the ground-state density; by obtaining the ground-state density, one can in principle calculate the ground-state energy. However, the exact form of the functional is not known: the kinetic term and internal energies of the interacting particles cannot generally be expressed as functionals of the density. A solution was introduced by Kohn and Sham in 1965: the original Hamiltonian of the many-body interacting system can be replaced by an effective Hamiltonian of non-interacting particles in an effective external potential, which has the same ground-state density as the original system, as illustrated in figure 2a.
The difference between the energies of the non-interacting and interacting systems is accounted for by the exchange-correlation functional (figure 2a).

Fig. 2. From the many-body problem to density functional theory (DFT). (a) Born-Oppenheimer approximation, Hohenberg-Kohn theorem and Kohn-Sham ansatz; (b) schematic of the DFT self-consistency process.

Exchange and correlation energy: There are numerous proposed forms for the exchange and correlation energy V_{xc} in the literature [20, 21]. The first successful, and yet simple, form was the Local Density Approximation (LDA) [21], which depends only on the density and is therefore a local functional. The next step was the Generalized Gradient Approximation (GGA) [21], which includes the derivative of the density; it also contains information about the neighborhood and is therefore semi-local. LDA and GGA are the two most commonly used approximations to the exchange and correlation energies in density functional theory. There are also several other functionals which go beyond LDA and GGA. Some of these are tailored to fit the specific needs of the basis sets used in solving the Kohn-Sham equations, and a large category are the so-called hybrid functionals (e.g. B3LYP [22], HSE [23] and meta-hybrid GGA [22]), which include exact exchange terms from Hartree-Fock. One of the latest and most universal functionals, the Van der Waals density functional (vdW-DF) [24], contains non-local terms and has proven to be very accurate in systems where dispersion forces are important.

Pseudopotentials: Despite all the simplifications shown in figure 2, in typical systems of molecules containing many atoms the calculation is still very large and potentially computationally expensive. In order to reduce the number of electrons, one can introduce pseudopotentials, which effectively remove the core electrons from an atom. The electrons in an atom can be split into two types: core and valence, where core electrons lie within filled atomic shells and valence electrons lie in partially filled shells. Because core electrons are spatially localized about the nucleus, only valence electron states overlap when atoms are brought together, so that in most systems only valence electrons contribute to the formation of molecular orbitals. This allows the core electrons to be removed and replaced by a pseudopotential, such that the valence electrons still feel the same screened nuclear charge as if the core electrons were still present. This reduces the number of electrons in a system dramatically and in turn reduces the time and memory required to calculate the properties of molecules containing a large number of electrons. Another benefit of pseudopotentials is that they are smooth, leading to greater numerical stability.

Basis sets: In order to turn partial differential equations (e.g. the Schrödinger equation 1) into algebraic equations suitable for efficient implementation on a computer, a set of functions (called basis functions) is used to represent the electronic wave function. For a periodic system, the plane-wave basis set is natural, since it is, by itself, periodic. However, to construct a tight-binding Hamiltonian, we need to use the localised basis sets discussed in the next section, which are not implicitly periodic. An example is a linear combination of atomic orbitals (LCAO) basis set, whose functions are constrained to be zero beyond some defined cut-off radius and are constructed from the orbitals of the atoms.
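As a concrete illustration of the matrix Schrödinger equation 8 in a non-orthogonal basis, the Python sketch below (an illustrative toy, with made-up matrix elements) solves the generalized eigenproblem directly and checks it against the orthogonalisation H' = S^{-1/2} H S^{-1/2} described above.

    import numpy as np
    from scipy.linalg import eigh, fractional_matrix_power

    # Minimal sketch: solve H|phi> = E S|phi> for a toy 2-orbital system.
    H = np.array([[-1.0, -0.5],
                  [-0.5, -1.0]])
    S = np.array([[ 1.0,  0.2],
                  [ 0.2,  1.0]])          # non-orthogonal basis: S != I
    E, C = eigh(H, S)                     # generalized eigenvalue problem
    # Equivalent route: Lowdin orthogonalisation H' = S^(-1/2) H S^(-1/2)
    S_mhalf = fractional_matrix_power(S, -0.5)
    E2 = np.linalg.eigvalsh(S_mhalf @ H @ S_mhalf)
    print(np.allclose(E, E2))             # True: identical spectra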
Mean-field Hamiltonian from DFT: To obtain the ground-state mean-field Hamiltonian of a system from DFT, the calculation starts by constructing the initial atomic configuration of the system. Depending on the DFT implementation, appropriate pseudopotentials for each element, which can be different for every exchange-correlation functional, might be needed. Furthermore, a suitable choice of basis set has to be made for each element present in the calculation. The larger the basis set, the more accurate the calculation, and, of course, the longer it takes; with a couple of test calculations, the accuracy and computational cost can be optimized. Other input parameters that set the accuracy of the calculation are also needed, such as the fineness and density of the k-grid points used to evaluate integrals [21, 25]. Then an initial charge density is calculated assuming no interaction between atoms; since the pseudopotentials are known, this step is simple and the total charge density is the sum of the atomic densities. The self-consistent calculation [21] (figure 2b) starts by calculating the Hartree and exchange-correlation potentials. Since the density is represented in real space, the Hartree potential is obtained by solving the Poisson equation with a multi-grid or fast-Fourier-transform method. Then the Kohn-Sham equations are solved and a new density is obtained. These self-consistent iterations end when the necessary convergence criteria, such as a density-matrix tolerance, are reached. Once the initial electronic structure of the system is obtained, the forces on the nuclei are calculated and a new atomic configuration is generated to minimize these forces; this new configuration is the initial geometry for the next self-consistent calculation. The structural optimization is controlled by the conjugate gradient method for finding the minimal ground-state energy and the corresponding atomic configuration [21]. From the resulting ground-state geometry, the ground-state electronic properties of the system, such as the total energy, the binding energies between different parts of the system, the density of states, the local density of states and the forces, can be calculated. It is apparent that DFT can potentially provide an accurate description of ground-state properties such as the total energy, binding energies and geometrical structures. However, all electronic properties related to excited states are less accurate within DFT.
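The self-consistency cycle of figure 2b can be caricatured in a few lines. The sketch below is a schematic toy, not a real DFT code: build_h stands in for the Kohn-Sham Hamiltonian rebuilt from the density at every iteration (here a hypothetical four-site chain with a density-dependent on-site shift), and the mixing and convergence parameters are illustrative assumptions.

    import numpy as np

    def build_h(rho):
        # Hypothetical density-dependent mean-field Hamiltonian (toy model)
        H = -np.eye(4, k=1) - np.eye(4, k=-1)     # nearest-neighbour hopping
        return H + np.diag(0.5 * rho)             # Hartree-like on-site shift

    rho = np.full(4, 0.5)                          # initial guess: uniform density
    for it in range(100):
        E, C = np.linalg.eigh(build_h(rho))
        rho_new = 2 * (C[:, :1]**2).sum(axis=1)    # 2 electrons in lowest orbital
        if np.max(abs(rho_new - rho)) < 1e-10:     # convergence criterion
            break
        rho = 0.5 * rho + 0.5 * rho_new            # simple linear mixing
    print(it, rho)                                 # converged self-consistent density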
B. Tight-Binding Model

By expanding the wave function over a finite set of atomic orbitals, the Hamiltonian of a system can be written in tight-binding form. The main idea is to represent the wave function of a particle as a linear combination of known localized states; a typical choice is a linear combination of atomic orbitals (LCAO). If an LCAO basis is used within DFT, the Hamiltonian H and overlap S matrices used in the scattering calculation (section III) can be extracted directly. However, if a plane-wave DFT code is used, an LCAO-like Hamiltonian can be constructed using Wannier functions. For a periodic system, where the wave function is described by a Bloch function, equation 8 can be written as

\sum_{\beta,c'} H_{\alpha,c;\beta,c'}\,\phi_{\beta,c'} = E \sum_{\beta,c'} S_{\alpha,c;\beta,c'}\,\phi_{\beta,c'} \qquad (11)

where c and c' label neighboring identical cells containing the states \alpha and \beta, and

H_{\alpha,c;\beta,c'} = H_{\alpha\beta}(R_c - R_{c'}), \qquad \phi_{\beta,c} = \phi_\beta\, e^{ik\cdot R_c}

Equation 11 can then be written as

\sum_\beta H_{\alpha\beta}(k)\,\phi_\beta = E \sum_\beta S_{\alpha\beta}(k)\,\phi_\beta

where

H_{\alpha\beta}(k) = \sum_{c'} H_{\alpha\beta}(R_c - R_{c'})\, e^{ik(R_c - R_{c'})}

S_{\alpha\beta}(k) = \sum_{c'} S_{\alpha\beta}(R_c - R_{c'})\, e^{ik(R_c - R_{c'})}

More generally, the single-particle tight-binding Hamiltonian in the Hilbert space formed by |R\alpha\rangle can be written as

H = \sum_\alpha (\varepsilon_\alpha + eV_\alpha)\,|\alpha\rangle\langle\alpha| + \sum_{\alpha\beta}\gamma_{\alpha\beta}\,|\alpha\rangle\langle\beta| \qquad (17)

where \varepsilon_\alpha is the on-site energy of the state |\alpha\rangle, V_\alpha is the electrical potential and \gamma_{\alpha\beta} are the hopping matrix elements between states |\alpha\rangle and |\beta\rangle.

Simple TB Hamiltonian: For conjugated hydrocarbons, the energies of the molecular orbitals associated with the \pi electrons can be determined by a very simple LCAO molecular orbital method called the Hückel molecular orbital (HMO) method. A simple TB description of the system is constructed by assigning a Hückel parameter to the on-site energy \varepsilon_\alpha of each atom in the molecule, connected to its nearest neighbours by a single Hückel parameter (hopping matrix element) \gamma_{\alpha\beta}. Obviously, more complex TB models can be built within HMO by taking the second, third, fourth or further nearest-neighbour hopping matrix elements into account. It is worth mentioning that once a material-specific LCAO mean-field DFT Hamiltonian or a simple HMO Hamiltonian (described in this section) has been obtained, the electron and spin transport properties of a junction can be calculated.

1) Two-level system

As the simplest example, consider a closed system of two single-orbital sites with on-site energies \varepsilon and -\varepsilon, coupled to each other by the hopping integral \gamma. The Hamiltonian of such a system is

H = \begin{pmatrix} \varepsilon & \gamma \\ \gamma^* & -\varepsilon \end{pmatrix}

so the Schrödinger equation reads

\begin{pmatrix} \varepsilon & \gamma \\ \gamma^* & -\varepsilon \end{pmatrix}\begin{pmatrix} \psi \\ \phi \end{pmatrix} = E\begin{pmatrix} \psi \\ \phi \end{pmatrix} \qquad (18)

The eigenvalues E are calculated by solving \det(H - EI) = 0, where I is the 2 \times 2 identity matrix:

E_\pm = \pm\sqrt{\varepsilon^2 + |\gamma|^2} \qquad (19)

E_- and E_+ are called the bonding and anti-bonding states. There must be two orthogonal eigenvectors (\psi_+, \phi_+)^T and (\psi_-, \phi_-)^T corresponding to the two eigenvalues. By substituting equation 19 into equation 18,

\frac{\psi_\pm}{\phi_\pm} = \frac{\gamma}{E_\pm - \varepsilon} = \frac{E_\pm + \varepsilon}{\gamma^*}

If \varepsilon = 0 and E = \pm\gamma, the simplest normalised eigenstates are

\begin{pmatrix} \psi_+ \\ \phi_+ \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} \psi_- \\ \phi_- \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix}

If \gamma = 0 and E = \pm\varepsilon, the wave functions are fully localised on the individual sites:

\begin{pmatrix} \psi_+ \\ \phi_+ \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \begin{pmatrix} \psi_- \\ \phi_- \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}

Effective Hamiltonian: The solution of the two-level system can be used to obtain solutions of a larger system. For a given N \times N Hamiltonian H, a new effective, energy-dependent (N-1) \times (N-1) Hamiltonian \hat{H}(E) can be obtained by decimating a given site p:

\hat{H}_{ij}(E) = H_{ij} + \frac{H_{ip}H_{pj}}{E - H_{pp}} \qquad (23)

Therefore, any N \times N Hamiltonian H of an arbitrary system with N sites can be reduced to a 2 \times 2 energy-dependent effective Hamiltonian \hat{H}(E) by decimating all sites except the two sites of interest, using equation 23.

2) One-dimensional (1D) infinite chain

Consider the infinite linear chain of hydrogen atoms shown in figure 3a. A single-orbital, orthogonal, nearest-neighbour tight-binding Hamiltonian of such a system, with on-site energies \langle j|H|j\rangle = \varepsilon_0 and hopping matrix elements \langle j|H|j\pm 1\rangle = \langle j\pm 1|H|j\rangle = -\gamma, can be written as

H = \sum_j \varepsilon_0\,|j\rangle\langle j| - \sum_j \gamma\,|j\rangle\langle j+1| - \sum_j \gamma\,|j-1\rangle\langle j| \qquad (24)

Therefore the Schrödinger equation reads

\varepsilon_0\phi_j - \gamma\phi_{j-1} - \gamma\phi_{j+1} = E\phi_j \qquad (25)

where -\infty < j < +\infty. The solution of this equation is obtained using the Bloch function

|\phi_k\rangle = \frac{1}{\sqrt{N}}\sum_j e^{i\bar{k}ja_0}\,|j\rangle

E(k) = \varepsilon_0 - 2\gamma\cos(k) \qquad (27)

where k = \bar{k}a_0 is the dimensionless wave vector and -\pi < k < \pi in the first Brillouin zone. Equation 27 is called a dispersion relation (E-k), or the electronic band structure of a 1D chain.
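A quick numerical check of the dispersion relation (equation 27), as a minimal sketch with illustrative parameters: diagonalising an N-site ring (periodic boundary conditions) and comparing its spectrum with ε0 − 2γ cos k at the allowed Bloch wave vectors.

    import numpy as np

    # Minimal sketch: verify E(k) = eps0 - 2*gamma*cos(k) for a 1D chain.
    N, eps0, gamma = 100, 0.0, 1.0
    H = np.zeros((N, N))
    for j in range(N):
        H[j, j] = eps0
        H[j, (j + 1) % N] = -gamma       # hopping, periodic boundary
        H[(j + 1) % N, j] = -gamma
    E_num = np.sort(np.linalg.eigvalsh(H))
    k = 2 * np.pi * np.arange(N) / N     # allowed Bloch wave vectors
    E_ana = np.sort(eps0 - 2 * gamma * np.cos(k))
    print(np.allclose(E_num, E_ana))     # True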
Since -1 \le \cos(k) \le 1, it follows that \varepsilon_0 - 2\gamma \le E \le \varepsilon_0 + 2\gamma; the bandwidth is therefore 4\gamma. At the bottom of the band, around E_{min} (figure 3c), a Taylor expansion of equation 27 gives a parabolic band structure, E(k) \approx \varepsilon_0 - 2\gamma + \gamma k^2. Comparing this with the free-electron parabolic band structure E(k) = \varepsilon_0 - 2\gamma + \frac{\hbar^2 k^2}{2m^*}, one finds \gamma = \frac{\hbar^2}{2m^*}, i.e. \gamma is inversely proportional to the effective mass m^*. This implies that electrons in a system with a smaller (larger) bandwidth are heavier (lighter).

Fig. 3. One-dimensional (1D) infinite chain. (a) Hydrogen atoms in an infinite chain with one orbital per atom; (b) 1D balls and springs; (c,d) electronic and phononic band structures; (e,f) density of states (DOS) of a and b.

The band structure of a perfect 1D chain (equation 27) is a continuous function and has no energy band gap; electrons injected into a 1D chain therefore transmit perfectly within the bandwidth (metallic behaviour). However, a 1D metallic chain does not exist in reality. The reason is that if a band gap opens (e.g. by a mechanical distortion), all energy levels below the valence band edge move down and the total energy of the system decreases, leading to a more stable structure (the Peierls distortion). This continues until the electronic energy gained no longer exceeds the mechanical energy needed to distort the system. Consequently, the atoms in a 1D crystal displace from their ideal positions, so that the perfect order is broken.

Density of states: From a dispersion relation (e.g. equation 27) one can obtain the density of states (DOS) using

D(E) = \sum_{\bar{k}} \delta(E - \lambda_{\bar{k}}) \qquad (28)

where \lambda_{\bar{k}} are the eigenvalues of the system, \delta is the Dirac delta function and \bar{k} \equiv (k_x, k_y, k_z), with \sum_{k_p} \to \int_{-\infty}^{+\infty} dk_p/2\pi. From the dispersion relation (equation 27), the DOS of a 1D chain is obtained as D(E) = \frac{1}{\pi}\frac{dk}{dE} = \frac{1}{2\pi\gamma\sin(k)}. At the band edges E = \varepsilon_0 \mp 2\gamma (k = 0 and k = \pm\pi), D \to \infty; this singularity in the DOS is called a Van Hove singularity. Figure 3 shows the band structure and the DOS of an infinite 1D chain.

1D balls and springs: So far we have discussed the electronic properties of a quantum system, e.g. the 1D chain. Now consider a chain of atoms with mass m, connected to their nearest neighbours by springs with spring constant K = -\gamma, as shown in figure 3. On the one hand, the derivative of the energy U with respect to the position x gives the force on a particular atom (F = -\partial U/\partial x). Provided a local minimum has been reached, one can expand the potential energy about this minimum in powers of the atomic displacements. Since all forces on all atoms are zero at the minimum, the Taylor expansion has no linear terms; neglecting higher orders (the harmonic approximation), F = -(\partial^2 U/\partial x^2)\,x = -Kx. Note that at a local minimum, the matrix of second derivatives must be positive definite, and thus has only positive eigenvalues. On the other hand, from Newton's second law, F = m\,d^2x/dt^2. Therefore, a Schrödinger-like equation can be written:

m\,\frac{d^2x_n}{dt^2} = -K\,[2x_n - x_{n-1} - x_{n+1}] \qquad (29)

Similar to the discussion above, using x_n(t) = Ae^{i(kn - \omega t)}, equation 29 reads -m\omega^2 = -K[2 - e^{-ik} - e^{ik}], and therefore the phononic dispersion relation of a 1D chain of balls and springs (figure 3b) is obtained as

\omega(k) = \sqrt{\frac{2\gamma - 2\gamma\cos k}{m}} \qquad (30)

Comparing equations 27 and 30, it is apparent that equation 30 is obtained from equation 27 by the substitutions E \to \omega^2 and \varepsilon_0 \to 2\gamma/m. \varepsilon_0 = 2\gamma/m is the negative of the sum of all off-diagonal terms of the 1D-chain TB Hamiltonian, as required to satisfy translational invariance.
Therefore, a Schrödinger-equation-like relation for phonons can be written as

\omega^2\psi = D\psi \qquad (31)

where D = -K/M is the dynamical matrix, M is the mass matrix and K is the Hessian calculated from the force matrix.

3) One-dimensional (1D) finite chain and ring

To analyse the effect of boundary conditions on the solutions of the Schrödinger equation, consider the three examples shown in figure 4. First, consider a 1D finite chain of N atoms. As a consequence of introducing boundary conditions at the two ends of the chain, the energy levels and states are no longer continuous in the range \varepsilon_0 - 2\gamma < E < \varepsilon_0 + 2\gamma; instead, there are discrete energy levels with corresponding states in this range. These are obtained by writing the Schrödinger equation for 1 < j < N (equation 25) and at the boundaries j = 1 and j = N. At j = 1, the Schrödinger equation reads

\varepsilon_0\phi_1 - \gamma\phi_2 = E\phi_1

and at j = N,

\varepsilon_0\phi_N - \gamma\phi_{N-1} = E\phi_N

Using the Bloch function \phi_j = e^{ikj} + ce^{-ikj}, the solution of the 1D finite-chain problem is obtained as

\phi_j = \sqrt{\frac{2}{N+1}}\,\sin\!\Big(\frac{n\pi}{N+1}\,j\Big)

where n \in [1, \ldots, N]. Similarly, the solutions for a 1D finite ring of N atoms are obtained (figure 4) as

\phi_j = \frac{1}{\sqrt{N}}\,e^{i\frac{2n\pi}{N}j} \qquad (35)

where n \in [0, \ldots, N-1]. Clearly, the allowed energy levels of a 1D finite chain differ from those of a 1D ring. This demonstrates that a small change in a molecular system may significantly affect the energy levels and the corresponding orbitals. This becomes more important when only a few atoms are involved, as in molecules, so two very similar molecules may show different electronic properties. Figure 4 also shows the solution of a phononic toy model consisting of N balls connected to each other by springs with spring constant -\gamma:

\phi_j = A\cos\!\Big(\frac{n\pi j}{N} - \frac{n\pi}{2N}\Big)

where n \in [0, \ldots, N-1], A = 1/\sqrt{N} for n = 0 and A = \sqrt{2/N} otherwise. Note that, to satisfy the translational invariance condition, the diagonal terms of the dynamical matrix are +2\gamma except at the boundaries (j = 1 and j = N), where they are +\gamma.

Fig. 4. 1D finite chain and ring. The energy levels and corresponding wave functions (orbitals) for a 1D finite chain and ring, and the phononic modes for a finite chain of balls and springs with mass m.

4) Two-dimensional (2D) square and hexagonal lattices

In section II-B.2, the band structure and density of states of a 1D chain were calculated. Now let us consider the two most used 2D lattices: a square lattice, where the unit cell consists of one atom connected to its first nearest neighbours in two dimensions (figure 5a), and a hexagonal lattice, where the unit cell consists of two atoms and the first (second) atom in a cell is connected only to the second (first) atom in any first-nearest-neighbour cell (figure 5e). The TB Hamiltonian and the corresponding band structure can be calculated [12-14] using equation 17 and a Bloch wave function of the form Ae^{ik_xj + ik_yl} (figure 5). The Schrödinger equation for the two-dimensional square lattice (figure 5a), with on-site energies \varepsilon_0 and hopping integrals \gamma, reads

\varepsilon_0\phi_{j,l} - \gamma\phi_{j,l-1} - \gamma\phi_{j,l+1} - \gamma\phi_{j-1,l} - \gamma\phi_{j+1,l} = E\phi_{j,l}

Using the Bloch function \phi_{j,l} = Ae^{ik_xj + ik_yl}, the band structure of the 2D square lattice is obtained:

E = \varepsilon_0 - 2\gamma(\cos k_x + \cos k_y) \qquad (38)

Using a similar approach, the band structure of a hexagonal lattice (e.g. graphene) is obtained as

E = \varepsilon_0 \pm \gamma\sqrt{1 + 4\cos k_x\cos k_y + 4\cos^2 k_y} \qquad (39)

Figures 5b,c,f,g show the band structures of the square and hexagonal lattices.
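The two band structures just derived (equations 38 and 39) are easy to evaluate numerically; the short sketch below (illustrative parameter values) tabulates both on a grid of wave vectors and confirms the square-lattice bandwidth of 8γ and the band touching of the hexagonal lattice where the square root vanishes (the Dirac points).

    import numpy as np

    eps0, gamma = 0.0, 1.0
    kx = np.linspace(-np.pi, np.pi, 201)
    ky = np.linspace(-np.pi, np.pi, 201)
    KX, KY = np.meshgrid(kx, ky)
    # Square lattice, equation 38
    E_square = eps0 - 2 * gamma * (np.cos(KX) + np.cos(KY))
    # Hexagonal lattice, equation 39 (two bands)
    root = np.sqrt(1 + 4 * np.cos(KX) * np.cos(KY) + 4 * np.cos(KY)**2)
    E_plus, E_minus = eps0 + gamma * root, eps0 - gamma * root
    print(E_square.min(), E_square.max())   # -4*gamma and +4*gamma
    print(root.min())                       # ~0 at the Dirac points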
The number of conduction channels (figures 5d,h) can be calculated from the band structure of a crystalline system, as we will discuss later in section III. Graphene is a semimetal and can be used as an electrode to probe molecules [7, 9, 26, 27].

Fig. 5. Two-dimensional square and hexagonal lattices. Lattice geometry of (a) square and (e) hexagonal lattices; the band structure of (b,c) square and (f,g) hexagonal lattices; and the number of conduction channels in (d) square and (h) hexagonal lattices.

C. Current carried by a Bloch function

The time evolution of the density matrix \rho_t = |\psi_t\rangle\langle\psi_t| allows us to obtain the current associated with a particular quantum state |\psi_t\rangle. Using the time-dependent Schrödinger equation 1,

\frac{d\rho_t}{dt} = \frac{d|\psi_t\rangle\langle\psi_t|}{dt} = \frac{1}{i\hbar}\big[H|\psi_t\rangle\langle\psi_t| - |\psi_t\rangle\langle\psi_t|H\big] \qquad (40)

By expanding |\psi_t\rangle over an orthogonal basis |j\rangle, equation 40 can be written as

\frac{d\rho_t}{dt} = \frac{1}{i\hbar}\Big[\sum_{jj'} H|j\rangle\langle j'|\,\psi_j\psi_{j'}^* - \sum_{jj'} |j\rangle\langle j'|H\,\psi_j\psi_{j'}^*\Big] \qquad (41)

Current carried by a Bloch function in a 1D chain: For a 1D infinite chain with a Hamiltonian of the form of equation 24, the rate of change of the charge I_l = d\rho_t^l/dt at site l is obtained by taking the expectation value of both sides of equation 41 in the state |l\rangle:

\frac{d\rho_t^l}{dt} = \frac{1}{i\hbar}\Big[\sum_{jj'} \langle l|H|j\rangle\langle j'|l\rangle\,\psi_j\psi_{j'}^* - \sum_{jj'} \langle l|j\rangle\langle j'|H|l\rangle\,\psi_j\psi_{j'}^*\Big]

which simplifies to

\frac{d\rho_t^l}{dt} = I_{l+1\to l} + I_{l-1\to l}

I_{l+1\to l} = \frac{-1}{i\hbar}\big[\langle l|H|l+1\rangle\,\psi_{l+1}\psi_l^* - \langle l+1|H|l\rangle\,\psi_l\psi_{l+1}^*\big] \qquad (44)

I_{l-1\to l} = \frac{-1}{i\hbar}\big[\langle l|H|l-1\rangle\,\psi_{l-1}\psi_l^* - \langle l-1|H|l\rangle\,\psi_l\psi_{l-1}^*\big] \qquad (45)

These equations can be rewritten as

I_{l+1\to l} = -\frac{2\gamma}{\hbar}\,\mathrm{Im}(\psi_{l+1}^*\psi_l) \qquad (46)

I_{l-1\to l} = -\frac{2\gamma}{\hbar}\,\mathrm{Im}(\psi_{l-1}^*\psi_l) \qquad (47)

The charge density at site l thus changes as a result of two currents: right-moving electrons I_{l+1\to l} and left-moving electrons I_{l-1\to l}. The currents corresponding to a Bloch state \psi_j(t) = e^{ikj - iE(k)t/\hbar} are

I_{l+1\to l} = -v_k \qquad (48)

I_{l-1\to l} = +v_k \qquad (49)

where v_k = \frac{1}{\hbar}\frac{\partial E(k)}{\partial k} = \frac{2\gamma\sin(k)}{\hbar} is the group velocity. Note that this is defined per hop from one site to the next (e.g. l+1 \to l), so the actual group velocity is v_k \times a, where a is the spacing between the sites. Although the individual currents are non-zero and proportional to the group velocity, the total current I = I_{l+1\to l} + I_{l-1\to l} for a pure Bloch state is zero, due to an exact balance between the left- and right-going currents. It is worth mentioning that, to simplify notation, a Bloch state e^{ikj} is often normalized by its current flux, 1/\sqrt{v_k}, calculated from equations 48 and 49, so that it carries unit current; hence we will mostly use the normalized Bloch state e^{ikj}/\sqrt{v_k} in later derivations. Furthermore, an important consequence of equations 46 and 47 is that if \psi_j = Ae^{ikj} + Be^{-ikj}, then although the charge density \rho_j = |\psi_j|^2 oscillates with j, the current does not: the cross terms between Ae^{ikl} and Be^{-ikl} cancel, so that \mathrm{Im}(\psi_l^*\psi_{l+1}) = |A|^2\sin k - |B|^2\sin k at every site l. Initial states are usually assumed to be stationary. However, if a non-stationary initial state is prepared in a closed (isolated) system, such as a finite 1D chain of N atoms, the charge density is time-dependent (oscillatory), and a current can therefore be defined.
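The site-independence of the current for a superposition ψj = Ae^{ikj} + Be^{−ikj}, noted above, can be checked directly; the sketch below (arbitrary illustrative amplitudes) evaluates the bond current of equation 46 along a chain and compares it with (2γ/ℏ)(|A|² − |B|²) sin k.

    import numpy as np

    # Minimal sketch: bond current of psi_j = A e^{ikj} + B e^{-ikj}.
    gamma, hbar, k = 1.0, 1.0, 0.7
    A, B = 0.8, 0.3 * np.exp(0.4j)            # illustrative amplitudes
    j = np.arange(50)
    psi = A * np.exp(1j * k * j) + B * np.exp(-1j * k * j)
    I_bond = (2 * gamma / hbar) * np.imag(np.conj(psi[:-1]) * psi[1:])
    I_expected = (2 * gamma / hbar) * (abs(A)**2 - abs(B)**2) * np.sin(k)
    print(np.allclose(I_bond, I_expected))    # True: same current on every bond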
As an example, for a system of two atoms coupled to each other by -\gamma, the Hamiltonian reads

H = \begin{pmatrix} 0 & -\gamma \\ -\gamma & 0 \end{pmatrix}

The basis states \psi_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} and \psi_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} are non-stationary states, and a state prepared initially as \psi_1 evolves as \psi(t) = \begin{pmatrix} \cos(\gamma t/\hbar) \\ i\sin(\gamma t/\hbar) \end{pmatrix}, which is not stationary. For such a closed system, the current can be obtained from equations 44 and 45.

D. Pariser-Parr-Pople (PPP) Hamiltonian

Equation 17 describes a non-interacting Hamiltonian, which can also be written in the form

H = \sum_i \varepsilon_i n_i + \sum_{i,j,s}\gamma_{ij}\,c_{i,s}^\dagger c_{j,s} \qquad (50)

where i and j run over the orbitals centred on each site. For the orbital centred on site i and with spin s, c_{i,s}^\dagger is the electron creation operator, which inserts an electron into state i, and c_{j,s} is the electron annihilation operator, which takes an electron out of state j [12-14]. n_{i,s} = c_{i,s}^\dagger c_{i,s} is the electron number operator, with n_i = \sum_s n_{i,s}; \varepsilon_i is the energy of the orbital relative to the chiral symmetry point and \gamma_{ij} are the hopping integrals. Electron-electron interactions are missing from the non-interacting Hamiltonian (equation 50). In order to take the Coulomb electron-electron interaction into account, the Pariser-Parr-Pople (PPP) model can be used to write the interacting tight-binding Hamiltonian as

H_{int} = H + \sum_i U_{ii}\Big(n_{i\uparrow} - \frac{1}{2}\Big)\Big(n_{i\downarrow} - \frac{1}{2}\Big) + \frac{1}{2}\sum_{i,j\neq i} U_{ij}(n_i - 1)(n_j - 1) \qquad (51)

where U_{ii} and U_{ij} are the on-site and long-range Coulomb interactions, respectively, given by the Ohno parametrization: U_{ii} = U_0 and, for i \neq j,

U_{ij} = U_0\Bigg[1 + \bigg(\frac{U_0}{e^2/4\pi\epsilon_0 d_{ij}}\bigg)^2\Bigg]^{-1/2}

where d_{ij} is the distance between sites i and j and U_0 is the interaction amplitude; e.g. U_0 is equal to 11.26 eV for gold and carbon, and 9.95 eV for sulfur.

III. Nanoscale transport

Nanoscale transport can be described in three regimes:

(1) The self-consistent field (SCF) regime, in which the thermal broadening k_BT and the coupling \Gamma to the electrodes are comparable to the Coulomb energy U_0. The SCF method (single-electron picture), implemented within the non-equilibrium Green's function (NEGF) formalism, can be used to describe transport in this regime, as discussed in sections III-A to III-D.

Fig. 6. Transport regimes. Transmission coefficient T of electrons with energy E traversing from one electrode to the other through a scattering region. Transport in a molecular junction can be either in the tunnelling (off-resonance) regime, where electrons tunnel through the molecule, usually modelled with NEGF, or on resonance, where electrons transmit at a high rate through an energy level, modelled using the master equation. The intermediate (cross-over) regime between on and off resonance is difficult to treat with either NEGF or the master equation.

In molecular junctions smaller than about 3-4 nm, transport has been shown to remain elastic and phase-coherent at room temperature. Therefore, using SCF models to describe the properties of these molecular junctions is well accepted in the mesoscopic community. Based on a single-electron picture, the NEGF method coupled to the SCF Hamiltonian describes the properties of the system both on and off resonance (figure 6). The good agreement between these models and many room-temperature experiments suggests the applicability of this method. A simplified Breit-Wigner formula (section III-A.5), derived from this method, can also be used to model on-resonance transport through a device, provided the level spacing is larger than the resonance width.
However, in those cases where the Coulomb energy makes a larger contribution, this method cannot describe the on-resonance properties of the system.

(2) The Coulomb blockade (CB) regime, in which the Coulomb energy U_0 is much higher than both the thermal broadening k_BT and the coupling \Gamma. Here the SCF method is not adequate, and a multi-electron master equation should be used to describe the properties of the system (figure 6), as discussed in section III-E. This is usually needed to model the properties of molecular junctions at low temperature, where an electrostatic gate voltage is applied through a back gate.

(3) The intermediate regime (figure 6), in which the Coulomb energy U_0 is comparable to the larger of the thermal broadening k_BT and the coupling \Gamma. There is no simple approach to model this regime: neither the SCF method nor the master equation describes transport well, because the SCF method does not do justice to the charging, while the master equation does not do justice to the broadening.

A. Transport through an arbitrary scattering region

Consider the nanoscale junction of figure 7, where an arbitrary scattering region with Hamiltonian H is connected to two single-channel electrodes. The on-site energies and couplings in the left (right) lead L (R) are \varepsilon_L (\varepsilon_R) and -\gamma_L (-\gamma_R), respectively. The leads are connected to sites a and b of the scattering region by the couplings -\alpha_L and -\alpha_R. The aim is to find the transmission t and reflection r amplitudes for a Bloch wave, normalized by its current flux, e^{ik_Lj}/\sqrt{v_L}, travelling from left to right (figure 7).

Fig. 7. Transport through an arbitrary scattering region with Hamiltonian H connected to two single-channel electrodes.

If the wave functions in the left lead, the right lead and the scattering region are \psi_j = e^{ik_Lj}/\sqrt{v_L} + re^{-ik_Lj}/\sqrt{v_L}, \phi_j = te^{ik_Rj}/\sqrt{v_R} and f_j, respectively, the Schrödinger equation in the left and right leads, in the scattering region and at the connection points reads

\varepsilon_L\psi_j - \gamma_L\psi_{j-1} - \gamma_L\psi_{j+1} = E\psi_j \quad (j < 0_L) \qquad (54)

\varepsilon_L\psi_{0_L} - \gamma_L\psi_{-1} - \alpha_L f_a = E\psi_{0_L} \quad (j = 0_L)

\sum_i H_{ji} f_i - \alpha_L\psi_{0_L}\delta_{ja} - \alpha_R\phi_{0_R}\delta_{jb} = Ef_j \quad (a \le j \le b)

\varepsilon_R\phi_{0_R} - \alpha_R f_b - \gamma_R\phi_1 = E\phi_{0_R} \quad (j = 0_R)

\varepsilon_R\phi_j - \gamma_R\phi_{j-1} - \gamma_R\phi_{j+1} = E\phi_j \quad (j > 0_R) \qquad (58)

From equations 54 and 58, the E-k relations (band structures) in the left and right leads are obtained as E = \varepsilon_L - 2\gamma_L\cos(k_L) for j \le 0_L and E = \varepsilon_R - 2\gamma_R\cos(k_R) for j \ge 0_R.

The equation for the scattering region can be rewritten as |f\rangle = g|s\rangle, where g = (E - H)^{-1} is the Green's function and |s\rangle, called the source, is a zero vector with non-zero elements at the connection points only (sites j = a and j = b). For the junction of figure 7, |f\rangle has only two relevant elements due to the source:

\begin{pmatrix} f_a \\ f_b \end{pmatrix} = \begin{pmatrix} g_{aa} & g_{ab} \\ g_{ba} & g_{bb} \end{pmatrix}\begin{pmatrix} s_a \\ s_b \end{pmatrix} \qquad (60)

where s_a = -\alpha_L\psi_{0_L} and s_b = -\alpha_R\phi_{0_R}. Furthermore, from equations 55 and 57, the recurrence relations imply that

-\alpha_L f_a = -\gamma_L\psi_1, \qquad -\alpha_R f_b = -\gamma_R\phi_{-1} \qquad (61)

with \psi_1 = \frac{1}{\sqrt{v_L}}\big(e^{ik_L} - e^{-ik_L}\big) + \psi_{0_L}e^{-ik_L} and \phi_{-1} = \phi_{0_R}e^{-ik_R}. Hence, substituting \psi_1 and \phi_{-1} into equation 61,

\begin{pmatrix} f_a \\ f_b \end{pmatrix} = \begin{pmatrix} -\frac{\gamma_L}{\alpha_L^2}e^{-ik_L} & 0 \\ 0 & -\frac{\gamma_R}{\alpha_R^2}e^{-ik_R} \end{pmatrix}\begin{pmatrix} s_a \\ s_b \end{pmatrix} + \begin{pmatrix} \frac{\gamma_L}{\alpha_L\sqrt{v_L}}\,2i\sin(k_L) \\ 0 \end{pmatrix} \qquad (63)

From equations 60 and 63,

\begin{pmatrix} s_a \\ s_b \end{pmatrix} = \begin{pmatrix} g_{aa} + \frac{\gamma_L}{\alpha_L^2}e^{-ik_L} & g_{ab} \\ g_{ba} & g_{bb} + \frac{\gamma_R}{\alpha_R^2}e^{-ik_R} \end{pmatrix}^{-1}\begin{pmatrix} \frac{\gamma_L}{\alpha_L\sqrt{v_L}}\,2i\sin(k_L) \\ 0 \end{pmatrix}

Since s_a = -\alpha_L(1 + r)/\sqrt{v_L} and s_b = -\alpha_R t/\sqrt{v_R}, the transmission t and reflection r amplitudes can be obtained:

t = i\hbar\sqrt{v_L}\,\alpha_L\, g^L\,\Big(\frac{g_{ba}}{d}\Big)\, g^R\,\alpha_R\sqrt{v_R} \qquad (65)

where the denominator d is given below (equation 70),

g^{L,R} = -\frac{e^{ik_{L,R}}}{\gamma_{L,R}} \qquad (66)

is the surface Green's function of the left or right lead at site 0_L or 0_R (figure 7), and \Sigma_{L,R} = \alpha_{L,R}^2\, g^{L,R} are called the self-energies due to the left and right contacts.
The surface Green's function of a semi-infinite lead (equation 66) can be obtained from the Green's function of a doubly infinite crystalline lead. For example, for the left electrode of figure 7, the Green's function of a doubly infinite crystalline chain is

g_{jl}^L = \frac{e^{ik_L|j-l|}}{i\hbar v_L} \qquad (68)

To calculate the Green's function at site j = 0_L due to a source at site l = 0_L (the surface Green's function), equation 68 should vanish at site a (figure 7). This can be achieved by adding an appropriate wave function to equation 68:

g_{jl}^L = \frac{e^{ik_L|j-l|}}{i\hbar v_L} - \frac{e^{-ik_L(j - 2a + l)}}{i\hbar v_L} \qquad (69)

Hence, the Green's function at site j = 0_L due to a source at site l = 0_L is g^{L,R} = -e^{ik_{L,R}}/\gamma_{L,R} (equation 66). Assuming two identical leads (k_L = k_R = k and \gamma_L = \gamma_R = \gamma), equation 65 can be written as

t = 2i\sin(k)\,e^{2ik}\,\frac{\alpha_L\alpha_R}{\gamma}\,\Big(\frac{g_{ba}}{d}\Big) \qquad (70)

where d = 1 + \Delta_1 + i\Delta_2, with \Delta_1 = A\cos(k) + B\cos(2k), \Delta_2 = A\sin(k) + B\sin(2k), A = (g_{aa}\alpha_L^2 + g_{bb}\alpha_R^2)/\gamma and B = \alpha_L^2\alpha_R^2(g_{aa}g_{bb} - g_{ab}g_{ba})/\gamma^2. From equation 70, the transmission amplitude at E = 0 (i.e. k = \pi/2) is

t = -2i\,\frac{\alpha_L\alpha_R}{\gamma}\,\Big(\frac{g_{ba}}{1 - B + iA}\Big) \qquad (71)

Finally, the transmission probability is T = tt^\dagger. More generally, the total transmission T and reflection R probabilities for multi-channel leads are obtained from

T = \sum_{ij} t_{ij}t_{ij}^* = \mathrm{Trace}(tt^\dagger) \qquad (72)

R = \sum_{ij} r_{ij}r_{ij}^* = \mathrm{Trace}(rr^\dagger) \qquad (73)

where t_{ij} (r_{ij}) is the transmission (reflection) amplitude describing scattering from the jth channel of the left lead to the ith channel of the right (same) lead. The scattering matrix S is defined by \psi_{OUT} = S\psi_{IN} and is written by combining the reflection and transmission amplitudes as

S = \begin{pmatrix} r & t' \\ t & r' \end{pmatrix}

The S matrix is a central object of scattering theory, and charge conservation implies that it is unitary: SS^\dagger = I.

1) Transmission and reflection amplitudes from the total Green's function

As demonstrated by equation 65, if the total Green's function of a junction consisting of two or more electrodes connected to an arbitrary scattering region is known, the transmission amplitude t (and the transmission probability T) for electrons traversing from one lead to the other can be calculated. The main task now is to find a method to calculate the Green's function of the whole system, including the crystalline leads (equation 68) connected to an arbitrary scattering region. Consider the nanoscale junction shown in figure 7. The wave functions |\psi\rangle and |\phi\rangle can be multiplied by an arbitrary amplitude A = \frac{e^{-ik_Ll}}{i\hbar\sqrt{v_L}} without affecting the transport; note that A does not depend on j. Using this amplitude, the wave function |\psi\rangle reads

\psi_j = \frac{e^{ik_L(j-l)}}{i\hbar v_L} + r\,\frac{e^{-ik_L(j+l)}}{i\hbar v_L}

This looks like the Green's function (equation 68) for j \ge l. If we can show that, for j \le l,

\psi_j = \frac{e^{ik_L(l-j)}}{i\hbar v_L} + r\,\frac{e^{-ik_L(j+l)}}{i\hbar v_L} \qquad (77)

then \psi_j is the Green's function of the whole system at site j due to a source at site l, and the transmission coefficient from any point to any other point can be obtained (equation 65). To demonstrate that equation 77 is valid for j \le l, consider

g_{jl} = \begin{cases} \psi_j & j \ge l \\ \theta_j & j \le l \end{cases}, \qquad \theta_j = \frac{e^{ik_L(l-j)}}{i\hbar v_L} + r\,\frac{e^{-ik_L(j+l)}}{i\hbar v_L}

We shall show that g_{jl} satisfies the Green's function equation (E - H)g_{jl} = \delta_{jl}.
We note that g_{jl} can be written as

g_{jl} = \begin{cases} \psi_j & j \ge l \\ \theta_j = \psi_j + y_j & j \le l \end{cases}, \qquad y_j = \frac{e^{ik_L(l-j)}}{i\hbar v_L} - \frac{e^{ik_L(j-l)}}{i\hbar v_L}

Since any wave function can be added to or subtracted from a Green's function and the result is still a Green's function, by subtracting |\psi\rangle from g, the new Green's function \hat{g} is obtained:

\hat{g}_j = \begin{cases} 0 & j \ge l \\ \frac{1}{i\hbar v_L}\big(e^{ik_L(l-j)} - e^{ik_L(j-l)}\big) & j \le l \end{cases} \qquad (82)

Substituting this into the Green's function equation (E - H)g_{jl} = \delta_{jl},

(E - \varepsilon_0)\hat{g}_{l,l} - \gamma\hat{g}_{l+1,l} - \gamma\hat{g}_{l-1,l} = \delta_{l,l}

The first and second terms are zero from equation 82, and for j = l - 1 the third term gives -\gamma\hat{g}_{l-1,l} = 1. Therefore

\psi_j = \frac{e^{ik_L|j-l|}}{i\hbar v_L} + r\,\frac{e^{-ik_L(j+l)}}{i\hbar v_L}

is the Green's function of the whole system and describes the wave function at any site j due to a source at site l. Similarly, the wave function

\phi_j = \frac{t\,e^{ik_R(j-l)}}{i\hbar\sqrt{v_R}\sqrt{v_L}}

is the Green's function for a source in the left lead when j \ge l. Therefore, from the Green's function of the whole system,

G_{jl} = \frac{t\,e^{ik_R(j-l)}}{i\hbar\sqrt{v_R}\sqrt{v_L}} \;\text{(right lead)}, \qquad G_{jl} = \frac{e^{ik_L|j-l|}}{i\hbar v_L} + r\,\frac{e^{-ik_L(j+l)}}{i\hbar v_L} \;\text{(left lead)}

the transmission amplitude t at j = 1 due to a source at l = 0 and the reflection amplitude r at j = 0 due to a source at l = 0 can be calculated:

t = i\hbar\sqrt{v_R}\sqrt{v_L}\,G_{10}\,e^{-ik_R}, \qquad r = i\hbar v_L G_{00} - 1 \qquad (87)

The transmission T and reflection R coefficients can then be obtained from equations 72 and 73.

2) Scattering theory and Green's function

The Green's function method has been widely used in the literature to model electron and phonon transport in nanoscale and molecular-scale devices, and has been successful in predicting and explaining different physical properties. A Green's function is the wave function at a given point of a system due to an impulse source at another point; in other words, it is the impulse response of the Schrödinger equation. Therefore, as shown in the previous section, the Green's function naturally carries all information about the wave-function evolution from one point of a system to another [12-14, 28, 29]. The Green's function G of a system with N sites described by a Hamiltonian H is defined as

G = (EI - H)^{-1} \qquad (88)

where I is the identity matrix. Using the completeness condition

\sum_n |\psi_n\rangle\langle\psi_n| = 1

the Green's function can be written in terms of the eigenstates \psi_n and eigenenergies \lambda_n of H:

G = \sum_{n=1}^{N}\frac{|\psi_n\rangle\langle\psi_n|}{E - \lambda_n} \qquad (90)

and therefore the Green's function element between points a and b is

G(a,b) = \sum_{n=1}^{N}\frac{\psi_n(a)\psi_n^*(b)}{E - \lambda_n} \qquad (91)

Figure 8 shows how the Green's function can be used to calculate the transmission and reflection amplitudes in the simplest one-dimensional system, where two semi-infinite crystalline 1D leads are connected to each other through a coupling \beta (representing the scattering region). The main question is: what are the amplitudes of the transmitted and reflected waves? There are two main steps: first, calculate the total Green's function matrix elements between sites 0 and 1 (G_{10}) or 0 and 0 (G_{00}); and secondly, project these onto the wavefunction to calculate the transmission t and reflection r amplitudes (equation 87). For this example, the transmission and reflection probabilities are obtained from T = tt^\dagger and R = rr^\dagger. Dyson's equation describes the exact Green's function of a system, G = (g^{-1} - h)^{-1}, in terms of the Green's function g of the non-interacting parts and the Hamiltonian h that connects them.
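Before applying Dyson's equation to figure 8, its content can be verified numerically; in the minimal sketch below (toy energies and coupling, all illustrative), the Green's function built from the decoupled parts via G = (g^{-1} − h)^{-1} coincides with a direct inversion of the coupled Hamiltonian.

    import numpy as np

    # Minimal sketch: check Dyson's equation G = (g^{-1} - h)^{-1}.
    E = 0.3 + 1e-9j                       # energy with a small imaginary part
    e1, e2, beta = -0.5, 0.4, 0.2         # illustrative levels and coupling
    H0 = np.diag([e1, e2])                # decoupled Hamiltonian
    h = np.array([[0, beta],
                  [beta, 0]])             # coupling between the two parts
    g = np.linalg.inv(E * np.eye(2) - H0)
    G_dyson = np.linalg.inv(np.linalg.inv(g) - h)
    G_direct = np.linalg.inv(E * np.eye(2) - (H0 + h))
    print(np.allclose(G_dyson, G_direct))  # True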
As shown in figure 8, using the surface Green's functions of the two decoupled semi-infinite leads,

g = \begin{pmatrix} g_{00} & 0 \\ 0 & g_{11} \end{pmatrix}

and the Hamiltonian h that couples them, the total Green's function is obtained from Dyson's equation (first step). The second step is to use equation 87 to calculate t and r from the total Green's function G. It is worth mentioning that Dyson's equation can take different equivalent forms, such as G = (g^{-1} - h)^{-1}, G = g + ghG, or the T-matrix form G = g + gTg with T = h(I - gh)^{-1}.

Fig. 8. Transport through a scatterer connected to two 1D leads. For a Bloch wave e^{ikj}/\sqrt{v_k} incident on a barrier, the wave is transmitted with amplitude t (te^{ikj}/\sqrt{v_k}) and reflected with amplitude r (re^{-ikj}/\sqrt{v_k}). Using the surface Green's functions of the leads (g_{00} and g_{11}), the Hamiltonian h of the scattering region bridging the two leads, and Dyson's equation, the total Green's function G is calculated; it is then used to obtain the transmission t and reflection r amplitudes.

3) Green's function of an N-site finite chain and ring

Similar to the Green's function of a semi-infinite chain (equation 69), from the Green's function of a doubly infinite crystalline chain (equation 68), the Green's functions of the N-site finite chain and ring shown in figure 4 can be obtained [9] using the appropriate boundary conditions:

g_{jl}^{chain} = \frac{\cos\!\big(k(N+1-|j-l|)\big) - \cos\!\big(k(N+1-j-l)\big)}{2\gamma\sin(k)\,\sin\!\big(k(N+1)\big)}

g_{jl}^{ring} = \frac{\cos\!\big(k(N/2 - |j-l|)\big)}{2\gamma\sin(k)\,\sin(kN/2)} \qquad (93)

These are useful equations to remember, because they help in understanding quantum interference effects in simple molecules. As an example, consider a ring of 6 sites (N = 6), e.g. benzene, with on-site energies \varepsilon_0 = 0 and hopping integrals \gamma = -1. In the middle of the energy band, e.g. E = 0, the dispersion relation of a 1D chain (equation 27) gives the wave vector k = \pi/2. The Green's function of the benzene ring between any sites j and l in the middle of the band is then obtained from equation 93 as

g_{jl}^{benzene} = \frac{\cos\!\big(\frac{\pi}{2}(3 - |j-l|)\big)}{2} \qquad (94)

For any odd-to-odd (oo) or even-to-even (ee) connectivity (figure 4), (3 - |j-l|) is an odd number and therefore g_{oo\;or\;ee}^{benzene} = 0. This is called destructive quantum interference, because the transmission between oo or ee sites is zero. In contrast, for any odd-to-even (oe) connectivity, (3 - |j-l|) is an even number and therefore g_{oe}^{benzene} \neq 0, which is called constructive quantum interference; it implies a non-zero transmission between any oe sites. The Green's function of the 6-site ring with on-site energies \varepsilon_0 = 0 and hopping integrals \gamma can also be obtained by substituting its wavefunction (equation 35) into equation 91. The eigenenergies of this system are E_n = 2\gamma\cos(2n\pi/N) = [-2\gamma, -\gamma, -\gamma, +\gamma, +\gamma, +2\gamma], so the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) levels are degenerate. At the middle of the HOMO-LUMO gap, E = E_{HL} = (E_H + E_L)/2 = 0, where E_H (E_L) is the energy of the HOMO (LUMO) level, the Green's function is obtained from equation 91 as

g_{jl}^{ring} = \frac{1}{\sqrt{N}}\sum_{n=-3}^{2}\frac{e^{i\frac{2n\pi}{6}(j-l)}}{E_n}

It is convenient to introduce the notation

g_{jl}^{ring} = \frac{1}{\sqrt{N}}\Big(g_1^{ring}/\gamma + g_2^{ring}/\gamma + g_3^{ring}/2\gamma\Big)

g_x^{ring} = A\big(e^{i(x\pi/3)(j-l)} - e^{i(\pi + x\pi/3)(j-l)}\big) \qquad (97)

where A = [-1, 1, 1] for x = [1, 2, 3], respectively.
For any oo or ee connectivity, j - l is an even number, which produces a phase shift of 2\pi in the second term of equation 97 and therefore does not change its sign. The two terms of equation 97 then cancel, so g_1^{ring} = g_2^{ring} = g_3^{ring} = 0 and hence g_{oo\;or\;ee}^{ring} = 0 (destructive quantum interference). This is a remarkable behaviour, since the contributions of all pairs of states, HOMO-LUMO, HOMO-1-LUMO+1 and HOMO-2-LUMO+2, to the Green's function vanish. In contrast, for any oe connectivity, j - l is an odd number, which produces a phase shift of \pi in the second term of equation 97 and changes its sign; the two terms then add with equal magnitude and sign, leading to non-zero values g_{oe}^{ring} \neq 0 (constructive quantum interference).

Consider a molecule which possesses only a HOMO \psi_H(l) of energy E_H and a LUMO \psi_L(l) of energy E_L, whose Green's function from equation 91 is given by

g_{lm}(E) = \frac{\psi_H(l)\psi_H^*(m)}{E - E_H} + \frac{\psi_L(l)\psi_L^*(m)}{E - E_L} \qquad (98)

In this equation, \psi_H(l) and \psi_L(l) are the amplitudes of the HOMO and LUMO orbitals on connection site l, while \psi_H(m) and \psi_L(m) are the corresponding amplitudes on connection site m. Since the core transmission coefficient for connectivity lm is given by \tau_{lm} = (g_{lm}(E))^2, a destructive interference feature occurs at an energy E given by g_{lm}(E) = 0, or equivalently

\frac{\psi_H(l)\psi_H^*(m)}{E - E_H} = -\frac{\psi_L(l)\psi_L^*(m)}{E - E_L} \qquad (99)

If the energy E at which the destructive interference feature occurs lies within the HOMO-LUMO gap, then E - E_H > 0 and E_L - E > 0. This can only occur if the left-hand side of equation 99 is positive, and therefore the condition for a destructive interference feature within the HOMO-LUMO gap is that the orbital products have the same sign. Conversely, if they have opposite signs, there is no destructive interference dip within the HOMO-LUMO gap. In the most symmetric case, where \psi_H(l)\psi_H^*(m) = \psi_L(l)\psi_L^*(m), this yields E = (E_H + E_L)/2, so the interference dip occurs at the middle of the HOMO-LUMO gap. On the other hand, if |\psi_H(l)\psi_H^*(m)| \ll |\psi_L(l)\psi_L^*(m)|, then E \approx E_H and the dip is close to the HOMO. In this case, for a real molecule with many orbitals, the approximation of retaining only the HOMO and LUMO breaks down, and the effect of the HOMO-1 should also be considered. It is apparent from equation 98 that manipulating anti-resonances (e.g. those due to destructive quantum interference) is easier than manipulating resonances: to move a resonance, the redox state of the molecule has to change, whereas small environmental effects, such as an inhomogeneous charge distribution or nearby ions, can significantly change the position of an anti-resonance.
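The interference conditions encoded in equation 94 are easy to verify numerically; the following sketch (with the illustrative choice ε0 = 0 and |γ| = 1) builds the 6-site ring Hamiltonian and inspects the Green's function elements at the band centre for meta, ortho and para connectivities.

    import numpy as np

    # Minimal sketch: Green's function of a 6-site (benzene-like) ring at E = 0.
    N, gamma = 6, 1.0
    H = np.zeros((N, N))
    for j in range(N):
        H[j, (j + 1) % N] = H[(j + 1) % N, j] = -gamma
    E = 0.0 + 1e-9j                       # small imaginary part for stability
    G = np.linalg.inv(E * np.eye(N) - H)
    print(abs(G[0, 2]))                   # meta (odd-odd): ~0, destructive QI
    print(abs(G[0, 1]), abs(G[0, 3]))     # ortho and para: non-zero, constructive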
Graphical illustration of the Green's function: As discussed in section III-A.1, the transmission coefficient T is proportional to the modulus squared of the Green's function, and as shown above, the Green's function can be obtained from the wave functions (molecular orbitals) using equation 90. To predict quantum interference from molecular orbitals (e.g. figure 9a,b), a combination of two (HOMO and LUMO) or more orbitals needs to be considered (section III-A.3). All of these contributions are naturally included in the Green's function, and therefore, by visualising the Green's function, all information about quantum interference features is directly accessible. The graphical illustration of the Green's function also provides a more intuitive picture: it is an intermediate step between using molecular orbitals to predict transport and carrying out a full transmission calculation. It is useful in particular because it is not trivial to predict the differences between non-zero quantum interference effects from molecular orbitals alone, whereas the graphical illustration of the Green's function provides this information (i.e. the differences in the radii of the circles in figure 9c,d).

Fig. 9. Green's function (GF) illustration and its relation to molecular orbitals (MOs, wave functions) and the transmission coefficient T. Hückel MOs of (a) benzene and (b) pyridine; all orbitals to the left (right) of the dashed line in a and b are occupied (unoccupied). The HOMO and LUMO are degenerate in benzene; this degeneracy is lifted in pyridine by the perturbation due to nitrogen. The colour and radius of the circles show the sign and amplitude of the MOs at each site, respectively. (c,d) illustrate the GF, for the injection points shown by green arrows, at two energies: E = 0 (e.g. mid-gap) and E = 0.5 (e.g. close to the LUMO resonance), obtained using equation 90 and the MOs in a and b. In all illustrations, the radius and colour of the circles represent the amplitude and sign of the GF matrix elements between the injection point (green arrow) and all collection points; blue (red) represents positive (negative) numbers. If an electron with energy E = 0 is injected from the site indicated by the green arrow in c, the GF matrix elements are zero for meta collection points but non-zero for ortho and para collection points. Since the transmission T is proportional to the modulus squared of the GF, zero transmission is expected for the meta connectivity, whereas T is non-zero for the para connectivity, as shown in e; this is illustrated by the GF plots in c and in the inset of e. From the graphical visualisation of the GF, the differences between transmission functions are predictable. (d) illustrates the GF for pyridine when electrons are injected from the site indicated by the green arrow in d and collected at any other point. (f) shows specific injection and collection points at E = 0 and E = 0.5. In both cases, the radius of the GF circles increases with energy, in agreement with the transmission curve.

Figure 9 shows two examples, benzene and pyridine. The six molecular orbitals and corresponding eigenenergies due to the six p_z orbitals are shown in figure 9a,b. The HOMO and LUMO states are degenerate in benzene; this degeneracy is lifted in pyridine due to the presence of the heteroatom (nitrogen), and the molecular orbitals are affected accordingly. Since the Green's function is the wave function due to a given source, it can be visualised just like the molecular orbitals, for a given electron injection point and energy. Examples of such visualisations are shown in figure 9c,d. As with wave-function visualisations, the radius and colour of the circles represent the amplitude and sign of the Green's function matrix elements, respectively, due to a source at the site indicated by the green arrows. Figures 9e,f show the transmission coefficients of the para and meta connectivities of benzene, and of the para connectivity of pyridine, connected to two 1D leads through a weak coupling. The corresponding Green's function illustrations are provided in the insets of figures 9e,f at two different energies. Clearly, the main features of the transmission are predictable from the sizes of the circles.
Therefore, one can use the Green's function illustration of a molecule to predict transport intuitively.

4) Density of states from the Green's function

The density of states of a system with eigenvalues \lambda is obtained from equation 28. Alternatively, once the Green's function G has been calculated from equation 90, the density of states can be extracted from it. We note that a delta function \delta(x - x_0) can be defined as the limit of a function which exhibits a sharp peak about x_0 and whose integral over all space is 1, for instance

\delta(x - x_0) = \frac{1}{\pi}\lim_{\eta\to 0}\Big(\frac{\eta}{(x - x_0)^2 + \eta^2}\Big)

To prevent the Green's function from diverging at E = \lambda_n, equation 90 can be written as

G = \sum_n \frac{|\psi_n\rangle\langle\psi_n|}{E - \lambda_n + i\eta} = G_r + iG_i

where \eta is a small number and

G_r = \sum_n |\psi_n\rangle\langle\psi_n|\,\frac{E - \lambda_n}{(E - \lambda_n)^2 + \eta^2}

G_i = -\sum_n |\psi_n\rangle\langle\psi_n|\,\frac{\eta}{(E - \lambda_n)^2 + \eta^2}

Since the eigenstates are orthonormal, the trace of the Green's function in the limit \eta \to 0 is \mathrm{trace}(G_i) = -\sum_n \frac{\eta}{(E - \lambda_n)^2 + \eta^2} = -\pi\sum_n\delta(E - \lambda_n). Therefore the DOS is obtained as

D(E) = -\frac{1}{\pi}\,\mathrm{trace}(G_i)

5) Breit-Wigner formula (BWF)

In the SCF regime, provided the coupling to the electrodes is weak enough, the level broadening of the resonances due to the electrodes is small enough and the level spacing (the difference between the eigenenergies of the quantum system) is large enough, the Green's function g_{ba} of equation 65 for a system described by a Hamiltonian H (figure 7), at energies close to an eigenvalue \lambda_m of H, is approximately

g_{ba} \approx \frac{y_b y_a}{E - \lambda_m}

where y_{a,b} = f_{a,b}^m and |f^m\rangle is the corresponding eigenvector of H. This is a good approximation for E close to \lambda_m, provided the above conditions are satisfied, because in the Green's function g = \sum_n |f^n\rangle\langle f^n|/(E - \lambda_n) the terms with n \neq m are then much smaller than the n = m term. This yields the on-resonance transmission T for electrons with energy E passing through a molecule, described by a Lorentzian-like transmission function called the BWF:

T(E) = \frac{4\Gamma_L\Gamma_R}{(E - \varepsilon_n)^2 + (\Gamma_L + \Gamma_R)^2} \qquad (106)

where \varepsilon_n = \lambda_m - \sigma_L - \sigma_R, and \sigma_{L,R} = \frac{\alpha_{L,R}^2}{\gamma_{L,R}}\,|y_{a,b}|^2\cos(k_{L,R}) are the real parts of the self-energies. \Gamma_{L,R} = \frac{\alpha_{L,R}^2}{\gamma_{L,R}}\,|y_{a,b}|^2\sin(k_{L,R}) are the imaginary parts of the self-energies (\Sigma = \sigma + i\Gamma), which describe the broadening due to the coupling of a molecular orbital to the electrodes. \lambda_m is the eigenenergy of the molecular orbital, shifted slightly by the amount \sigma = \sigma_L + \sigma_R due to the coupling of the orbital to the electrodes. In this expression, |y_a|^2 and |y_b|^2 are the local DOS of the scattering region at the contact points. The formula shows that when the electron resonates with the molecular orbital (i.e. when E = \varepsilon_n), the electron transmission is a maximum. It is valid when the energy E of the electron is close to an eigenenergy \lambda_m of the isolated molecule and the level spacing of the isolated molecule is larger than \Gamma_L + \Gamma_R. If \Gamma_L = \Gamma_R (a symmetric molecule attached symmetrically to identical leads), then T(E) = 1 on resonance (E = \varepsilon_n).

If a bound state (e.g. a pendant group of energy \varepsilon_p) is coupled (by a coupling integral \alpha) to a continuum of states, Fano resonances can occur [30, 31]. A Fano resonance consists of an anti-resonance followed by a resonance with an asymmetric line profile in between; it originates from the close coexistence of resonant transmission and resonant reflection. It can be modelled by substituting \varepsilon_n \to \lambda_m - \sigma_L - \sigma_R + \alpha^2/(E - \varepsilon_p) in the BWF. At E = \varepsilon_p, the electron transmission is destroyed (the electron anti-resonates with the pendant orbital), and at E = \varepsilon_n the electron transmission is resonant. The level spacing between this resonance and the anti-resonance is proportional to \alpha.
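The Fano line shape described above follows directly from the pendant-modified Breit-Wigner formula; the sketch below (all parameter values illustrative) evaluates T(E) with εn → εn + α²/(E − εp) and shows the anti-resonance at εp next to a restored resonance.

    import numpy as np

    # Minimal sketch: BWF with a pendant level -> Fano anti-resonance.
    eps_n, eps_p, alpha2 = 0.0, 0.05, 0.01   # level, pendant level, alpha^2
    Gl = Gr = 0.02                            # broadenings Gamma_L = Gamma_R
    E = np.linspace(-0.3, 0.3, 2000)
    eps_eff = eps_n + alpha2 / (E - eps_p)    # pendant-modified level
    T = 4 * Gl * Gr / ((E - eps_eff)**2 + (Gl + Gr)**2)
    print(T[np.argmin(abs(E - eps_p))])       # ~0: anti-resonance at eps_p
    print(T.max())                            # ~1: full resonance nearby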
Two-level BWF: As discussed above, the BWF can be used in the weak-coupling regime if the level broadening is smaller than the level spacing between resonances. In the case of two nearly degenerate states, these resonances can be so close that their level spacing is smaller than the broadening, so it is useful to derive a new form of the BWF for a two-level system. For a two-level system, the Hamiltonian H in figure 7 is given by

H = [ε1, V_d; V_d, ε2]

where ε1 and ε2 are the energy levels, coupled to each other by V_d. If ε1 (ε2) is weakly bonded to the left (right) lead, the transmission coefficient T(E) can be obtained from equation 65 as [32, 33]:

T(E) = 4Γ_L Γ_R V_d² / |(E - ε1 - σ_L + iΓ_L)(E - ε2 - σ_R + iΓ_R) - V_d²|²

where σ_{L,R} (Γ_{L,R}) are the real (imaginary) parts of the self-energies due to the left (L) and right (R) leads.

Wigner delay time: The Wigner delay time is a measure of the time spent by an electron passing through the scattering region of an open system. If the transmission amplitude of a given system, t = |t|e^{iθ}, is characterised by its magnitude |t| and phase θ, the Wigner delay time describes the phase difference between a scattered wave and a freely propagating one: τ_w = ħ dθ/dE.

6) Open and closed channels in leads

To calculate the number of open conduction channels in a 3D arbitrary crystalline lead, it is useful to first consider a simple 2D square lattice with one orbital per site, where each site is connected to its first nearest-neighbour sites, as shown in figure 5. For simplicity, consider a system that is finite in the y direction, with N_y sites, and infinite in the x direction. The normalised wave functions and the band structure of such a lattice are

ψ_{k_x}^m = √(2/(N_y+1)) sin(mπl/(N_y+1)) e^{ik_x j}  and  E(k_x) = ε0 - 2γcos(mπ/(N_y+1)) + 2γcos(k_x),

respectively. As in the one-dimensional case (section II-C), a current is associated with each ψ_{k_x}^m, since every mini-band corresponds to a Bloch state. The ψ_{k_x}^m are called channels. If we assume that the electrons injected from each lead into any individual channel are uncorrelated, the conductance at a given Fermi energy E_F is given by G(E_F) = (2e²/h) M(E_F), where M(E_F) is the number of open conduction channels at E_F. In a one-dimensional lead with one orbital per site, there is either one open conduction channel or none. For the above quasi-one-dimensional system, where x is the transport direction and y the transverse direction with N_y atomic sites, the Green's function between sites (l,j) and (l',j') can be written as

g_{lj,l'j'} = Σ_{m=1}^{N_y} [2/(N_y+1)] sin(mπl/(N_y+1)) sin(mπl'/(N_y+1)) e^{ik_x^m|j-j'|} / (iħ v_x^m)

where k_x^m is the longitudinal momentum and v_x^m = (1/ħ) ∂E(k_x^m)/∂k_x^m is the group velocity of channel m. Equation 109 can be rewritten as

g_{lj,l'j'} = Σ_{m=1}^{N_y} ϕ_l^m e^{ik_x^m|j-j'|} / (iħ v_x^m) ϕ_{l'}^{m*}

where ϕ_l^m = √(2/(N_y+1)) sin(mπl/(N_y+1)). Thus g_{lj,l'j'} consists of a sum over all allowed longitudinal modes e^{ik_x^m|j-j'|}, weighted by the corresponding transverse components ϕ_l^m. Note that for a given E, k_x^m can be either real or complex. If the k_x^m of an eigenstate has no imaginary part, Im(k_x^m) = 0, the state is defined to be open, or propagating, since a complex k_x^m occurs only if the wave is tunnelling, or decaying.
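A minimal sketch of the channel-counting argument for this quasi-one-dimensional lead is given below (ε0 = 0, γ = 1 and N_y = 10 are illustrative assumptions); a channel m is open at energy E when a real longitudinal momentum k_x solves its mini-band dispersion:

```python
import numpy as np

# Counting open channels M(E) for the quasi-1D square-lattice lead discussed above.
eps0, gamma, Ny = 0.0, 1.0, 10            # illustrative parameters

def open_channels(E):
    m = np.arange(1, Ny + 1)
    eps_m = eps0 - 2 * gamma * np.cos(m * np.pi / (Ny + 1))  # transverse offsets
    # channel m is open when a real kx solves E = eps_m + 2*gamma*cos(kx)
    return int(np.sum(np.abs((E - eps_m) / (2 * gamma)) <= 1))

for E in (-3.5, -2.0, 0.0, 2.0):
    print(E, open_channels(E))   # the count is largest near the band centre, cf. figure 5 d
```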
The sign of the imaginary part of k_x^m (of the group velocity) can be used to define the direction of a decaying (propagating) wave, as summarised in table I.

Table I. Four classes of possible scattering channels.

              left                          right
Decaying:     Im(k_x^m) > 0                 Im(k_x^m) < 0
Propagating:  Im(k_x^m) = 0, v_x^m < 0      Im(k_x^m) = 0, v_x^m > 0

Propagating (decaying) channels are conventionally called open (closed) channels. It is worth mentioning that the retarded Green's function of equation 110 is obtained by summing over all N_y scattering channels, some of which are open and some closed. Figures 5 d,h show two examples of the number of conduction channels, for the square and hexagonal lattices. The number of channels has a maximum in the middle of the band for the square lattice, whereas for the hexagonal lattice there are fewer open channels in the middle of the band (e.g. only two for graphene).

B. Generalized model to calculate transmission coefficient

In this section, an expression is obtained for the transmission coefficient T_{nn'} = |s_{n,n'}(E,H)|² between two scattering channels n, n' of an open vector space A, in contact with a closed subspace B. The result is very general and makes no assumptions about the presence or otherwise of resonances. More precisely, we describe a quantum structure connected to ideal, normal leads of constant cross-section, labelled L = 1, 2, …. Consider two vector spaces A and B, spanned by a countable set of basis functions. In what follows, the sub-space B represents the structure of interest and sub-space A the normal leads, as shown in figure 10.

Fig 10

Fig. 10. A sketch of a closed subspace B, in contact with subspace A through couplings W. XL denotes the surface of A connected to B. Subspace B includes some open subspaces connected to reservoirs, shown by dots.

The Hamiltonian is H = H_A + H_B + H_J, where H_J allows transitions between the subspaces. Since H_J can be written as

H_J = [0, W; W†, 0]

the Green's function G for the combined space A ⊕ B has the form

G = [G_AA, G_AB; G_BA, G_BB].

To derive the more general formula, in which degenerate states can simultaneously resonate, note that when H_J = 0, G reduces to the Green's function g of the decoupled system, where

g = [g_A, 0; 0, g_B]

and Dyson's equation, G = g(1 - H_J g)^{-1}, yields

G_AA = g_A (1 - W g_B W† g_A)^{-1},
G_AB = g_A (1 - W g_B W† g_A)^{-1} W g_B,
G_BA = g_B (1 - W† g_A W g_B)^{-1} W† g_A,
G_BB = g_B (1 - W† g_A W g_B)^{-1}.

Rewriting the above result for G_AA gives

G_AA = g_A + g_A W g_B (1 - W† g_A W g_B)^{-1} W† g_A,
G_AA = g_A + g_A W G_BB W† g_A,
G_BA = G_BB W† g_A.

These demonstrate that once G_BB is known, all other quantities are determined. To obtain an expression for the transmission coefficients, it is convenient to introduce a set of states {|n⟩} that span the subspace A and write g_A = Σ_{nm} |n⟩ g_{nm} ⟨m|. Since part of A consists of a number of ideal, straight, normal leads of constant cross-section, described by a real Hamiltonian, it is convenient to associate a sub-set of the states {|n⟩} with the open channels of these leads. For these states the notation |n⟩ = |n,x⟩ is introduced, where n is a discrete label identifying the lead, quasi-particle type, transverse kinetic energy and any other quantum numbers of an open channel, and x is a position coordinate parallel to the lead. With this notation,

g_A = Σ_{n,x,x'} |n,x⟩ g_n(x,x') ⟨n,x'| + Σ'_{n̄m̄} |n̄⟩ g_{n̄m̄} ⟨m̄|

where the prime indicates a sum over states |n̄⟩, |m̄⟩ orthogonal to the open channels (the closed channels), and

g_n(x,x') = [e^{ik_n|x-x'|} - e^{-ik_n(x+x'-2(x_L+a))}] / (iħ v_n)

is the Green's function of the semi-infinite lead between any two positions x and x' in the transport direction, terminated at x = x_L and vanishing at x = x_L + a [34]. k_n is the longitudinal wave vector of channel n.
If the lead belonging to channel n terminates at x = x_L, then on the surface of the lead the Green's function takes the form g_n(x_L,x_L) = g_n, where g_n = a_n + ib_n, with a_n real and b_n equal to π times the density of states per unit length of channel n. Moreover, if v_n is the group velocity of a wave packet travelling along channel n, then ħv_n = 2b_n/|g_n|². It is interesting to note that if x and x' are positions located between x_L and some point x_n, then

g_n(x,x_n) g_n*(x',x_n) = -(2/ħv_n) Im g_n(x,x') = -(2/ħv_n) Im g_n(x',x).

If x_n is some asymptotic position far from the end of the lead belonging to channel n and far from the scattering region (e.g. the contact) defined by H_J, then the transmission amplitude t and transmission coefficient T from channel n' to channel n (n ≠ n') are

t_{nn'} = iħ √(v_n v_{n'}) ⟨n,x_n|G_AA|n',x_{n'}⟩,
T_{nn'} = ħv_n ħv_{n'} |⟨n,x_n|G_AA|n',x_{n'}⟩|²

and since

⟨n,x_n|G_AA|n',x_{n'}⟩ = Σ_{x,x'} g_n(x_n,x) ⟨n,x|W G_BB W†|n',x'⟩ g_{n'}(x',x_{n'}),

one obtains

T_{nn'} = 4 Σ_{x,x̄,x',x̄'} [Im g_n(x̄,x)] ⟨n,x|W G_BB W†|n',x'⟩ ⟨n,x̄|W G_BB W†|n',x̄'⟩* [Im g_{n'}(x',x̄')].

Let us introduce the eigenstates of H_B, satisfying H_B|f_ν⟩ = ϵ_ν|f_ν⟩, and write

g_B = Σ_ν |f_ν⟩⟨f_ν| / (E - ϵ_ν).

From the expression for G_BB given in equation 114, this yields

G_BB = (g_B^{-1} - W† g_A W)^{-1} = Σ_{μ,ν} |f_μ⟩ (G_BB)_{μν} ⟨f_ν|,
(G_BB^{-1})_{μν} = (E - ϵ_ν) δ_{μν} - ⟨f_μ|W† g_A W|f_ν⟩.

Combining this with equation 119 yields

(G_BB^{-1})_{μν} = (E - ϵ_ν) δ_{μν} - Σ_{n,x,x'} ⟨f_μ|W†|n,x⟩ g_n(x,x') ⟨n,x'|W|f_ν⟩ - Σ'_{n̄m̄} ⟨f_μ|W†|n̄⟩ g_{n̄m̄} ⟨m̄|W|f_ν⟩.

In general, since the energy E lies in a region where the contribution to the density of states from g_{n̄m̄} is zero, Im g_{n̄m̄} = 0 and g_{n̄m̄} = g_{n̄m̄}*. For this reason it is convenient to introduce the notation

σ'_{μν} = Σ'_{n̄m̄} ⟨f_μ|W†|n̄⟩ g_{n̄m̄} ⟨m̄|W|f_ν⟩,
σ_{μν}(n) = Σ_{x,x'} ⟨f_μ|W†|n,x⟩ [Re g_n(x,x')] ⟨n,x'|W|f_ν⟩,
σ_{μν} = Σ_n σ_{μν}(n),
Σ_{μν} = σ_{μν} + σ'_{μν},
Γ_{μν}(n) = -2 Σ_{x,x'} ⟨f_μ|W†|n,x⟩ [Im g_n(x,x')] ⟨n,x'|W|f_ν⟩,
Γ_{μν} = Σ_n Γ_{μν}(n).

Clearly the matrices σ, σ(n) and Γ(n) are Hermitian. With this notation, (G_BB^{-1})_{μν} = (E - ϵ_ν) δ_{μν} - Σ_{μν} + iΓ_{μν}. Furthermore, equation 125 becomes

T_{nn'} = Σ_{μνμ'ν'} Γ_{μ'μ}(n) (G_BB)_{μν} (G_BB)*_{μ'ν'} Γ_{νν'}(n') = Trace[Γ(n) G_BB Γ(n') G_BB†]

where the trace is over all internal levels of B and

G_BB^{-1} = g_B^{-1} - σ' - σ + iΓ.

In these expressions, Γ(n) is a Hermitian matrix of inverse lifetimes, Γ = Σ_n Γ(n), σ and σ' are Hermitian self-energy matrices, and g_B is the retarded Green's function of subspace B when H_J = 0. The form of equations 136 and 137 highlights the essential difference between open and closed channels. In the absence of open channels, σ and Γ are identically zero, and if the subspace B is closed, G_BB describes a quantum structure with well-defined energy levels, shifted by the self-energy σ' arising from contact with the closed channels. Clearly no quasi-particle transport is possible through such a structure. When contact is made with open channels, the levels are further shifted by the self-energy matrix σ and, more crucially, are broadened by the lifetime matrix Γ. For a system with non-orthogonal basis sets, δ_μν in equation 128 should be replaced by the overlap matrix S_μν = ⟨f_μ|f_ν⟩. It is interesting to note that the vector space A representing the normal leads includes both crystalline structures connected to the outside world and any closed system coupled to the vector space B representing the structure of interest. In the latter case, the only effect of the closed part of the vector space A is to contribute to the scattering through its self-energy.
Furthermore, figure 11 shows a slightly different approach to calculating the transmission (reflection) amplitude t (r) in a two-terminal system with a non-orthogonal basis set [28].

Fig 11

Fig. 11. Generalized transport model using the Green's function method. Generalized transport model using the equilibrium Green's function method [28] and its equivalent model for a simple 1D problem.

Surface Green's function: An important step in calculating the total Green's function of a system is to calculate the Green's function at the surface of a semi-infinite lead. A lead is a perfect wave guide, connected to a reservoir at infinity on one side and to the scattering region on the other. It consists of identical slices described by a Hamiltonian H_0, connected to their first nearest neighbours by the matrix elements H_1. Note that for any lead the periodic slices can be chosen large enough that second- and higher-nearest-neighbour interactions are avoided. The Green's function of the semi-infinite lead at the point of contact with the scattering region is called the surface Green's function. There are two main methods to calculate it: analytic and recursive. In the analytic methods, the Green's function of a doubly infinite lead is calculated first; then a suitable wave function is added to it such that the resulting Green's function vanishes at the site next to the surface of the lead. This was discussed in section III-A (equation 68) and section III-B. In the recursive methods, the following non-linear equation 138 is solved iteratively using different algorithms, such as a fixed-point iteration or Newton's scheme:

g_s = ((E - iη)I - H_0 - H_1† g_s H_1)^{-1}

where g_s is the surface Green's function and η is a small number added to avoid divergence of the Green's function. In the fixed-point iterative scheme, equation 138 takes the form

g_s^{n+1} = ((E - iη)I - H_0 - H_1† g_s^n H_1)^{-1}

where n indicates the iteration number and g_s^1 = ((E - iη)I - H_0)^{-1}. The convergence of the fixed-point method can be quite poor, so more sophisticated methods, such as Newton's scheme, may be used [35].
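The fixed-point recursion can be sketched in a few lines of Python (illustrative only; a 1D single-orbital lead is used as the test case, for which the exact surface Green's function has unit modulus inside the band):

```python
import numpy as np

def surface_gf(E, H0, H1, eta=1e-3, tol=1e-8, max_iter=50000):
    """Fixed-point iteration g_s = ((E - i*eta)I - H0 - H1^dag g_s H1)^(-1)."""
    I = np.eye(H0.shape[0])
    gs = np.linalg.inv((E - 1j * eta) * I - H0)          # starting guess g_s^1
    for _ in range(max_iter):
        gs_new = np.linalg.inv((E - 1j * eta) * I - H0 - H1.conj().T @ gs @ H1)
        if np.max(np.abs(gs_new - gs)) < tol:
            return gs_new
        gs = gs_new
    raise RuntimeError("fixed-point iteration did not converge")

# 1D chain: one orbital per slice, on-site energy 0, hopping -1 (assumed units)
H0 = np.array([[0.0]])
H1 = np.array([[-1.0]])
gs = surface_gf(0.5, H0, H1)
print(gs, abs(gs[0, 0]))   # |g_s| is close to 1 inside the band, as expected analytically
```

Production codes accelerate this with Newton or decimation schemes [35]; this sketch is only meant to show the structure of the recursion.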
C. The Landauer formula

Landauer used the scattering theory of transport as a conceptual framework to describe electrical conductance and wrote "Conductance is transmission" [36]. In the Landauer approach a mesoscopic scatterer is connected to two ballistic leads (see figure 1). The leads are connected to reservoirs, where all inelastic relaxation processes take place. The reservoirs have slightly different electrochemical potentials, μ_L - μ_R > 0, to drive electrons from the left to the right lead. The current can therefore be written as

I = (e/h) ∫dE T(E) [f(E - μ_L) - f(E - μ_R)]

where e = |e| is the electronic charge, T(E) is the transmission coefficient, f(E - μ) = 1/(1 + e^{(E-μ)/k_BT}) is the Fermi-Dirac distribution function associated with the electrochemical potential μ, k_B is Boltzmann's constant and T is the temperature. The Fermi functions can be Taylor expanded over the range eV,

I = (e/h) ∫dE T(E) (-∂f(E)/∂E) (μ_L - μ_R)

where μ_L - μ_R = eV. Including spin, the electrical conductance G = I/V reads

G = (2e²/h) ∫dE T(E) (-∂f(E)/∂E).

At T = 0 K, -∂f(E-μ)/∂E = δ(E - μ), where δ is the Dirac delta function. For an ideal periodic chain, where T(E) = 1 at T = 0 K, the Landauer formula becomes

G_0 = 2e²/h ≈ 77.5 μS.

G_0 is called the "conductance quantum". In other words, the current associated with a single Bloch state, v_k/L, generated by the electrochemical potential gradient, is I = e(v_k/L)DΔμ, where D = ∂n/∂E = L/(hv_k). It is worth mentioning that the Landauer formula 141 describes the linear-response conductance and hence only holds for small bias voltages, δV → 0.

1) Landauer-Büttiker formula for multi-terminal structures

Conductance measurements are often performed using a four-probe structure to minimise the effect of contact resistance. Multi-probe structures are also widely used to describe the Hall effect or in sensing applications. Based on the Landauer approach for a two-terminal system, Büttiker [37] suggested a formula to model multi-probe currents in structures with multiple terminals:

I_i = (e/h) Σ_j T_ij (μ_i - μ_j)

where I_i is the current at the ith terminal and T_ij is the transmission probability from terminal j to terminal i. In a multi-terminal system, it is convenient to take one of the probes as the voltage reference, V_ref = 0, and write the currents with respect to it. As an example, for a four-probe structure the current in each probe can be written as

I_i = (2e²/h) Σ_{j=1,2,3,4} (N_i δ_ij - T_ij) V_j

where N_i is the number of open conduction channels in lead i. In the four-probe structure, if probes 3 and 4 are the outer voltage probes (I_3 = I_4 = 0) and probes 1 and 2 are the inner current probes, the four-probe conductance is G_four-probe = I_1/(V_3 - V_4).

2) Equilibrium vs. non-equilibrium I-V

The Landauer formula only holds in the linear-response regime. The transmission coefficient T(E) of a particle with energy E passing from one electrode to the other is obtained in the steady-state condition, where the junction is assumed to be close to equilibrium (δV → 0). In this regime the transmission coefficient is assumed to be voltage independent and the current is calculated using equation 140. However, if the transmission coefficient changes with the applied bias voltage (the non-linear regime), a bias-dependent transmission coefficient T(E,V_b) should be calculated. To take the effect of the electric field on T(E) into account, a new Hamiltonian is calculated for each given field. This is obtained by calculating the potential profile applied to the junction due to the given electric field (e.g. bias voltage) using Poisson's equation, ∇²U = -ρ/ε, where U is the potential profile due to the charge distribution ρ and ε is the permittivity, which may vary spatially [12-14]. In the non-equilibrium condition, the Landauer formula (equation 140) takes the form

I(V_b,V_g) = (e/h) ∫dE T(E,V_b,V_g) [f(E + eV_b/2) - f(E - eV_b/2)]

where V_b and V_g are the bias and gate voltages, respectively, and T(E,V_b,V_g) is the transmission coefficient calculated at each bias and gate voltage. It is worth mentioning that in some experiments the measured conductance G = I/V_b is noisy; therefore the differential conductance map G_diff(V_b,V_g) = dI(V_b,V_g)/dV_b is plotted instead. This can be calculated by differentiating equation 147 with respect to the bias voltage V_b.
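The following sketch (with an assumed toy transmission function; the sign convention follows equation 147 and e = 1) illustrates how a non-equilibrium I-V curve and the differential conductance are assembled numerically:

```python
import numpy as np

kT = 0.025                                     # thermal energy in eV (assumption)

def fermi(E):
    return 1.0 / (1.0 + np.exp(E / kT))

def T_toy(E, Vb, Vg):                          # toy bias/gate-dependent resonance (assumption)
    return 0.04 / ((E - 0.3 + Vg + 0.1 * Vb)**2 + 0.04)

def current(Vb, Vg=0.0, E=np.linspace(-2, 2, 4001)):
    window = fermi(E + Vb / 2) - fermi(E - Vb / 2)   # bias window, as in equation 147
    return np.trapz(T_toy(E, Vb, Vg) * window, E)    # current in units of e/h

Vb = np.linspace(-0.5, 0.5, 101)
I = np.array([current(v) for v in Vb])
Gdiff = np.gradient(I, Vb)                     # differential conductance dI/dVb
```

Repeating this over a grid of V_b and V_g values yields G_diff(V_b,V_g) maps of the kind shown in figure 17.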
D. Non-equilibrium Green's function formalism

If ES|ψ⟩ = H|ψ⟩ describes the properties of a closed system H with a non-orthogonal basis set S, then once it is connected to the outside world and becomes an open system (see figure 12), a modified Schrödinger equation in the non-equilibrium condition can be written [12-14]:

ES|ψ⟩ = H|ψ⟩ + Σ|ψ⟩ + |s⟩

where the terms Σ|ψ⟩ and |s⟩ describe the outflow and inflow (e.g. see equation 60), respectively, arising from the boundary conditions. Equation 148 can be rewritten as |ψ⟩ = [G^R]|s⟩, where G^R = [ES - H - Σ]^{-1} is the retarded Green's function (G^A = [G^R]†) and Σ = Σ_1 + Σ_2 + Σ_0 is the sum of the self-energies due to the electrodes, Σ_1 and Σ_2, and the surroundings Σ_0, such as a dephasing contact or inelastic scattering (e.g. electron-phonon coupling, emission and absorption). Dephasing contacts can be described by the SCF method, whereas for inelastic processes one needs to use, for instance, Fermi's golden rule to describe these self-energies.

Fig 12

Fig. 12. Non-equilibrium Green's function (NEGF) equations [12-14].

There are a few proposals in the literature for treating incoherent and inelastic processes [12-14, 38]. Büttiker [38] suggests treating inelastic and incoherent scattering by introducing a new probe into the original coherent system. This can be seen as assigning new self-energies associated with the inelastic or incoherent processes. However, if incoherent and inelastic effects are treated by introducing an extra electrode, a corresponding distribution function (e.g. a Fermi function for electrons) is assigned to it, which in general need not be appropriate. More generally, any incoherent and/or inelastic process is introduced through an appropriate self-energy, which is not necessarily described by an equivalent Fermi function in a contact. For a normal, coherent, elastic junction, if H_{1,2} are the coupling matrices between electrode 1 (2) and the scattering region and g_{1,2} are the surface Green's functions of the electrodes, then Σ_{1,2} = H_{1,2} g_{1,2} H_{1,2}†. Furthermore, the current can be calculated as

I_1 = (e/h) ∫dE Trace[-Γ_1 G^n + Σ_1^in A]

where Γ_1 = i(Σ_1 - Σ_1†) is the imaginary part of the self-energy, G^n is the electron density matrix, Σ_1^in (= 2π s_1 s_1†) is related to the source |s⟩ and A = G^R Γ G^A is the spectral function, as shown in figure 12. Note that the definition of the self-energy here is slightly different from that discussed in section III-B. From the basic law of equilibrium, in the special situation where only one contact is connected, the ratio of the number of electrons to the number of states must equal the Fermi function in the contact (Σ_{1,2}^in = Γ_{1,2} f_{1,2}(E)). However, for a dephasing contact Σ_0^in is not described by any Fermi function; since inflow and outflow must be equal, Trace[Σ_0^in A] = Trace[Γ_0 G^n]. Figure 12 summarises the basic non-equilibrium Green's function (NEGF) equations used to calculate the current in a general junction in which surroundings are present. In the absence of surroundings, the current (equation 150) in lead i can be rewritten as [12-14]:

I_i = (e/h) Σ_j ∫dE Trace[Γ_i G^R Γ_j G^A] (f_i - f_j)

where T_ij(E) = Trace[Γ_i(E) G^R(E) Γ_j(E) G^A(E)] is the transmission coefficient for electrons with energy E passing from lead i to lead j.

Fig 13

Fig. 13. Two-terminal system with two 1D leads connected to a scattering region ε1.

It is worth mentioning that in the past decade several numerical implementations of scattering theory and the non-equilibrium Green's function approach, such as Gollum [29], SMEAGOL [28], TranSIESTA [39] and TURBOMOLE [40], have been developed.

Transport through a one-level system: Consider two identical 1D leads with on-site energies ε0 and hopping integrals γ, connected to a scattering region ε1 with coupling integrals α and β, as shown in figure 13.
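Before writing the transmission formula explicitly, here is a minimal numeric sketch of this one-level setup (ε0 = 0, γ = -1, ε1 = 0.3 and α = β = 0.2 are illustrative assumptions), using the lead self-energies Σ_L = α²e^{ik}/γ and Σ_R = β²e^{ik}/γ quoted in the next paragraph:

```python
import numpy as np

eps0, gamma, eps1 = 0.0, -1.0, 0.3             # illustrative parameters
alpha = beta = 0.2

E = np.linspace(-1.9, 1.9, 1901)               # energies inside the lead band
k = np.arccos((E - eps0) / (2 * gamma))        # lead dispersion E = eps0 + 2*gamma*cos(k)
SigL = alpha**2 * np.exp(1j * k) / gamma       # self-energies quoted in the text
SigR = beta**2 * np.exp(1j * k) / gamma
GammaL = (1j * (SigL - np.conj(SigL))).real    # broadening, = -2*alpha^2*sin(k)/gamma
GammaR = (1j * (SigR - np.conj(SigR))).real

GR = 1.0 / (E - eps1 - SigL - SigR)            # retarded Green's function of the level
T = GammaL * np.abs(GR)**2 * GammaR            # Lorentzian peak near the shifted eps1
print("max T:", T.max())                       # close to 1 for symmetric coupling
```

The peak reproduces the Breit-Wigner line shape of equation 106, with T = 1 on resonance for the symmetric coupling α = β.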
The transmission coefficient T for electrons with energy E traversing from the left to the right lead can be calculated as T(E) = Γ_L(E) G^R(E) Γ_R(E) G^A(E), where the retarded Green's function is G^R(E) = (E - ε1 - Σ)^{-1}, the self-energy Σ = Σ_L + Σ_R is obtained from Σ_L = α²e^{ik}/γ and Σ_R = β²e^{ik}/γ, and the broadenings due to the left and right leads are Γ_L = i(Σ_L - Σ_L†) = -2α²sin(k)/γ and Γ_R = i(Σ_R - Σ_R†) = -2β²sin(k)/γ. Note that for a one-level system this equation can be rewritten exactly in the form of the Breit-Wigner formula (equation 106). However, for systems with two or more levels, the BWF is a good approximation for on-resonance transport only if the level spacing is larger than the level broadening (see section III-A 5).

Mapping quantum to semi-classical model: In the semi-classical method, the current is defined as the flow of charge N per unit time t, I = dN/dt. As shown in figure 14, at contact one there is an in-going current S_1 D_0 and an out-going current ν_1 N, where ν_1 is the coupling strength to contact 1, S_1 = ν_1 f_1 is the source and D_0 = ∫D(E)dE is the total density of states in the scattering region. Therefore, the current at contact 1 is written as

I_1 = e ∫dE D(E) [ν_1 ν_2/(ν_1 + ν_2)] (f_1 - f_2)

where f_1 and f_2 are the Fermi distributions in contacts 1 and 2, respectively.

Fig 14

Fig. 14. Semi-classical method to calculate current [12-14].

In order to build physical intuition for the NEGF equations (figure 12), we compare the semi-classical picture (figure 14) with the quantum model (figure 12):

A/2π ↔ D,  G^n/2π ↔ N,  Σ^in/ħ ↔ S,  Γ/ħ ↔ ν.

A and G^n look like matrix versions of the density of states and the total number of electrons, respectively. The exact relations are D = ∫_{-∞}^{+∞} dE trace(A(E))/2π and N = ∫_{-∞}^{+∞} dE trace(G^n(E))/2π. Clearly, the semi-classical picture misses most of the interesting effects, such as quantum interference and the connectivity dependence of the conductance [6, 41, 42].

E. Master equation

As discussed in section III, if the Coulomb energy is much higher than the thermal broadening and the coupling to the electrodes (the on-resonance transport regime) in a molecular junction, the SCF method is not adequate to describe the properties of the junction. For example, when the Fermi energy is moved from one side of a resonance to the other using bias or gate voltages, the redox state of the molecule may change by gaining or losing an electron. To account for this effect, the occupation probability of each state should be calculated. This is described by the multi-electron master equation. In the multi-electron picture, the overall N-electron system has probabilities P_α of being in one of the 2^N possible states α, and all of these probabilities P_α must add up to one. The individual probabilities can be calculated under steady-state conditions, where there is no net flow into or out of any state (see figures 15 and 16):

Σ_β R(α → β) P_α = Σ_β R(β → α) P_β

where R(α → β) is the rate constant, obtained by assuming a specific model for the interaction with the surroundings. In a system in which electrons can only enter or exit through the source and drain contacts, these rates are given in figures 15 and 16 for one- and two-electron systems. Equation 154 is called the multi-electron master equation [12-14]. The general principles of equilibrium statistical mechanics can be used to calculate the probability P_α that the system is in state α with energy E_α and N_α electrons:

P_α = e^{-(E_α - μN_α)/k_BT} / Σ_α e^{-(E_α - μN_α)/k_BT}

where k_B is Boltzmann's constant and T is the temperature.
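For the one-electron example that follows (figure 15), the steady-state solution of equation 154 can be sketched directly; the rates below anticipate those given in figure 15, and the resulting current reproduces the closed-form expression quoted in the text (all parameters are illustrative, and spin-degeneracy factors are neglected for simplicity):

```python
import numpy as np

gamma1, gamma2 = 1.0, 1.0      # in-/out-tunnelling rates (units of 1/hbar, assumed)
f1, f2 = 0.9, 0.1              # Fermi functions of the two contacts at the level energy

R01 = gamma1 * f1 + gamma2 * f2              # rate |0> -> |1>
R10 = gamma1 * (1 - f1) + gamma2 * (1 - f2)  # rate |1> -> |0>

P1 = R01 / (R01 + R10)         # steady-state occupation, with P0 + P1 = 1
P0 = 1 - P1

# net current through contact 1 (in units of e/hbar)
I1 = gamma1 * (f1 * P0 - (1 - f1) * P1)
print(I1, gamma1 * gamma2 / (gamma1 + gamma2) * (f1 - f2))  # the two expressions agree
```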
Fig 15

Fig. 15. Spin-degenerate one-electron system with energy ε. There are two possibilities: the state is either full |1⟩ or empty |0⟩. The rate for moving an electron into the state (|0⟩ → |1⟩) is the product of the rates at which electrons can enter and the Fermi functions in the leads, R(|0⟩→|1⟩) = γ_1 f_1 + γ_2 f_2 [12-14].

One-electron energy levels represent differences between the energy levels of states that differ by one electron. If E(N) is the energy associated with the N-electron state, the energy associated with the addition of one electron is called the electron affinity, EA = E(N) - E(N+1). Similarly, the energy needed to remove one electron is called the ionization energy, IP = E(N-1) - E(N).

One-electron system: Figure 15 shows the master equation for a spin-degenerate one-electron system with energy ε, where there are only two possibilities: the state is either full |1⟩ or empty |0⟩. The current is then calculated as

I = (e/ħ) [γ_1 γ_2/(γ_1 + γ_2)] (f_1(E) - f_2(E))

where γ_1 and γ_2 are the rates at which electrons enter and leave through the left and right electrodes, with Fermi functions f_1(E) and f_2(E).

Two-electron system: When the number of electrons N increases, the number of possibilities grows as 2^N. In a two-electron system there are four possibilities (2²): both states empty |00⟩ or full |11⟩, and either one full and the other empty (|01⟩ and |10⟩).

Fig 16

Fig. 16. Two-electron system. There are four possibilities: both states empty |00⟩ or full |11⟩, and either one full and the other empty (|01⟩ and |10⟩) [12-14].

Figure 16 shows how to calculate the current in a two-electron system [12-14]. Note that as soon as one state becomes full, an additional energy (the Coulomb repulsion energy) is needed to place a second electron in the other state, on top of the level spacing between the two energy levels. Furthermore, it is not correct to assume one Fermi function for all transitions: due to the Coulomb blockade energy, each level needs a certain electrochemical potential to overcome the barrier and allow current to flow. Clearly, the expression for the current in a two-electron system is more complicated than for a one-electron system, and it becomes more complicated still when a multi-electron system is considered. The number of required calculations then increases rapidly, so efficient numerical algorithms are needed to solve the multi-electron master equation.

Coulomb and Franck-Condon blockade regimes: The electronic properties of weakly coupled molecules are dominated by Coulomb interactions and spatial confinement at low temperatures [43]. This can lead to Coulomb blockade (CB) regimes, in which transport is blocked by the presence of an electron trapped in the junction. In addition, charge transfer can excite vibrational modes, or vibrons, and strong electron-vibron coupling leads to a suppression of the tunnelling current at low bias, called the Franck-Condon (FC) blockade regime. To describe transport in this regime, a minimal model (the Anderson-Holstein Hamiltonian) can be used [44] that captures the CB, FC and Kondo effects, provided three assumptions are made: (1) relaxation in the leads is assumed to be sufficiently fast that the electron distributions are given by Fermi functions in thermal equilibrium at all times; (2) transport through the molecule is dominated by tunnelling through a single, spin-degenerate electronic level; and (3) one vibron is taken into account within the harmonic approximation.
In this case, the Anderson-Holstein Hamiltonian reads H = H_mol + H_leads + H_T, with

H_mol = ε_d n_d + U n_{d↑} n_{d↓} + ħω b†b + λħω (b† + b) n_d

describing the electronic and vibrational degrees of freedom of the molecule,

H_leads = Σ_{a=L,R} Σ_{p,σ} (ε_{ap} - μ_a) c†_{apσ} c_{apσ}

the non-interacting leads, and

H_T = Σ_{a=L,R} Σ_{p,σ} (t_{ap} c†_{apσ} d_σ + h.c.)

the tunnelling between the leads and the molecule. Here, Coulomb blockade is taken into account via the charging energy U, where eV, k_BT << U. The operator d_σ (d†_σ) annihilates (creates) an electron with spin projection σ on the molecule, and n_d = Σ_σ d†_σ d_σ denotes the corresponding occupation-number operator. Similarly, c_{apσ} (c†_{apσ}) annihilates (creates) an electron in lead a (a = L,R) with momentum p and spin projection σ. Vibrational excitations are annihilated (created) by b (b†). They couple to the electric charge on the molecule through the term ~ n_d(b† + b), which can be eliminated by a canonical transformation, leading to a renormalisation of the parameters ε and U and of the lead-molecule coupling, t_a → t_a e^{-λ(b†-b)}. The master equation determining the molecular occupation probabilities P_q^n, for charge state n and vibron number q, is

dP_q^n/dt = Σ_{n',q'} (P_{q'}^{n'} W_{q'→q}^{n'→n} - P_q^n W_{q→q'}^{n→n'}) - (1/τ)(P_q^n - P_q^{eq} Σ_{q'} P_{q'}^n).

P_q^{eq} denotes the equilibrium vibron distribution with relaxation time τ, and W_{q→q'}^{n→n'} denotes the total rate for a transition from |n,q⟩ to |n',q'⟩:

W_{q→q'}^{n→n+1} = Σ_{a=L,R} f_a(E_{q'}^{n+1} - E_q^n) Γ_{q→q';a}^{n→n+1},
W_{q→q'}^{n→n-1} = Σ_{a=L,R} [1 - f_a(E_q^n - E_{q'}^{n-1})] Γ_{q→q';a}^{n→n-1}

where f_a is the Fermi function and the transition rates Γ are calculated from Fermi's golden rule:

Γ_{q→q';a}^{n→n+1} = s_{n→n+1} (2π/ħ) ρ_a(E_{q'}^{n+1} - E_q^n) |M_{q→q';a}^{n→n+1}|²,
Γ_{q→q';a}^{n→n-1} = s_{n→n-1} (2π/ħ) ρ_a(E_q^n - E_{q'}^{n-1}) |M_{q→q';a}^{n→n-1}|².

Here, ρ_a denotes the DOS in lead a, M_{q→q';a}^{n→n±1} denotes the FC matrix elements and s_{nm} the spin factor [45], such that for sequential tunnelling, and assuming twofold degeneracy, s_{10} = s_{12} = 1 and s_{01} = s_{21} = 2. The FC matrix elements for the vibrations are defined as

M_{q→q';a}^{n→n±1} = t_0 √(q_1!/q_2!) λ^{q_2-q_1} e^{-λ²/2}

where q_1 = min{q,q'} and q_2 = max{q,q'}. This minimal model captures the main features of resonant tunnelling in the presence of Coulomb energy and vibrons at low temperature [45, 46].
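The simplified FC matrix elements above are easy to evaluate; the sketch below (t0 = 1 assumed; the associated Laguerre-polynomial factors of the full expression are omitted, as in the simplified formula in the text) shows how increasing electron-vibron coupling λ suppresses the elastic 0 → 0 transition, which is the origin of FC blockade:

```python
from math import factorial, sqrt, exp

def fc_matrix_element(q, qp, lam, t0=1.0):
    """M ~ t0 * sqrt(q1!/q2!) * lam^(q2-q1) * exp(-lam^2/2), with q1 = min, q2 = max."""
    q1, q2 = min(q, qp), max(q, qp)
    return t0 * sqrt(factorial(q1) / factorial(q2)) * lam**(q2 - q1) * exp(-lam**2 / 2)

# Strong coupling (lam >~ 1) exponentially suppresses the purely electronic
# 0 -> 0 rate, |M|^2 = e^{-lam^2}, blocking low-bias transport:
for lam in (0.5, 1.0, 2.0):
    print(lam, fc_matrix_element(0, 0, lam) ** 2)
```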
F. Minimal phase-coherent CB description

Since transport through molecular junctions is usually phase coherent, even at room temperature, and at low temperatures exhibits Coulomb blockade, we can employ a minimal model that includes the effect of the Coulomb energy and computes the differential conductance as a function of the bias and gate voltages. This model preserves phase coherence, implements a derivative discontinuity to describe Coulomb blockade and avoids self-interactions. The bias- and gate-voltage-dependent differential conductance is calculated by differentiating the bias-dependent current with respect to the bias voltage, G_diff = dI(V_b,V_g)/dV_b, using the Landauer formula

I(V_b,V_g) = (2e/h) ∫dE T(E,V_b,V_g) [f(E + eV_b/2) - f(E - eV_b/2)].

Note that the factor of 2 in this equation accounts for spin, so that T = (T_1 + T_2)/2, where T_1 (T_2) is the transmission function for majority (minority) spins. This method can lead to long calculation times, which can be reduced by considering only the first term of the Taylor expansion of the Fermi-Dirac distribution function f(E). Therefore, a simplified, approximate form of G_diff is obtained in the limit of low temperature:

Ĝ_diff = (2e²/h) × [T(E_F = eV_b/2, V_b, V_g)/2 + T(E_F = -eV_b/2, V_b, V_g)/2].

To include the effect of the additional (Coulomb) energy, for each new bias we start from the 'bare' tight-binding Hamiltonian and then (1) add the gate voltage to each site energy; (2) diagonalise the Hamiltonian; (3) if an eigenvalue drops below the drain voltage and the level becomes occupied, add the Coulomb energy to all other eigenvalues and transform back to the original basis; and (4) compute T(E,V_b,V_g) and the corresponding differential conductance using this new Hamiltonian. Steps 1 to 4 are then repeated. Figure 17 shows the stability diagram obtained by applying this minimal model with and without the additional energy.

Fig 17

Fig. 17. Stability diagram. (a) A tight-binding model consisting of two one-dimensional leads with hopping elements γ = -2 (which sets the energy scale) connected to a scattering region containing 10 sites with hopping elements γ_i = -[0.25, 0.125, 0.25, 0.25, 0.075, 0.2, 0.2, 0.25, 0.25, 0.4]. All on-site energies (ε_i and ε_0) are set to zero. The scattering region is coupled to the left and right electrodes by hopping elements α = 0.1γ. (b,d) The differential conductance dI(V_b = 0,V_g)/dV_b. (c,e) The stability diagram dI(V_b,V_g)/dV_b obtained by applying the minimal model. In b,c the Coulomb energy U is zero, whereas in d,e U = 0.125γ. A chessboard pattern due to quantum interference is obtained in c. By including the additional energy U, the size of the Coulomb diamonds (blockade regions) increases accordingly, and the excited states do not penetrate into the blockade region.

IV. Modelling the experiment

So far we have discussed different transport regimes and the methods used to model electron and phonon transport through nanoscale junctions. However, all of these tools are only useful if they can explain newly observed physical phenomena, or predict new properties of a future physical system and suggest new design principles. Experiments in the field of molecular electronics either study new junction properties, such as conductance and current, or focus on using well-characterised junctions for future applications. It is important to understand the limits of theory and experiment. For example, there are certain quantities that are not directly measurable but can be calculated, such as wave functions. In contrast, there are quantities that are not predictable but are accessible in experiment, such as the position of the Fermi energy, the overall effect of inhomogeneous broadening on transport, or screening effects, which are related to the exact junction configuration in the experiment. Furthermore, more reliable predictions can be made if a series of junctions is studied and the overall trends are compared between theory and experiment. The bottom line is that theory and experiment are not two isolated endeavours: they need to communicate with each other in order to discover new phenomena. Those quantities that cannot be computed reliably, but for which experimental data are available, can be used to correct and refine theoretical models. Usually, to explain new phenomena, one needs to build a model based on a working hypothesis. To make an initial hypothesis, a theorist needs to know how to take into account different physical phenomena, such as the effect of the environment or the presence of an electric or magnetic field.
In the following, the aim is to build a few bridges between well-known physical phenomena and the methods used to model them theoretically.

A. Virtual leads versus physical leads

Let us start by considering the differences between a lead and a channel. From a mathematical viewpoint, channels connect an extended scattering region to a reservoir, and the role of lead i is simply to label those channels k_i, q_i which connect to a particular reservoir i. Conceptually, this means that, from the point of view of solving a scattering problem at energy E, a single lead with N(E) incoming channels can be regarded as N(E) virtual leads, each with a single channel. We can take advantage of this equivalence by regarding the above groups of channels with wave vectors k_{α_i}, q_{α_i} as virtual leads and treating them on the same footing as physical leads [29]. This viewpoint is particularly useful when the Hamiltonians H_0^i, H_1^i describing the principal layers (PLs; the identical periodic unit cells H_0^i connected to each other by H_1^i) of the physical lead i are block diagonal with respect to the quantum numbers associated with k_{α_i}, q_{α_i}. For example, this occurs when the leads possess a uniform magnetisation, in which case the lead Hamiltonian is block diagonal with respect to the local magnetisation axis of the lead and α represents the spin degree of freedom σ. It also occurs when the leads are normal metals but the scattering region contains one or more superconductors, in which case the lead Hamiltonian is block diagonal with respect to particle and hole degrees of freedom and α represents either particles p or holes h. More generally, in the presence of both magnetism and superconductivity, α represents combinations of the spin and particle/hole degrees of freedom. In all of these cases H_0^i, H_1^i are block diagonal, and it is convenient to identify a virtual lead α_i with each block, because the channels k_{α_i}, q_{α_i} belonging to each block can then be computed in separate calculations, which guarantees that all such channels can be separately identified. This is advantageous because, if all channels of H_0^i, H_1^i were calculated simultaneously, then in the case of degeneracies arbitrary superpositions of channels with different quantum numbers could result, and a separate unitary transformation would have to be implemented to sort the channels into the chosen quantum numbers. By treating each block as a virtual lead, this problem is avoided.

B. Charge, spin and thermal currents

When comparing theory with experiment, we are usually interested in computing the flux of some quantity Q from a particular reservoir. If the amount of Q carried by quasi-particles of type α_i is Q_{α_i}(E), then the flux of Q from reservoir i is

I_Q^i = (1/h) ∫dE Σ_{α_i,j,β_j} Q_{α_i}(E) T_{α_i,β_j}^{i,j} f̄_{β_j}^j(E)

where T_{α_i,β_j}^{i,j} is the transmission coefficient from channel β_j of lead j into channel α_i of lead i. In the simplest case of a normal conductor, choosing Q_{α_i} = -e, independent of α_i, this equation yields the electrical current from lead i. α_i may represent spin, and in the presence of superconductivity it may represent hole (α_i = h) or particle (α_i = p) degrees of freedom. In the latter case, the charge Q_p carried by particles is -e, whereas the charge Q_h carried by holes is +e. In the presence of non-collinear magnetic moments, provided the lead Hamiltonians are block diagonal in the spin indices, choosing α_i = σ_i and Q_{α_i} = -e in equation 166 yields the total electrical current
I_e^i = -(e/h) ∫dE Σ_{σ_i,j,σ_j} T_{σ_i,σ_j}^{i,j} f̄_{σ_j}^j(E).

Note that in general it is necessary to retain the subscripts i, j associated with σ_i or σ_j, because the leads may possess different magnetic axes. Similarly, the thermal energy carried by the electrons from reservoir i per unit time is

I_q^i = (1/h) ∫dE Σ_{σ_i,j,σ_j} (E - μ_i) T_{σ_i,σ_j}^{i,j} f̄_{σ_j}^j(E).

For the special case of a normal multi-terminal junction with collinear magnetic moments, α_i = σ for all i and, since there is no spin-flip scattering, T_{σ,σ'}^{i,j} = T_{σ,σ}^{i,j} δ_{σ,σ'} [29]. In this case, the total Hamiltonian of the whole system is block diagonal in the spin indices and the scattering matrix can be obtained from separate calculations for each spin. We assume that initially the junction is in thermodynamic equilibrium, so that all reservoirs possess the same chemical potential μ_0. Subsequently, we apply to each reservoir i a different voltage V_i, so that its chemical potential becomes μ_i = μ_0 - eV_i. Then, from equation 166, the charge per unit time per spin entering the scatterer from each lead can be written as

I_{e,σ}^i = -(e/h) ∫dE Σ_j T_{σ,σ}^{i,j} f̄_σ^j(E)

and the thermal energy per spin per unit time as

I_{q,σ}^i = (1/h) ∫dE Σ_j (E - μ_i) T_{σ,σ}^{i,j} f̄_σ^j(E)

where e = |e| and f̄_σ^i(E) = f(E - μ_i) - f(E - μ) is the deviation of the Fermi distribution of lead i from the reference distribution f(E - μ). In the limit of small potential differences or small differences in reservoir temperatures, the deviations from the reference distribution can be approximated by differentials; Taylor expanding f_j = f(E - μ_j) gives

f_j - f ≈ -(∂f/∂E)(μ_j - μ) - (∂f/∂E)[(E - μ)/T](T_j - T).

Using this expression for f_j - f, equations 169 and 170 can be written as

[I; Q̇] = (1/h) [-e Σ_σ L_0^{ij,σ}, -e Σ_σ L_1^{ij,σ}; Σ_σ L_1^{ij,σ}, Σ_σ L_2^{ij,σ}] [eΔV; ΔT/T].

Since the nth moment of a probability distribution P(x) is defined as ⟨x^n⟩ = ∫dx P(x) x^n, the spin-dependent moments L_n^{ij,σ} in the presence of collinear magnetism are

L_n^{ij,σ}(T,E_F) = ∫_{-∞}^{∞} dE (E - E_F)^n T_{σ,σ}^{ij}(E) (-∂f/∂E)

where f(E,T) = (1 + e^{(E-E_F)/k_BT})^{-1} is the Fermi-Dirac distribution function, T is the temperature and k_B is Boltzmann's constant. Therefore, in the linear-response regime, the electric current I and heat current Q̇ passing through a device are related to the voltage difference ΔV and temperature difference ΔT by

[ΔV; Q̇] = [G^{-1}, -S; Π, κ_el] [I; ΔT]

where the electrical conductance G (thermal conductance κ_el) is the ability of the device to conduct electricity (heat), and the Seebeck coefficient S (Peltier coefficient Π) is a measure of the voltage (temperature) generated by a temperature (voltage) difference between the two sides of the device. For two leads labelled i = 1,2, the spin-dependent low-voltage electrical conductance G(T,E_F), the Seebeck coefficient (sometimes called the thermopower) S(T,E_F) = -ΔV/ΔT, the Peltier coefficient Π(T,E_F) and the electronic thermal conductance κ_el(T,E_F), as functions of the Fermi energy E_F and temperature T, are obtained as

G(T,E_F) = (e²/h) Σ_σ L_0^{12,σ},
S(T,E_F) = -(1/eT) (Σ_σ L_1^{12,σ}) / (Σ_σ L_0^{12,σ}),
Π(T,E_F) = T S(T,E_F),
κ_el(T,E_F) = (1/hT) [Σ_σ L_2^{12,σ} - (Σ_σ L_1^{12,σ})² / (Σ_σ L_0^{12,σ})].

Note that the thermal conductance is guaranteed to be positive, because the expectation value of the square of a variable is greater than or equal to the square of its expectation value.
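The moments L_n and the resulting coefficients can be evaluated with a few lines of Python (a toy Lorentzian transmission and reduced units e = h = k_B = 1 are assumed; a single spin-degenerate channel stands in for the spin sums):

```python
import numpy as np

Temp, EF = 0.025, 0.0                         # temperature (energy units) and Fermi energy

E = np.linspace(-1.0, 1.0, 20001)
T_E = 1.0 / (((E - 0.1) / 0.05)**2 + 1)       # illustrative resonance just above EF

x = (E - EF) / Temp
ex = np.exp(x)
minus_dfdE = ex / (Temp * (1 + ex)**2)        # -df/dE of the Fermi function

def L(n):
    return np.trapz((E - EF)**n * T_E * minus_dfdE, E)

G = L(0)                                      # electrical conductance (units e^2/h)
S = -L(1) / (Temp * L(0))                     # Seebeck coefficient
kappa_el = (L(2) - L(1)**2 / L(0)) / Temp     # electronic thermal conductance
ZT_el = L(1)**2 / (L(0) * L(2) - L(1)**2)     # electronic figure of merit (next subsection)
print(G, S, kappa_el, ZT_el)
```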
Furthermore, by Taylor expanding T_{σ,σ}^{ij}(E) around E_F, T_{σ,σ}^{ij}(E) ≈ T_{σ,σ}^{ij}(E_F) + (∂T_{σ,σ}^{ij}(E)/∂E)|_{E=E_F} (E - E_F), the low-temperature approximations of L_0^{ij,σ}, L_1^{ij,σ} and L_2^{ij,σ}, valid when T_{σ,σ}^{ij}(E) varies approximately linearly with E on the scale of k_BT, can be written as

L_0^{ij,σ} ≈ T_{σ,σ}^{ij}(E_F),
L_1^{ij,σ} ≈ (∂T_{σ,σ}^{ij}(E)/∂E)|_{E=E_F} ∫dE (-∂f/∂E)(E - E_F)² = (∂T_{σ,σ}^{ij}(E)/∂E)|_{E=E_F} (eT)² α,
L_2^{ij,σ} ≈ (eT)² α T_{σ,σ}^{ij}(E_F)

where α = (k_B² π²)/(3e²) W Ω K⁻² is the Lorenz number. Therefore, the Seebeck coefficient can be rewritten as

S(T,E_F) ≈ -(∂T_{σ,σ}^{ij}(E)/∂E)|_{E=E_F} ⟨(E - E_F)²⟩ / [T_{σ,σ}^{ij}(E_F) eT] = -(k_B² π² T / 3e) (∂ ln T_{σ,σ}^{ij}(E)/∂E)|_{E=E_F}.

Equation 178 is also called the Mott formula. From equations 175-177, the electrical conductance and the electronic thermal conductance are obtained as G(T,E_F) ≈ G_0 T(E_F) and κ_el(T,E_F) ≈ αTG(T,E_F). The latter is also called the Wiedemann-Franz law and shows that the thermal conductance due to electrons is proportional to the electrical conductance.

The efficiency of a thermoelectric material, η, is defined as the ratio between the work done per unit time against the chemical potential difference (between the hot and cold reservoirs) and the heat extracted from the hot reservoir per unit time. The maximum efficiency η_max can be written as

η_max = (ΔT/T_h) × [√(Z·T_avg + 1) - 1] / [√(Z·T_avg + 1) + T_c/T_h]

where T_h and T_c are the temperatures of the hot and cold sides, respectively, ΔT = T_h - T_c and T_avg = (T_h + T_c)/2. The thermoelectric conversion efficiency (equation 179) is the product of the Carnot efficiency (ΔT/T_h) and a reduction factor depending on the material's figure of merit Z = S²G/κ, where S, G and κ = κ_el + κ_ph are the Seebeck coefficient, the electrical conductance and the thermal conductance due to both electrons and phonons, respectively. More commonly, a dimensionless figure of merit (ZT = Z·T_avg) is used to quantify the efficiency of thermoelectric materials. The thermoelectric figure of merit can be written as

ZT = ZT_el × κ_el / (κ_el + κ_ph)

where the electronic thermoelectric figure of merit for a two-terminal system is

ZT_el = L_1² / (L_0 L_2 - L_1²).

To calculate the total ZT, not only the electronic thermal conductance is needed; it is also crucial to take the phonon contribution to the thermal conductance (κ_ph) into account, as described in the next section.

C. Phonon thermal conductance

To calculate the heat flux through a molecular junction carried by phonons, equation 166 can be used, and the thermal conductance due to phonons, κ_ph, is obtained [10] by calculating the phononic transmission T_ph for the different vibrational modes:

κ_ph(T) = (1/2π) ∫_0^∞ ħω T_ph(ω) [∂f_BE(ω,T)/∂T] dω

where f_BE(ω,T) = (e^{ħω/k_BT} - 1)^{-1} is the Bose-Einstein distribution function, ħ is the reduced Planck constant and k_B is Boltzmann's constant.
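A direct numerical evaluation of this integral is sketched below (SI units; the step-function phonon transmission is an illustrative assumption standing in for a calculated T_ph(ω)):

```python
import numpy as np

hbar, kB = 1.0545718e-34, 1.380649e-23         # SI constants

w = np.linspace(1e10, 4e13, 40000)             # angular frequencies (rad/s), assumed range
T_ph = np.where(w < 2e13, 1.0, 0.0)            # toy: one fully open phonon band below a cutoff

def kappa_ph(T):
    x = hbar * w / (kB * T)
    dfBE_dT = (hbar * w / (kB * T**2)) * np.exp(x) / (np.exp(x) - 1)**2
    return np.trapz(hbar * w * T_ph * dfBE_dT, w) / (2 * np.pi)

print(kappa_ph(300.0))                         # W/K for this toy transmission
```

Summing the mode contributions in this way supplies the κ_ph needed for the total ZT discussed above.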
To calculate the vibrational modes of a system, the harmonic approximation is used to construct the dynamical matrix D. Starting from the relaxed ground-state xyz coordinates of the system, each atom is displaced from its equilibrium position by δq' and -δq' in the x, y and z directions, and the forces F_i^q = (F_i^x, F_i^y, F_i^z) in the three directions q_i = (x_i, y_i, z_i) on each atom are calculated. For 3n degrees of freedom (n = number of atoms), the 3n × 3n dynamical matrix D is constructed from the Hessian matrix K as

D_ij = K_ij^{qq'} / M_ij

where, for i ≠ j, the K_ij^{qq'} are obtained from finite differences,

K_ij^{qq'} = [F_i^q(δq_j') - F_i^q(-δq_j')] / (2δq_j')

and the mass matrix is M_ij = √(M_i M_j). To satisfy momentum conservation, the diagonal terms (i = j) are calculated from K_ii = -Σ_{j≠i} K_ij. Once the dynamical matrix is constructed, the Green's function method described in section III-B can be used to calculate the phononic transmission coefficient T_ph. It is worth mentioning that, in order to compute the electron-phonon coupling matrices W^λ for a given phononic mode λ, one can use the Hamiltonian matrices {⟨i|H|j⟩} of the displaced configurations [47]:

⟨i|∂H/∂δq'|j⟩ = ∂⟨i|H|j⟩/∂δq' - ⟨∂i/∂δq'|H|j⟩ - ⟨i|H|∂j/∂δq'⟩

where the |i⟩ are basis orbitals. The W^λ matrices are then used [47] to calculate the appropriate self-energies (see section III-D) that account for inelastic scattering due to electron-phonon interactions.

D. Piezoelectric response

In piezoelectric materials, an electric field is generated by conformational changes. The piezoelectric response between atoms i and j of a molecule is given by

P_ij = (∂u_j/∂f - ∂u_i/∂f) / r_ij

where r_ij is the distance between atoms i and j in the equilibrium geometry. The displacement derivative ∂u/∂f is given by

∂u/∂f = -V W^{-1} H̄

where W and V are the normal-mode eigenvalues and eigenvectors of the Hessian matrix K = ∂²E/∂u² (section IV-C), H̄ = V^T H' and H' = ∂²E/∂f∂u is the dipole-derivative matrix. Note that, just like equation 184, these derivatives can be calculated using finite differences. Clearly, a larger piezoelectric response is expected for molecules with larger dipole moments. Recently, a large converse piezoelectric effect was measured in helicene single molecules [48].

E. Spectral adjustment

Although DFT has been successful at predicting trends, it usually underestimates the position of the Fermi energy E_F and the exact energy levels (Kohn-Sham eigenvalues [49]), and therefore the positions of the HOMO and LUMO and the energy gap. To compare mean-field theory with experiment, some corrections are needed. One way is to use hybrid functionals, e.g. B3LYP [50], or many-body calculations, e.g. the GW approximation [51]. The latter is computationally expensive [52] and can only be used for molecules with a few atoms. An alternative is to correct the HOMO-LUMO gap using experimentally measured values. A phenomenological scheme that improves the agreement between theoretical simulations and experiments in, for example, single-molecule electronics consists of shifting the occupied and unoccupied levels of the molecule (M) downwards and upwards, respectively, to increase the energy gap. The procedure is conveniently called spectral adjustment in nanoscale transport [29, 53] and has been widely used in the literature to correct the theoretical transport gap [54, 55]. The Hamiltonian K = H - ES of a given region M is modified as

K_M = K_M^0 + (Δ_o - Δ_u) S_M ρ_M S_M + Δ_u S_M

where Δ_{o,u} are energy shifts, ρ_M = Σ_{n_o} |Ψ_{n_o}⟩⟨Ψ_{n_o}| is the density matrix, (n_o, n_u) denote the occupied and unoccupied states, respectively, and S_M is the overlap matrix. If experimental HOMO and LUMO energies are available, Δ_{o,u} can be chosen to correct the HOMO and LUMO obtained from the mean-field Hamiltonian. Alternatively, in the simplest case, the shifts Δ_{o,u} are chosen to align the highest occupied and lowest unoccupied molecular orbitals (i.e. the HOMO and LUMO) with (minus) the ionization potential (IP) and electron affinity (EA) of the gas-phase molecule. The ionization potential (IP = E(+e) - E(0)) and electron affinity (EA = E(0) - E(-e)) are calculated from total-energy calculations of the gas-phase molecule.
E(0) is the total energy of the neutral molecule, E(+e) is the energy of the molecule with one electron removed (i.e. positively charged) and E(-e) is the total energy of the molecule with one added electron. The energy gap E_g of a molecule (sometimes called the addition energy) can be calculated from the IP and EA as E_g = IP - EA [12-14]. The important conceptual point is that, in the ground state, the electrochemical potential μ should lie between the affinity levels (above μ) and the ionization levels (below μ). Note that the IP and EA are traditionally defined as positive energies below the vacuum level, whereas the HOMO and LUMO levels are negative if they lie below the vacuum level. The Coulomb interactions in the isolated molecule are screened when it is placed in close proximity to a metallic surface. Due to these image-charge interactions, the occupied energy levels shift up whereas the unoccupied states move down, resulting in a shrinking of the energy gap. This can be taken into account using a simple image-charge model, in which the molecule is replaced by a point charge located at the mid-point of the molecule and the image planes are placed ~1 Å above the electrode surfaces. The shifts are then corrected by the screening energy Z = e² ln2 / (8πϵ_0 a), where a is the distance between the image plane and the point image charge and ϵ_0 = 8.85 × 10⁻¹² F/m is the vacuum permittivity.

F. Hopping versus tunnelling

Charge transport through molecular-scale junctions proceeds either by coherent transport via tunnelling (also called superexchange) or by incoherent, thermally activated hopping. Transport through short molecules, with lengths below about 3-4 nm, has been demonstrated [56, 57] to be coherent tunnelling. This is characterised by an exponential dependence of the conductance G on length,

G = G_c e^{-βL}

where G_c is a pre-exponential factor that depends on the junction contacts and the nature of the metallic leads, β is the tunnelling decay constant (also called the attenuation constant or β-factor) and L is the length of the molecule. The coherent process is also characterised by temperature independence. In contrast, incoherent hopping is believed to be the charge-transport mechanism along longer molecules. In the incoherent hopping regime, the conductance follows an Arrhenius relation,

G = G_a e^{-E_A/k_BT}

where G_a is a constant pre-exponential factor for each chemical reaction, E_A is the hopping activation energy, T is the temperature and k_B is Boltzmann's constant. In this regime, the conductance varies as the inverse of the molecular length and depends exponentially on the inverse temperature. Therefore, length- and temperature-dependent conductance measurements are widely used in the literature to distinguish between the different transport regimes [58, 59].

Fig 18

Fig. 18. Magnetism. Schematic of a scattering region with two 1D chains, one for each spin. ε1 and ε2 are the site energies for spin-up and spin-down, respectively. In the paramagnetic case, the site energies for spin-up and spin-down are equal (ε0). In the presence of spin-orbit coupling, there is coupling between the spin-up and spin-down sites.

G. Spin Hamiltonian

To take the electron spin into account in a Hamiltonian H, in the absence of spin-orbit coupling the Schrödinger equation can be written as

E [ψ; ψ̄] = [H, 0; 0, H̄] [ψ; ψ̄]

where ψ and ψ̄ are the spin-up and spin-down components of the wavefunction. If electrons travel with high velocities, relativistic effects can become significant.
This is not usually the case in solids; however, near the nuclei of atoms, where the electric fields are high, weak relativistic effects may be expected. A spin-orbit correction therefore needs to be considered, derived from the Dirac equation:

E [ψ; ψ̄] = [H, 0; 0, H̄] [ψ; ψ̄] + [M_z, M_x - iM_y; M_x + iM_y, -M_z] [ψ; ψ̄].

The spin-orbit Hamiltonian is often written as σ·M, where σ denotes the Pauli spin matrices

σ_x = [0, 1; 1, 0],  σ_y = [0, -i; i, 0],  σ_z = [1, 0; 0, -1].

If M points in the direction (θ,ϕ), then M_x = sinθ cosϕ, M_y = sinθ sinϕ and M_z = cosθ, and therefore

[M_z, M_x - iM_y; M_x + iM_y, -M_z] = [cosθ, sinθ e^{-iϕ}; sinθ e^{iϕ}, -cosθ].

The spin wavefunctions (spinors) are thus obtained as

ψ↑ = [cos(θ/2) e^{-iϕ/2}; sin(θ/2) e^{iϕ/2}],  ψ↓ = [-sin(θ/2) e^{-iϕ/2}; cos(θ/2) e^{iϕ/2}].

If there is no spin-orbit coupling, the Hamiltonian is block diagonal in spin up and spin down, and the transmission coefficients for the two spins can be treated independently. If the scattering region causes spin flips, the full Hamiltonian is twice as large as the spinless one, and the transmission coefficient must be calculated from the whole Hamiltonian. For a paramagnetic system, the total transmission can be obtained by multiplying the transmission of one spin by 2. Figure 18 shows examples of ferromagnetic and anti-ferromagnetic systems.

H. Inclusion of a gauge field

For a scattering region of area a, an applied magnetic field B produces a magnetic flux ϕ = Ba. To compute transport properties in the presence of a magnetic field, a Peierls substitution is made by changing the phase factors of the coupling elements between atomic orbitals. For example, in the case of a nearest-neighbour tight-binding Hamiltonian, the hopping matrix element H_ij between sites i and j is replaced by the modified element

H_ij^B = H_ij e^{-iϕ},  ϕ = (e/ħ) ∫_{r_j}^{r_i} A(r)·dr

where r_i and r_j are the positions of sites i and j and A is the vector potential. The gauge should be chosen such that the principal layers of the leads remain translationally invariant after the substitution.

Fig 19

Fig. 19. Two-terminal device consisting of two physical leads connected to a scattering region containing two superconductors with order parameters Δ1 and Δ2. The left (right) physical lead consists of two virtual leads p1 and h1 (p2 and h2), carrying particle and hole channels, respectively.

I. Superconducting systems

Figure 19 a shows a two-probe normal-superconductor-normal device with left and right normal reservoirs connected to a scattering region containing one or more superconductors. If the complete Hamiltonian describing a normal system is H_N, then in the presence of superconductivity within the extended scattering region the new system is described by the Bogoliubov-de Gennes Hamiltonian

H = [H_N, Δ; Δ*, -H_N*]

where the elements of the matrix Δ are non-zero only in the region occupied by a superconductor, as indicated in figure 19 b. Physically, H_N describes the particle degrees of freedom, -H_N* the hole degrees of freedom and Δ the superconducting order parameter. The multi-channel scattering theory for such a normal-superconductor-normal structure can be written as [29]

[I_L; I_R] = (2e²/h) a [(μ_L - μ)/e; (μ_R - μ)/e]

where I_L (I_R) is the current from the left (right) reservoir, μ_L - μ (μ_R - μ) is the difference between the chemical potential of the left (right) reservoir and the chemical potential μ of the superconducting condensate, and the voltage difference between the left and right reservoirs is (μ_L - μ_R)/e.
In this equation,

a = [N_L - R_o + R_a, -T_o' + T_a'; -T_o + T_a, N_R - R_o' + R_a']

where N_L (N_R) is the number of open channels in the left (right) lead, R_o, T_o (R_a, T_a) are the normal (Andreev) reflection and transmission coefficients for quasi-particles emitted from the right lead, R_o', T_o' (R_a', T_a') are the normal (Andreev) reflection and transmission coefficients from the left lead, and all quantities are evaluated at the Fermi energy E = μ. As a consequence of the unitarity of the scattering matrix, these satisfy R_o + T_o + R_a + T_a = N_L and R_o' + T_o' + R_a' + T_a' = N_R. The current-voltage relation of equation 199 is fundamentally different from that encountered for normal systems, because unitarity of the s-matrix does not imply that the sum of each row or column of the matrix a is zero. Consequently, the currents do not automatically depend solely on the applied voltage difference (μ_L - μ_R)/e (or, more generally, on the differences between the incoming quasi-particle distributions). In practice, such a dependence arises only after the chemical potential of the superconductor adjusts itself self-consistently to ensure that the current from the left reservoir equals the current entering the right reservoir. Insisting that I_L = -I_R = I, the two-probe conductance G = I/((μ_L - μ_R)/e) takes the form

G = (2e²/h) (a_11 a_22 - a_12 a_21) / (a_11 + a_22 + a_12 + a_21).

If a superconductor is disordered then, as its length L increases, all transmission coefficients vanish and the above equation reduces to (h/2e²) G⁻¹ = 1/(2R_a) + 1/(2R_a'). In contrast with a normal scatterer, this shows that in the presence of Andreev scattering the resistance (= 1/conductance) remains finite as L tends to infinity, and therefore the resistivity (i.e. the resistance per unit length) vanishes: the superconductor possesses zero resistivity (equation 201).
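A minimal numerical sketch of this two-probe formula is given below (the reflection and transmission coefficients are invented for illustration, chosen only to respect the unitarity sums above; primed quantities are taken equal to the unprimed ones for simplicity):

```python
import numpy as np

NL = NR = 1.0
R0, T0, Ra, Ta = 0.2, 0.5, 0.1, 0.2     # normal/Andreev coefficients (assumed)
R0p, T0p, Rap, Tap = R0, T0, Ra, Ta     # unitarity: R0 + T0 + Ra + Ta = NL holds here

a = np.array([[NL - R0 + Ra, -T0p + Tap],
              [-T0 + Ta,     NR - R0p + Rap]])

# two-probe conductance in units of 2e^2/h
G = (a[0, 0] * a[1, 1] - a[0, 1] * a[1, 0]) / (a[0, 0] + a[1, 1] + a[0, 1] + a[1, 0])
print(G)
# With all transmissions set to zero (and Ra adjusted to preserve unitarity), G stays
# finite through Andreev reflection alone, illustrating the disordered-superconductor limit.
```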
It is worth mentioning that, since different effects such as physisorption and charge transfer can play an important role in these simulations, SCF methods need to be used to calculate transport from the mean-field DFT Hamiltonian. Recently, machine-learning algorithms have been proposed to predict the behavior of a junction in the absence or presence of surroundings. In these methods, a model is trained using a series of molecular junctions that have been fully characterized using SCF methods; the trained model then predicts the behavior of new junctions using its database and pattern-recognition algorithms.

V. Conclusion

To understand, predict and explain new phenomena in nanoscale junctions, one needs to study electron, phonon and spin transport through these junctions. The focus of this paper was on exploring theoretical methods to study electron, phonon and spin transport through molecules and low-dimensional materials. The systematic study of such systems begins with understanding their vibrational and electronic structure. One then needs to calculate the core quantities, the transmission functions of electrons Te and phonons Tph traversing such systems from one electrode to the other. This is vital not only for understanding their properties and the related experiments, but also for developing new concepts and design strategies for future applications. We mainly considered junctions where the transport is assumed to be elastic and coherent; in addition, we showed how these methods can be extended to the incoherent and inelastic regimes. The good agreement between experiments and the theories developed using these methods [6, 9, 11] demonstrates that they are accurate enough to predict trends and to develop new design strategies for future applications. In terms of predicting actual numbers, however, there is room for improvement. Apart from those quantities that depend on the actual shape of the contact (e.g. screening effects) and cannot easily be calculated, new computationally cheap approaches are needed to take electron-electron interactions into account more accurately. This would help to predict more accurate energy gaps and level spacings, leading to more realistic models.

[1] “ITRS roadmap 2015,” ITRS, 2015.
[2] M. Chhowalla, D. Jena, and H. Zhang, “Two-dimensional semiconductors for transistors,” Nat. Rev. Mater., vol. 1, no. 11, p. 16052, 2016.
[3] “Visions for a molecular future,” Nature Nanotechnology, vol. 8, no. 6, pp. 385–389, 2013.
[4] J. C. Love, L. A. Estroff, J. K. Kriebel, R. G. Nuzzo, and G. M. Whitesides, “Self-assembled monolayers of thiolates on metals as a form of nanotechnology,” Chemical Reviews, vol. 105, no. 4, pp. 1103–1170, 2005. PMID: 15826011.
[5] S. V. Aradhya and L. Venkataraman, “Single-molecule junctions beyond electronic transport,” Nature Nanotechnology, vol. 8, no. 6, pp. 399–410, 2013.
[6] S. Sangtarash, C. Huang, H. Sadeghi, G. Sorohhov, J. Hauser, T. Wandlowski, W. Hong, S. Decurtins, S.-X. Liu, and C. J. Lambert, “Searching the Hearts of Graphene-like Molecules for Simplicity, Sensitivity, and Logic,” Journal of the American Chemical Society, vol. 137, no. 35, pp. 11425–11431, 2015.
[7] H. Sadeghi, L. Algaragholy, T. Pope, S. Bailey, D. Visontai, D. Manrique, J. Ferrer, V. Garcia-Suarez, S. Sangtarash, and C. J. Lambert, “Graphene sculpturene nanopores for DNA nucleobase sensing,” Journal of Physical Chemistry B, vol. 118, no. 24, pp. 6908–6914, 2014.
[8] T. Prodromakis, C. Toumazou, and L. Chua, “Two centuries of memristors,” Nature Materials, vol. 11, no. 6, pp. 478–481, 2012.
[9] H. Sadeghi, J. A. Mol, C. S. Lau, G. A. D. Briggs, J. Warner, and C. J. Lambert, “Conductance enlargement in picoscale electroburnt graphene nanojunctions,” Proceedings of the National Academy of Sciences, vol. 112, no. 9, pp. 2658–2663, 2015.
[10] H. Sadeghi, S. Sangtarash, and C. J. Lambert, “Oligoyne molecular junctions for efficient room temperature thermoelectric power generation,” Nano Letters, vol. 15, no. 11, pp. 7467–7472, 2015.
[11] C. Evangeli, K. Gillemot, E. Leary, M. T. González, G. Rubio-Bollinger, C. J. Lambert, and N. Agraït, “Engineering the thermopower of C60 molecular junctions,” Nano Lett., vol. 13, no. 5, pp. 2141–2145, 2013.
[12] S. Datta, Quantum Transport: Atom to Transistor. Cambridge University Press, 2005.
[13] “nanoHUB-U: Fundamentals of nanoelectronics - Part A.”
[14] “nanoHUB-U: Fundamentals of nanoelectronics - Part B.”
[15] R. Gebauer and R. Car, “Kinetic theory of quantum transport at the nanoscale,” Phys. Rev. B, vol. 70, p. 125324, Sep 2004.
[16] M. Büttiker, “Scattering theory of current and intensity noise correlations in conductors and wave guides,” Phys. Rev. B, vol. 46, pp. 12485–12507, Nov 1992.
[17] E. Bonet, M. M. Deshmukh, and D. C. Ralph, “Solving rate equations for electron tunneling via discrete quantum states,” Phys. Rev. B, vol. 65, p. 045317, Jan 2002.
[18] E. Schrödinger, “An Undulatory Theory of the Mechanics of Atoms and Molecules,” Phys. Rev., vol. 28, pp. 1049–1070, Dec 1926.
[19] E. Runge and E. K. U. Gross, “Density-functional theory for time-dependent systems,” Phys. Rev. Lett., vol. 52, pp. 997–1000, Mar 1984.
[20] E. Engel and R. M. Dreizler, Density Functional Theory. Springer Verlag, 2011.
[21] N. Harrison, “An introduction to density functional theory,” NATO Science Series Sub Series III: Computer and Systems Sciences, vol. 187, pp. 45–70, 2003.
[22] C. Lee, W. Yang, and R. G. Parr, “Development of the Colle-Salvetti correlation-energy formula into a functional of the electron density,” Phys. Rev. B, vol. 37, pp. 785–789, Jan 1988.
[23] J. Heyd, G. E. Scuseria, and M. Ernzerhof, “Hybrid functionals based on a screened Coulomb potential,” The Journal of Chemical Physics, vol. 118, no. 18, pp. 8207–8215, 2003.
[24] M. Dion, H. Rydberg, E. Schröder, D. C. Langreth, and B. I. Lundqvist, “Van der Waals density functional for general geometries,” Phys. Rev. Lett., vol. 92, p. 246401, Jun 2004.
[25] J. M. Soler, E. Artacho, J. D. Gale, A. García, J. Junquera, P. Ordejón, and D. Sánchez-Portal, “The SIESTA method for ab initio order-N materials simulation,” Journal of Physics: Condensed Matter, vol. 14, no. 11, p. 2745, Mar 2002.
[26] H. Sadeghi, S. Sangtarash, and C. J. Lambert, “Electron and heat transport in porphyrin-based single-molecule transistors with electro-burnt graphene electrodes,” Beilstein J. Nanotechnol., vol. 6, no. 1, pp. 1413–1420, 2015.
[27] H. Sadeghi, S. Sangtarash, and C. Lambert, “Robust Molecular Anchoring to Graphene Electrodes,” Nano Lett., vol. 17, no. 8, pp. 4611–4618, 2017.
[28] S. Sanvito, C. J. Lambert, J. H. Jefferson, and A. M. Bratkovsky, “General Green’s-function formalism for transport calculations with spd Hamiltonians and giant magnetoresistance in Co- and Ni-based magnetic multilayers,” Phys. Rev. B, vol. 59, no. 18, pp. 11936–11948, 1999.
[29] J. Ferrer, C. J. Lambert, V. M. García-Suárez, D. Z. Manrique, D. Visontai, L. Oroszlány, R. Rodríguez-Ferradás, I. Grace, S. Bailey, K. Gillemot, et al., “GOLLUM: a next-generation simulation tool for electron, thermal and spin transport,” New Journal of Physics, vol. 16, no. 9, p. 093029, 2014.
[30] A. E. Miroshnichenko, S. Flach, and Y. S. Kivshar, “Fano resonances in nanoscale structures,” Rev. Mod. Phys., vol. 82, pp. 2257–2298, Aug 2010.
[31] U. Fano, “Effects of configuration interaction on intensities and phase shifts,” Phys. Rev., vol. 124, pp. 1866–1878, Dec 1961.
[32] L. Oroszlány, A. Kormányos, J. Koltai, J. Cserti, and C. J. Lambert, “Nonthermal broadening in the conductance of double quantum dot structures,” Physical Review B, vol. 76, p. 045318, 2007.
[33] S. Sangtarash, A. Vezzoli, H. Sadeghi, N. Ferri, H. M. O’Brien, I. Grace, L. Bouffier, S. J. Higgins, R. J. Nichols, and C. J. Lambert, “Gateway state-mediated, long-range tunnelling in molecular wires,” Nanoscale, vol. 10, pp. 3060–3067, 2018.
[34] H. Sadeghi, Theory of electron and phonon transport in nano and molecular quantum devices: design strategies for molecular electronics and thermoelectricity. PhD thesis, 2016.
[35] D. John and D. Pulfrey, “Green’s function calculations for semi-infinite carbon nanotubes,” Physica Status Solidi (b), vol. 243, no. 2, pp. 442–448, 2006.
[36] R. Landauer, “Electrical transport in open and closed systems,” Z. Phys. B: Condens. Matter, vol. 68, no. 2-3, pp. 217–228, 1987.
[37] M. Büttiker, “Symmetry of electrical conduction,” IBM Journal of Research and Development, vol. 32, no. 3, pp. 317–334, 1988.
[38] M. Büttiker, “Coherent and sequential tunneling in series barriers,” IBM Journal of Research and Development, vol. 32, no. 1, pp. 63–75, 1988.
[39] M. Brandbyge, J.-L. Mozos, P. Ordejón, J. Taylor, and K. Stokbro, “Density-functional method for nonequilibrium electron transport,” Phys. Rev. B, vol. 65, p. 165401, Mar 2002.
[40] F. Furche, R. Ahlrichs, C. Hättig, W. Klopper, M. Sierka, and F. Weigend, “Turbomole,” Wiley Interdisciplinary Reviews: Computational Molecular Science, vol. 4, no. 2, pp. 91–100, 2014.
[41] S. Sangtarash, H. Sadeghi, and C. J. Lambert, “Exploring quantum interference in heteroatom-substituted graphene-like molecules,” Nanoscale, vol. 8, no. 27, pp. 13199–13205, 2016.
[42] S. Sangtarash, H. Sadeghi, and C. J. Lambert, “Connectivity-driven bi-thermoelectricity in heteroatom-substituted molecular junctions,” Phys. Chem. Chem. Phys., vol. 20, no. 14, pp. 9630–9637, 2018.
[43] C. C. Escott, F. A. Zwanenburg, and A. Morello, “Resonant tunnelling features in quantum dots,” Nanotechnology, vol. 21, no. 27, p. 274018, 2010.
[44] J. Koch and F. von Oppen, “Franck-Condon Blockade and Giant Fano Factors in Transport through Single Molecules,” Physical Review Letters, vol. 94, no. 20, p. 206804, 2005.
[45] C. S. Lau, H. Sadeghi, G. Rogers, S. Sangtarash, P. Dallas, K. Porfyrakis, J. Warner, C. J. Lambert, G. A. D. Briggs, and J. A. Mol, “Redox-dependent Franck-Condon blockade and avalanche transport in a graphene-fullerene single-molecule transistor,” Nano Letters, vol. 16, no. 1, pp. 170–176, 2015.
[46] E. Burzurí, Y. Yamamoto, M. Warnock, X. Zhong, K. Park, A. Cornia, and H. S. J. van der Zant, “Franck-Condon blockade in a single-molecule transistor,” Nano Lett., vol. 14, no. 6, pp. 3191–3196, 2014.
[47] T. Frederiksen, M. Paulsson, M. Brandbyge, and A.-P. Jauho, “Inelastic transport theory from first principles: Methodology and application to nanoscale devices,” Physical Review B, vol. 75, no. 20, p. 205413, 2007.
[48] O. Stetsovych, P. Mutombo, M. Švec, M. Šámal, J. Nejedlý, I. Císařová, H. Vázquez, M. Moro-Lagares, J. Berger, J. Vacek, I. G. Stará, I. Starý, and P. Jelínek, “Large converse piezoelectric effect measured on a single molecule on a metallic surface,” Journal of the American Chemical Society, vol. 140, no. 3, pp. 940–946, 2018. PMID: 29275621.
[49] J. M. Seminario, “An introduction to density functional theory in chemistry,” Theoretical and Computational Chemistry, vol. 2, pp. 1–27, 1995.
[50] A. D. Becke, “A new mixing of Hartree-Fock and local density-functional theories,” The Journal of Chemical Physics, vol. 98, no. 2, pp. 1372–1377, 1993.
[51] L. Hedin, “New method for calculating the one-particle Green’s function with application to the electron-gas problem,” Phys. Rev., vol. 139, pp. A796–A823, Aug 1965.
[52] M. Strange, C. Rostgaard, H. Häkkinen, and K. S. Thygesen, “Self-consistent GW calculations of electronic transport in thiol- and amine-linked molecular junctions,” Physical Review B, vol. 83, no. 11, p. 115108, 2011.
[53] T. Markussen, C. Jin, and K. S. Thygesen, “Quantitatively accurate calculations of conductance and thermopower of molecular junctions,” Physica Status Solidi (b), vol. 250, no. 11, pp. 2394–2402, 2013.
[54] T. Kim, P. Darancet, J. R. Widawsky, M. Kotiuga, S. Y. Quek, J. B. Neaton, and L. Venkataraman, “Determination of energy level alignment and coupling strength in 4,4′-bipyridine single-molecule junctions,” Nano Letters, vol. 14, no. 2, pp. 794–798, 2014. PMID: 24446585.
[55] R. Frisenda, S. Tarkuç, E. Galán, M. L. Perrin, R. Eelkema, F. C. Grozema, and H. S. van der Zant, “Electrical properties and mechanical stability of anchoring groups for single-molecule electronics,” Beilstein J. Nanotechnol., vol. 6, no. 1, pp. 1558–1567, 2015.
[56] X. Zhao, C. Huang, M. Gulcur, A. S. Batsanov, M. Baghernejad, W. Hong, M. R. Bryce, and T. Wandlowski, “Oligo(aryleneethynylene)s with terminal pyridyl groups: synthesis and length dependence of the tunneling-to-hopping transition of single-molecule conductances,” Chemistry of Materials, vol. 25, no. 21, pp. 4340–4347, 2013.
[57] G. Sedghi, V. M. García-Suárez, L. J. Esdaile, H. L. Anderson, C. J. Lambert, S. Martín, D. Bethell, S. J. Higgins, M. Elliott, N. Bennett, et al., “Long-range electron tunnelling in oligo-porphyrin molecular wires,” Nature Nanotechnology, vol. 6, no. 8, pp. 517–523, 2011.
[58] M. D. Yates, J. P. Golden, J. Roy, S. M. Strycharz-Glaven, S. Tsoi, J. S. Erickson, M. Y. El-Naggar, S. Calabrese Barton, and L. M. Tender, “Thermally activated long range electron transport in living biofilms,” Phys. Chem. Chem. Phys., vol. 17, pp. 32564–32570, 2015.
[59] X. Zhao, C. Huang, M. Gulcur, A. S. Batsanov, M. Baghernejad, W. Hong, M. R. Bryce, and T. Wandlowski, “Oligo(aryleneethynylene)s with terminal pyridyl groups: synthesis and length dependence of the tunneling-to-hopping transition of single-molecule conductances,” Chemistry of Materials, vol. 25, no. 21, pp. 4340–4347, 2013.
Forty years as an AFOSR PI: Rod Bartlett’s Personal History

Written By Rod Bartlett

When most read a popular account of scientific progress, the focus is on the ‘big name’ projects that one knows from the press: solar energy storage, hydrogen fuel, reduction of greenhouse gases, cures for cancer, etc., and on the ‘experimental’ tools used to address these issues that measure the success or failure of some hypothesis. That, after all, is the scientific method. But as observations are made, science is trying to construct an underlying, organizing ‘theory’ that explains the experiment and will explain untold other future observations. An example is the difference between Newton showing that a prism splits light into many different colors (an experiment) and deriving the equations from his very general laws that explain the observed optics of the prism. The latter enable ‘predicting’ untold other optical phenomena in the absence of experiment. Therefore, in this case Sherlock’s admonition that it is ‘dangerous to theorize without the facts’ needs some modification for ‘predictive’ theory. When the equations are correct and can be solved, the results have to be true. Today, that kind of predictive theory is what has been developed by quantum mechanics, which in Dirac’s phrase underlies ‘all of chemistry.’ Except, in his opinion, ‘the equations are too difficult to be soluble.’ The latter is no longer true. All those highly visible ‘big name’ projects depend upon chemistry, and chemistry deals with, in Mulliken’s phrase, ‘what the electrons are really doing in molecules.’ With this knowledge, the energies of reactions, the activation barriers that control what reactions occur, and the spectroscopic fingerprints that identify the molecules become known. The description of electrons requires the solution to the familiar, quantized equations of quantum mechanics for the electronic ‘wavefunctions’ and their energies, HΨk = EkΨk. But the H in these equations describes the Coulombic interactions among a molecule’s ‘many electrons’. That means the water molecule’s 10 electrons produce a ‘10-body’ problem (45 electron-electron interactions), or for benzene, a ‘46-body’ one (1081 interactions), or for a piece of DNA, many more. Yet we can only solve the Schrödinger equation of QM exactly for 1 electron, the hydrogen atom. So we are faced with having to develop mathematical and computational tools that allow sufficiently accurate solutions of such many-electron problems to obtain the secrets of the molecules in question. When we are able to do that, we have a direct route to facts that are not typically amenable to experimental observation, like for molecules under extreme conditions as in explosions, or in interstellar space, or the detection and identification of rocket plumes, or the design of new concepts for fuels, among many other applications. Providing these solutions is the science of quantum chemistry. But one major problem remained in its application: the problem of ‘electron correlation’. Electrons are charged particles, meaning they interact instantaneously through Coulomb forces that cause their motions to be ‘correlated’, and these interactions are missing from an average (‘mean-field’) approximation like the well-known Hartree-Fock theory. The latter approximates Ψ0 by Φ0, the familiar molecular orbital approximation that provides the conceptual interpretation of much of chemistry.
Quantum chemical solutions to define Φ0 have been practical for many applications since the sixties, but the relatively small ‘correlation’ contribution that distinguishes the correct solution is critical to a ‘predictive’ theory for bond energies, activation barriers, spectra, and structure, indeed chemistry. As such, it has been the dominant unsolved problem in quantum chemistry for about 50 years. In our forty years of AFOSR support, a number of notable advances have been made in the solution of the correlation problem. As a young scientist at Battelle in Columbus, Ohio, I approached Ralph Kelley, an AFOSR program manager in physics, about support. I told him about using many-body perturbation theory (MBPT) and its diagrammatic framework, borrowed from quantum field theory and Feynman diagrams, to treat ‘electron correlation.’ He and AFOSR enabled me to start as an AFOSR PI in 1978. As a postdoc at Johns Hopkins with Robert Parr, I had been given the freedom to pursue the many-body theory I had begun as an NSF postdoc at Aarhus University in Denmark in 1973. My collaborator David Silver, from the Hopkins Applied Physics Lab, and I had written the first papers in chemistry in 1974-76 showing the potential power of MBPT. Prior work was due to Hugh Kelly in physics, who applied MBPT to atoms; but molecules require a very different treatment, so these were the first such applications. The reason it is called many-body perturbation theory (MBPT) is that the theory is based on the linked-diagram theorem of Brueckner and Goldstone, which guarantees correct scaling with the number of electrons. Linked diagrams describe the electron-electron interactions in the most compact way. The energy of one of these quantum states has to be ‘extensive’, so it should grow correctly with the number of electrons, a feature we later termed ‘size-extensivity’ as the rationale for all many-body treatments. It should be obvious that when all the units (or atoms) in a molecule are too far apart to interact, the correct energy should be the sum of the energies of the units; but this condition is not met by the variational configuration interaction approximations that were in dominant use during those 50 years. The many manifestations of size-extensivity were not to be fully realized until the turn of the century. Today, it is deemed a fundamental property that all worthy electronic structure approximations should satisfy. Two years after our initial MBPT papers, John Pople decided to apply this method, but chose to call it Møller-Plesset perturbation theory (MPPT) as he tried to avoid the less familiar diagrammatic tools we used. But his terminology hides the fundamental rationale for these many-body methods, in that the identification of ‘linked diagrams’ guarantees size-extensivity, and this feature is not apparent in ordinary perturbation theory. Today, the MBPT=MPPT methods for solving the Schrödinger equation are in virtually all quantum chemical programs. A search of the Web of Science shows that though there were only a handful of citations in the 70’s, and a couple of hundred until ~1989, there are now more than 295,700 citations to the method and 8105 papers written about it. But perturbation methods are limited to some order, and since the correlation correction is not small (in extreme forms it accounts for phase transitions in solids and superconductivity), a far more powerful many-body approach is to sum many such linked-diagram terms that describe correlation to infinite order.
This is the idea of coupled-cluster (CC) theory, which shows that the correct, infinite-order MBPT wavefunction for any system is Ψ0 = exp(T)|Φ0⟩. The exponential form guarantees size-extensivity. The cluster separation T = T1 + T2 + T3 + …, where the subscripts indicate one-electron, two-electron, three-electron, … clusters, provides a framework for a wealth of approximations determined by the number of clusters retained, like CCSD for single and double ones. The size-extensive property is at work at any truncation of T, providing superior solutions to any that had been previously obtained for the same computational effort. This is because the CC wavefunction, even limited to T2, the double excitation cluster operator, automatically has all products like ½T2², which are ‘quadruple’ excitations, and ⅙T2³, ‘hextuple,’ etc., in its wavefunction. As T1 and T3 are added, one rapidly exhausts the effects of electron correlation, converging to the exact solution. The first general applications of CC theory, i.e. CCD for just T2, were reported in 1978 by us and Pople in back-to-back papers. Then George Purvis and I first reported CCSD (CC for single and double excitations) in 1982. Our CC papers were supported by AFOSR. Because of the products included in exp(T2), unlike CI, most ‘quadruple’ effects are already included in CCSD, so the next most important term is due to T3. In our next AFOSR work (1984) we reported the first general inclusion of triple excitations (CCSDT-1), followed by the first non-iterative approximation, CCSD[T], in 1985. A better non-iterative approximation, CCSD(T), which added one small term to [T], was introduced by the Pople group (1989) without a rigorous derivation; we presented that in 1993. The latter is now called the ‘gold standard’ of quantum chemical calculations. In 1987 we reported the full CCSDT method for the first time, sometimes called the ‘platinum standard’, followed later by full quadruples, CCSDTQ, and pentuples, CCSDTQP! In this way we were able to show the rapid convergence of CC theory to the exact result, documenting its predictive character. Another citation check shows that from virtually no mention in the seventies, to less than a hundred citations in the eighties, CC theory has now spawned 28,780 papers and over 700,000 citations. Another advantage that calculations have over experiment is the flexibility of application. In a second project with AFOSR, the physics program manager with responsibility for non-linear optics (NLO), Col. Gordon Wepfer, showed me experimental results for electric-field-induced second and third harmonic generation experiments in the gas phase, compared to the theory of the time. The theory was hopeless! NLO effects are critical to all kinds of problems, from protecting pilots’ eyes from lasers to doing selective surface chemistry. They are, in principle, amenable to quantum chemistry, as they depend upon the higher terms in the expansion of a molecule’s energy in the presence of (frequency-dependent) electric fields. These quantities are called hyperpolarizabilities, as they are higher-order generalizations of the well-known dipole polarizability of a molecule. I could not promise that we could resolve the discrepancy between theory and experiment, but with our new CC/MBPT methods I could promise to do calculations for such quantities with the best correlated quantum chemistry that existed.
It took a few years and required some new theory for the treatment of the frequency-dependent effects, but, indeed, we were able to explain the observed experimental values for the first time. Another illustration of the flexibility of application occurred when Capt. Pat Saatzer of the Rocket Lab asked us at Battelle to provide a theory complement to two experimental efforts, one directed by John Fenn, later to be a Nobel Laureate, to determine the cross sections for vibrational excitations when components of combusted fuel collide with O atoms in the upper atmosphere. The idea is that, depending upon the products in the fuel, a knowledge of these signatures allows one to identify whose missile it is. This kind of problem requires the combined efforts of molecular dynamics and quantum chemistry, the latter to provide the potential energy surface of interactions between the molecules and O atoms, and the dynamics, done by Mike Redmon, to add the time-dependent aspects. Both experiments failed, leaving only the theory to provide the cross sections required in the deployment of detectors. A third illustration deals with NMR spectra. NMR has two components, a vector term that gives the chemical shift and a scalar term that provides the J-J spin-coupling constants for molecules. As the latter connects any two atoms in a molecule through its electronic density, the analogy with a chemical bond has inspired a lot of discussion. Once again inspired by the lack of agreement between theory and experiment, as pointed out in some review articles, Ajith Perera and I decided to apply our new CC/MBPT tools to resolving this issue. Once again these methods were remarkably successful, providing the first ‘predictive’ theory of J-J coupling constants. We went on to use them to further resolve the long-term argument between George Olah and H. C. Brown about the existence of non-classical C bonding, and, with Janet Del Bene, to study the ‘two-bond’ coupling across an H-bond in nucleic acid bases. By measuring the latter, one can infer the location of the H-bond that cannot be seen in X-ray analysis. We also offered a J-J signature for the meaning of a strong, weak, or normal H-bond. The next major theory effort for AFOSR was our further development of CC/MBPT, but now focused on excited states. In the Schrödinger equation above, the ‘k’ indicates one of the many quantized solutions to the problem. The others are important to electronic spectroscopy and photochemistry, among many other needs. In its original formulation, CC/MBPT provided very accurate results for one state, but we changed that by introducing what we call the equation-of-motion (EOM) CC, starting in 1984-1992. This enables one to add a spectrum of excited states on top of a CC solution for the ground state. EOM also permits ‘excited’ states that differ from the ground state in the number of electrons, as in ionizations in photoelectron spectroscopy (IP-EOM-CC), or by adding an electron (EA-EOM-CC), or kicking out two electrons (DIP-EOM-CC), or adding two (DEA-EOM-CC). Hence, one now has a wide array of ways to describe ‘what the electrons are doing in molecules’ for a wealth of different situations. Subjecting EOM-CC (sometimes called CC linear response) to the same measure of use as the other two developments shows over 23,500 citations and 690 papers using these methods today.
Armed with all these tools, a fascinating problem arose in the new high-energy density material (HEDM) program geared toward new ideas for ‘revolutionary’ improvements in rocket fuels. I submitted a proposal entitled “An Investigation of Metastability in Molecules” to Drs. Larry Curtiss and Larry Burggraf in AFOSR chemistry, which asked the question: how much energy could be stored in a molecule with a sufficient barrier to decomposition to keep it around long enough to be useful? Later Dr. Mike Berman became the program manager, and he remains my program manager today. My proposal planned to use our predictive set of quantum chemical tools to address this question. Unlike synthesis, which is difficult, expensive, and dangerous, quantum chemical applications can explore prospects that exhibit different principles to see if any might be worthy of further study. One strategy for storing energy in molecules would be to force some atoms to bind in unfamiliar ways, a concept we termed ‘geometric metastability.’ A case in point is the tetrahedral form of N4. As the normal form of P4 is a tetrahedron, and N and P are isovalent, such a molecule makes sense. But while P2 is not very stable compared to P4, the N2 triple bond is one of the strongest bonds known, and four N atoms energetically prefer two N2 molecules to four single-bonded N atoms in a tetrahedron. That, of course, is exactly what one would like, since if the four N atoms could be put into a tetrahedron, and if there is a barrier to decomposition that would keep it around, then under stimulus all the energy in N4 could be released to N2 molecules. Our calculations show that N4 would release 190 kcal/mol and would be held together by a barrier of 40 kcal/mol, once the four atoms could be put into the tetrahedron. That, of course, is the difficulty. Although there have been some potential experimental observations, perhaps the best one is from mass spec, where its isoelectronic analogue, N3O+, has been seen. Another of our predictions was the existence of the pentazole anion, N5−. Again, this makes perfect sense in terms of its bonding, even achieving extra stability via its pi-electron aromaticity, like in benzene. In this case we predict a barrier to decomposition of 27 kcal/mol. It has now been observed in negative-ion mass spec as a byproduct of a known pentazole-containing molecule. The targets for the HEDM project originated with theory that spun off further work by DARPA and NASA, with the former pursuing serious synthetic efforts. Recently, another of our predictions, N8, seems to have been seen experimentally; some have also been seen in high-pressure experiments. Everyone in the computational field would love to be able to make accurate calculations by using an effective one-particle theory, so that all the complicated two-particle terms that must be described in CC theory could be avoided. This is the impetus for the development of Kohn-Sham density functional theory (KS-DFT). But unlike CC, there is no way to converge to the right answer, since the correct density functional is not known in any useful way. Instead, thousands of density-based approximations to the KS-DFT theory are made and used to get answers quickly, without any guarantee of veracity. In our current work for AFOSR we have tried to improve upon such an approach by insisting upon a rigorous foundation. That foundation starts with our formulation of a ‘correlated orbital theory’ (COT).
It was derived by manipulating the formally exact IP/EA-EOM-CC equations into an effective one-particle form, whose eigenvalues have to correspond to the energy required to remove any electron from the molecule (IP) or to add an electron to the molecule (EA). This approach augments the mean-field Hartree-Fock approximation with a correlation orbital potential (COP)! Since KS-DFT is a special case of COT, using this rigorous theory as a model, one can assess the accuracy of various DFT approximations. Finding that none satisfy our conditions, we took some well-known forms and, by virtue of the 2-4 parameters in them, fit them to satisfy our eigenvalue property, initially only for water’s five IPs. In this way we introduced QTP(00). Two new minimally parameterized approximations, QTP(01) and QTP(02), followed. All provide accurate one-particle spectra to some threshold from the eigenvalue attached to each MO, proven by testing them against 401 experimental values from 63 molecules. QTP(01) gets all valence IPs accurate to ~10%. An important application is core ionization and excitation, where QTP(00) is without peer; it accurately describes the core spectra of all the amino acids. Unlike any other DFT approximation, QTP(02) correctly describes the EA, both bound and unbound. The QTP family also gives excellent activation barriers, excited-state excitation energies, and the molecular densities themselves. As the avowed goal of DFT is to provide accurate densities, the QTP functionals do that better than most. This QTP family defines what we call ‘consistent’ KS-DFT approximations, since one cannot get the IP eigenvalues right without a good KS potential. Also, the connection between the orbital eigenvalues and IPs requires that the excitations given by adiabatic time-dependent DFT (TDDFT) be correct for excitation into the continuum, i.e. ionization. Further, an accurate potential mitigates the debilitating self-interaction error of KS-DFT, where electrons incorrectly interact with themselves. When we insist upon ‘consistency,’ we are a step closer to our goal of mimicking the predictive results of CC theory in a highly efficient one-particle theory. This is another testament to the CC revolution that began and was nurtured by AFOSR! Besides the AFOSR work mentioned here, it is important to recognize that other aspects of our formative many-body developments benefitted from exceptional support from ONR (Bobby Junker) and ARO (Mikal Ciftan), and their successors. But it is true that ALL these accomplishments are uniquely a research product of the DoD agencies who had the foresight to back them in their infancy. I am extremely appreciative of the confidence shown in our effort over these 40 years.
Artificial Intelligence Solves Schrödinger’s Equation, a Fundamental Problem in Quantum Chemistry – SciTechDaily

Scientists at Freie Universität Berlin develop a deep learning method to solve a fundamental problem in quantum chemistry. Up to now, it has been impossible to find an exact solution for arbitrary molecules that can be efficiently computed. But the team at Freie Universität has developed a deep learning method that can achieve an unprecedented combination of accuracy and computational efficiency. AI has transformed many technological and scientific areas, from computer vision to materials science. “We believe that our approach may significantly impact the future of quantum chemistry,” says Professor Frank Noé, who led the team effort. The results were published in the reputed journal Nature Chemistry.

Central to both quantum chemistry and the Schrödinger equation is the wave function, a mathematical object that completely specifies the behavior of the electrons in a molecule. The wave function is a high-dimensional entity, and it is therefore extremely difficult to capture all the nuances that encode how the individual electrons affect each other. Many methods of quantum chemistry in fact give up on expressing the wave function altogether, instead attempting only to determine the energy of a given molecule. This however requires approximations to be made, limiting the prediction quality of such methods. Other methods represent the wave function with the use of an immense number of simple mathematical building blocks, but such methods are so complex that they are impossible to put into practice for more than a mere handful of atoms.

“Escaping the usual trade-off between accuracy and computational cost is the highest achievement in quantum chemistry,” explains Dr. Jan Hermann of Freie Universität Berlin, who designed the key features of the method in the study. “As yet, the most popular such outlier is the extremely cost-effective density functional theory. We believe that deep ‘quantum Monte Carlo,’ the approach we are proposing, could be equally, if not more, successful. It offers unprecedented accuracy at a still acceptable computational cost.”

The deep neural network designed by Professor Noé’s team is a new way of representing the wave functions of electrons. “Instead of the standard approach of composing the wave function from relatively simple mathematical components, we designed an artificial neural network capable of learning the complex patterns of how electrons are located around the nuclei,” Noé explains. “One peculiar feature of electronic wave functions is their antisymmetry. When two electrons are exchanged, the wave function must change its sign. We had to build this property into the neural network architecture for the approach to work,” adds Hermann. This feature, known as the Pauli exclusion principle, is why the authors called their method “PauliNet.”

Besides the Pauli exclusion principle, electronic wave functions also have other fundamental physical properties, and much of the innovative success of PauliNet is that it integrates these properties into the deep neural network, rather than letting deep learning figure them out by just observing the data. “Building the fundamental physics into the AI is essential for its ability to make meaningful predictions in the field,” says Noé.
“This is really where scientists can make a substantial contribution to AI, and exactly what my group is focused on.” There are still many challenges to overcome before Hermann and Noé’s method is ready for industrial application. “This is still fundamental research,” the authors agree, “but it is a fresh approach to an age-old problem in the molecular and material sciences, and we are excited about the possibilities it opens up.”

Reference: “Deep-neural-network solution of the electronic Schrödinger equation” by Jan Hermann, Zeno Schätzle and Frank Noé, 23 September 2020, Nature Chemistry. DOI: 10.1038/s41557-020-0544-y
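As a toy illustration of the antisymmetry constraint the article describes (my own sketch, far simpler than PauliNet's actual architecture; the Gaussian "orbitals" are invented placeholders), a wave function written as a determinant of single-particle functions changes sign automatically when two electrons are swapped:

```python
import numpy as np

def orbitals(r):
    """Toy single-particle functions evaluated at electron positions r (n x 3).
    These Gaussians are placeholders, not PauliNet's learned orbitals."""
    centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    # matrix Phi[i, k] = phi_k evaluated at position of electron i
    return np.exp(-np.linalg.norm(r[:, None, :] - centers[None, :, :], axis=-1) ** 2)

def psi(r):
    """Slater-determinant wave function: antisymmetric by construction."""
    return np.linalg.det(orbitals(r))

r = np.random.default_rng(0).normal(size=(3, 3))   # three electrons in 3D
r_swapped = r[[1, 0, 2]]                           # exchange electrons 1 and 2
print(psi(r), psi(r_swapped))                      # equal magnitude, opposite sign
assert np.isclose(psi(r), -psi(r_swapped))
```

Swapping two electrons swaps two rows of the orbital matrix, and a determinant flips sign under a row exchange; building the sign flip into the architecture, rather than hoping a network learns it, is the design choice the authors emphasize.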
Hamiltonian operator for the hydrogen atom

In 1913, Niels Bohr obtained the energy levels and spectral frequencies of the hydrogen atom after making a number of simple assumptions in order to correct the failed classical model. Classically, an electron orbiting the nucleus radiates energy continuously; if the electron is assumed to orbit in a perfect circle while radiating, it would rapidly spiral into the nucleus, with a fall time of about 1.6 × 10⁻¹¹ s.[3] If this were true, all atoms would instantly collapse; however, atoms seem to be stable. Furthermore, the spiral inward would release a smear of electromagnetic frequencies as the orbit got smaller. In Bohr's model, by contrast, electrons do not emit radiation while in one of the stationary states, and an electron can gain or lose energy only by jumping from one discrete orbit to another. Bohr's predictions matched experiments measuring the hydrogen spectral series to first order, giving more confidence to a theory that used quantized values.

In the eigenvalue equation Ĥψ = Eψ, the wave function ψ describes the state of a quantum-mechanical system such as an atom or molecule, while the eigenvalue of the Hamiltonian operator Ĥ corresponds to the observable energy E. The hydrogen atom Hamiltonian is by now familiar: a kinetic-energy term plus the central Coulomb potential V(r) = -e²/(4πε₀r), where e is the electron charge. Since the Hamiltonian is time-independent, one can solve the energy eigenvalue equation at any specific instant of time. The solution of the Schrödinger equation (wave equation) for the hydrogen atom uses the fact that the Coulomb potential produced by the nucleus is isotropic (it is radially symmetric in space and only depends on the distance to the nucleus), which permits a solution by separation of variables. Exact analytical answers are available for the nonrelativistic hydrogen atom; ions with a single electron, such as He⁺ or Li²⁺, are "hydrogen-like atoms" in this context. If instead a hydrogen atom gains a second electron, it becomes an anion. Further, by applying special relativity to the elliptic orbits, Sommerfeld succeeded in deriving the correct expression for the fine structure of hydrogen spectra (which happens to be exactly the same as in the most elaborate Dirac theory).

Strictly, this is a two-body problem, but the reduced mass, defined for two masses m₁ and m₂ as μ = m₁m₂/(m₁ + m₂), effectively converts the two-body problem (two moving and interacting bodies in space) into a one-body problem (a single electron moving about a fixed point). Since the nucleus is much heavier than the electron, the electron mass and reduced mass are nearly the same; the mass ratios, when added to 1 in the denominator, represent very small corrections in the value of the Rydberg constant R, and thus only small corrections to all energy levels in the corresponding hydrogen isotopes. (Free protons, incidentally, are common in the interstellar medium and the solar wind.)

After appropriate adjustments are made to compensate for the change of variables, the Schrödinger equation becomes

\[-\hbar^{2}\frac{\partial}{\partial r}\left(r^{2}\frac{\partial\psi}{\partial r}\right)+\hat{L}^{2}\psi+2m_{e}r^{2}\left[V(r)-E\right]\psi(r,\theta,\phi)=0\]

Separation of variables then yields radial and angular equations; the azimuthal part Φ(ϕ) takes the form

\[\frac{1}{\Phi}\frac{d^{2}\Phi}{d\phi^{2}}+B=0\]

The resulting quantum numbers n, ℓ and m label the wavefunctions ψ_{nℓm}, with m the usual quantum number for the z component of orbital angular momentum, and the quantum numbers determine the layout of the nodes: in plots of the orbitals, black lines occur in each but the first orbital; these are the nodes of the wavefunction.

The Hamiltonian of a hydrogen atom in a uniform B-field (neglecting the diamagnetic term) has unchanged eigenstates, but the energy eigenvalues now depend on m. The additional term is called the Zeeman shift, and it is no larger than about 10⁻²² J ≈ 10⁻⁴ eV. The additional magnetic-field terms are important in a plasma, however, because the typical radii there can be much bigger than in an atom. Finally, the Hamiltonian operator H for more complicated systems is patterned after those discussed previously for the one-electron "box" and atom; a helium atom, for example, consists of a nucleus of charge +2e surrounded by two electrons.
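To connect the quantized energy levels discussed above to the observed spectral series, here is a short illustrative Python sketch (mine, not part of the page) evaluating E_n = -13.6 eV / n² and the wavelengths of the first Balmer lines:

```python
RY_EV = 13.605693      # Rydberg energy in eV (infinite nuclear mass)
HC_EV_NM = 1239.84198  # h*c in eV*nm

def energy_level(n):
    """Bohr/Schrödinger energy of hydrogen level n, in eV."""
    return -RY_EV / n**2

def transition_wavelength_nm(n_upper, n_lower):
    """Photon wavelength for the n_upper -> n_lower transition."""
    delta_e_ev = energy_level(n_upper) - energy_level(n_lower)  # positive
    return HC_EV_NM / delta_e_ev

for n in (3, 4, 5):
    print(f"Balmer {n}->2: {transition_wavelength_nm(n, 2):.1f} nm")
# prints ~656.1, 486.0, 433.9 nm (H-alpha, H-beta, H-gamma),
# neglecting the small reduced-mass correction discussed above
```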
Canonical Commutation Relation

In quantum mechanics, the canonical commutation relation is the fundamental relation between canonical conjugate quantities (quantities which are related by definition such that one is the Fourier transform of another). For example, between the position operator x and momentum operator p_x in the x direction of a point particle in one dimension,

$$[\hat{x},\hat{p}_{x}]=i\hbar$$

where [x, p_x] = x p_x - p_x x is the commutator of x and p_x, i is the imaginary unit, and ℏ is the reduced Planck constant h/2π. In general, position and momentum are vectors of operators, and the commutation relation between different components of position and momentum can be expressed as

$$[\hat{r}_{i},\hat{p}_{j}]=i\hbar\,\delta_{ij}$$

where δ_ij is the Kronecker delta. This relation is attributed to Max Born (1925),[1] who called it a "quantum condition" serving as a postulate of the theory; it was noted by E. Kennard (1927)[2] to imply the Heisenberg uncertainty principle. The Stone-von Neumann theorem gives a uniqueness result for operators satisfying (an exponentiated form of) the canonical commutation relation.

Relation to classical mechanics

By contrast, in classical physics all observables commute and the commutator would be zero. However, an analogous relation exists, which is obtained by replacing the commutator with the Poisson bracket multiplied by iℏ, since {x, p} = 1. This observation led Dirac to propose that the quantum counterparts f̂, ĝ of classical observables f, g satisfy

$$[\hat{f},\hat{g}]=i\hbar\,\widehat{\{f,g\}}$$

In 1946, Hip Groenewold demonstrated that a general systematic correspondence between quantum commutators and Poisson brackets could not hold consistently.[3][4] However, he further appreciated that such a systematic correspondence does, in fact, exist between the quantum commutator and a deformation of the Poisson bracket, today called the Moyal bracket, and, in general, quantum operators and classical observables and distributions in phase space. He thus finally elucidated the consistent correspondence mechanism, the Wigner-Weyl transform, that underlies an alternate equivalent mathematical representation of quantum mechanics known as deformation quantization.[3][5]

The Weyl relations

The group generated by exponentiation of the 3-dimensional Lie algebra determined by the commutation relation [x̂, p̂] = iℏ is called the Heisenberg group. This group can be realized as the group of upper triangular matrices with ones on the diagonal.[6]

According to the standard mathematical formulation of quantum mechanics, quantum observables such as x̂ and p̂ should be represented as self-adjoint operators on some Hilbert space. It is relatively easy to see that two operators satisfying the above canonical commutation relations cannot both be bounded. Certainly, if x̂ and p̂ were trace-class operators, taking the trace of the relation would give zero on the left (since tr(AB) = tr(BA)) but a nonzero number on the right. Alternately, if x̂ and p̂ were bounded operators, note that [x̂, p̂ⁿ] = iℏ n p̂ⁿ⁻¹, hence the operator norms would satisfy

$$2\,\|\hat{x}\|\,\|\hat{p}^{n-1}\|\,\|\hat{p}\|\;\geq\;n\hbar\,\|\hat{p}^{n-1}\|,$$

so that, for any n, 2‖x̂‖‖p̂‖ ≥ nℏ. However, n can be arbitrarily large, so at least one operator cannot be bounded, and the dimension of the underlying Hilbert space cannot be finite. If the operators satisfy the Weyl relations (an exponentiated version of the canonical commutation relations, described below) then, as a consequence of the Stone-von Neumann theorem, both operators must be unbounded.
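The trace obstruction above can be made tangible numerically. In the following small NumPy sketch (my own illustration, not part of the article), position and momentum matrices on a finite grid reproduce iℏ when acting on smooth vectors away from the boundary, yet the trace of any commutator of finite matrices is exactly zero, so [x, p] = iℏ·1 can never hold exactly in finite dimension:

```python
import numpy as np

hbar = 1.0
N, L = 256, 20.0                       # grid points, box length (arbitrary units)
dx = L / N
grid = np.linspace(-L / 2, L / 2 - dx, N)

x = np.diag(grid)                      # position operator: multiplication by x
p = np.zeros((N, N), dtype=complex)    # momentum: central finite differences
for i in range(N - 1):
    p[i, i + 1] = -1j * hbar / (2 * dx)
    p[i + 1, i] = +1j * hbar / (2 * dx)

comm = x @ p - p @ x

v = np.exp(-grid**2)                   # smooth vector, negligible at the boundary
print(np.allclose((comm @ v)[2:-2], 1j * hbar * v[2:-2], atol=1e-2))  # True
print(np.trace(comm))                  # exactly 0: tr(xp) = tr(px) for matrices
```

The vanishing trace is precisely the algebraic fact used in the boundedness argument, and it is why the defect must pile up somewhere (here, at the grid boundary).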
Still, these canonical commutation relations can be rendered somewhat "tamer" by writing them in terms of the (bounded) unitary operators e^{ikx̂} and e^{iap̂}. The resulting braiding relations for these operators are the so-called Weyl relations

$$e^{ik\hat{x}}\,e^{ia\hat{p}}=e^{-ika\hbar}\,e^{ia\hat{p}}\,e^{ik\hat{x}}$$

These relations may be thought of as an exponentiated version of the canonical commutation relations; they reflect that translations in position and translations in momentum do not commute. One can easily reformulate the Weyl relations in terms of the representations of the Heisenberg group. The uniqueness of the canonical commutation relations, in the form of the Weyl relations, is then guaranteed by the Stone-von Neumann theorem. It is important to note that, for technical reasons, the Weyl relations are not strictly equivalent to the canonical commutation relation [x̂, p̂] = iℏ. If x̂ and p̂ were bounded operators, then a special case of the Baker-Campbell-Hausdorff formula would allow one to "exponentiate" the canonical commutation relations to the Weyl relations.[7] Since, as we have noted, any operators satisfying the canonical commutation relations must be unbounded, the Baker-Campbell-Hausdorff formula does not apply without additional domain assumptions. Indeed, counterexamples exist satisfying the canonical commutation relations but not the Weyl relations.[8] (These same operators give a counterexample to the naive form of the uncertainty principle.) These technical issues are the reason that the Stone-von Neumann theorem is formulated in terms of the Weyl relations. A discrete version of the Weyl relations, in which the parameters s and t range over Z/nZ, can be realized on a finite-dimensional Hilbert space by means of the clock and shift matrices.

The simple formula [x̂, p̂] = iℏ, valid for the quantization of the simplest classical system, can be generalized to the case of an arbitrary Lagrangian L.[9] We identify canonical coordinates (such as x in the example above, or a field Φ(x) in the case of quantum field theory) and canonical momenta π_x (in the example above it is p, or more generally, some functions involving the derivatives of the canonical coordinates with respect to time):

$$\pi_{i}\equiv\frac{\partial L}{\partial(\partial x_{i}/\partial t)}$$

This definition of the canonical momentum ensures that one of the Euler-Lagrange equations has the form

$$\frac{\partial}{\partial t}\pi_{i}=\frac{\partial L}{\partial x_{i}}$$

The canonical commutation relations then amount to

$$[\hat{x}_{i},\hat{\pi}_{j}]=i\hbar\,\delta_{ij}$$

where δ_ij is the Kronecker delta. Further, using the identity [A, BC] = [A, B]C + B[A, C], it can be shown by mathematical induction that

$$[\hat{x}^{n},\hat{p}]=i\hbar\,n\,\hat{x}^{n-1},\qquad [\hat{x},\hat{p}^{n}]=i\hbar\,n\,\hat{p}^{n-1}$$

Gauge invariance

Canonical quantization is applied, by definition, on canonical coordinates. However, in the presence of an electromagnetic field, the canonical momentum p is not gauge invariant. The correct gauge-invariant momentum (or "kinetic momentum") is p_kin = p - qA (SI units) or p_kin = p - qA/c (cgs units), where q is the particle's electric charge, A is the vector potential, and c is the speed of light. Although the quantity p_kin is the "physical momentum", in that it is the quantity to be identified with momentum in laboratory experiments, it does not satisfy the canonical commutation relations; only the canonical momentum does that. This can be seen as follows. The non-relativistic Hamiltonian for a quantized charged particle of mass m in a classical electromagnetic field is (in cgs units)

$$H=\frac{1}{2m}\left(p-\frac{qA}{c}\right)^{2}+q\varphi$$

where A is the three-vector potential and φ is the scalar potential. This form of the Hamiltonian, as well as the Schrödinger equation Hψ = iℏ∂ψ/∂t, the Maxwell equations and the Lorentz force law, are invariant under the gauge transformation

$$A\to A'=A+\nabla\Lambda,\qquad \varphi\to\varphi'=\varphi-\frac{1}{c}\frac{\partial\Lambda}{\partial t},\qquad \psi\to\psi'=e^{iq\Lambda/\hbar c}\,\psi,$$

where Λ = Λ(x,t) is the gauge function.
The angular momentum operator is

$$L_{i}=\epsilon_{ijk}\,x_{j}\,p_{k}$$

and obeys the canonical quantization relations

$$[L_{i},L_{j}]=i\hbar\,\epsilon_{ijk}\,L_{k}$$

defining the Lie algebra for so(3), where ε_ijk is the Levi-Civita symbol. Under gauge transformations, the angular momentum transforms in a gauge-dependent way. The gauge-invariant angular momentum (or "kinetic angular momentum") is given by

$$K=r\times\left(p-\frac{qA}{c}\right)$$

which obeys commutation relations that, unlike those of the canonical angular momentum, involve the magnetic field B. The inequivalence of these two formulations shows up in the Zeeman effect and the Aharonov-Bohm effect.

Uncertainty relation and commutators

All such nontrivial commutation relations for pairs of operators lead to corresponding uncertainty relations,[10] involving positive semi-definite expectation contributions by their respective commutators and anticommutators. In general, for two Hermitian operators A and B, consider expectation values in a system in the state ψ, the variances around the corresponding expectation values being (ΔA)² ≡ ⟨(A - ⟨A⟩)²⟩, etc. Then

$$\Delta A\,\Delta B\;\geq\;\frac{1}{2}\sqrt{\left|\langle[A,B]\rangle\right|^{2}+\left|\langle\{A-\langle A\rangle,\,B-\langle B\rangle\}\rangle\right|^{2}}$$

where [A, B] ≡ AB - BA is the commutator of A and B, and {A, B} ≡ AB + BA is the anticommutator. This follows through use of the Cauchy-Schwarz inequality, since |⟨A²⟩| |⟨B²⟩| ≥ |⟨AB⟩|², and AB = ([A, B] + {A, B})/2; and similarly for the shifted operators A - ⟨A⟩ and B - ⟨B⟩. (Cf. uncertainty principle derivations.) Substituting for A and B (and taking care with the analysis) yields Heisenberg's familiar uncertainty relation for x and p, as usual: Δx Δp ≥ ℏ/2.

Uncertainty relation for angular momentum operators

For the angular momentum operators L_x = y p_z - z p_y, etc., one has that

$$[L_{x},L_{y}]=i\hbar\,L_{z},$$

where ε_ijk is the Levi-Civita symbol and simply reverses the sign of the answer under pairwise interchange of the indices. An analogous relation holds for the spin operators. Here, for L_x and L_y,[10] in angular momentum multiplets ψ = |ℓ,m⟩, one has, for the transverse components of the Casimir invariant L_x² + L_y² + L_z², the z-symmetric relations ⟨L_x²⟩ = ⟨L_y²⟩ = (ℓ(ℓ + 1) - m²) ℏ²/2, as well as ⟨L_x⟩ = ⟨L_y⟩ = 0. Consequently, the above inequality applied to this commutation relation specifies

$$\Delta L_{x}\,\Delta L_{y}\;\geq\;\frac{\hbar}{2}\left|\langle L_{z}\rangle\right|=\frac{\hbar^{2}}{2}\,|m|,$$

and therefore (ℓ(ℓ + 1) - m²)/2 ≥ |m|; so, then, it yields useful constraints such as a lower bound on the Casimir invariant, ℓ(ℓ + 1) ≥ m(m + 1), and hence ℓ ≥ m, among others.

1. Born, M.; Jordan, P. (1925). "Zur Quantenmechanik". Zeitschrift für Physik. 34: 858. Bibcode:1925ZPhy...34..858B. doi:10.1007/BF01328531.
2. Kennard, E. H. (1927). "Zur Quantenmechanik einfacher Bewegungstypen". Zeitschrift für Physik. 44 (4-5): 326-352. Bibcode:1927ZPhy...44..326K. doi:10.1007/BF01391200.
3. Groenewold, H. J. (1946). "On the principles of elementary quantum mechanics". Physica. 12 (7): 405-460. Bibcode:1946Phy....12..405G. doi:10.1016/S0031-8914(46)80059-4.
4. Hall 2013, Theorem 13.13.
5. Curtright, T. L.; Zachos, C. K. (2012). "Quantum Mechanics in Phase Space". Asia Pacific Physics Newsletter. 01: 37-46. arXiv:1104.5269. doi:10.1142/S2251158X12000069.
6. Hall 2015, Section 1.2.6 and Proposition 3.26.
7. See Section 5.2 of Hall 2015 for an elementary derivation.
8. Hall 2013, Example 14.5.
9. Townsend, J. S. (2000). A Modern Approach to Quantum Mechanics. Sausalito, CA: University Science Books. ISBN 1-891389-13-0.
10. Robertson, H. P. (1929). "The Uncertainty Principle". Physical Review. 34 (1): 163-164. Bibcode:1929PhRv...34..163R. doi:10.1103/PhysRev.34.163.

• Hall, Brian C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, 267, Springer.
• Hall, Brian C. (2015), Lie Groups, Lie Algebras and Representations, An Elementary Introduction, Graduate Texts in Mathematics, 222 (2nd ed.), Springer.
The end of “Love”? #cirquedusoleil #beatles

It was sad to hear the news today that one of the victims of the Covid-19 virus is Cirque Du Soleil, as they declared bankruptcy and laid off thousands of workers. All of their shows have ceased operations due to the virus, and the company incurred massive amounts of debt. Hopefully the company will be able to restructure and resume operations of some of its shows, but with the recent resurgence of the virus in the United States, it doesn’t look like they’ll be resuming productions anytime soon. Which very well could spell the end of my favorite Cirque show, the Beatles masterpiece, “Love”. I’ve seen Love a total of three times and every time I am blown away. The Love soundtrack alone is mind-blowing for any Beatles fan. To create the show’s lush soundscape, producers Sir George Martin (RIP) and his son, Giles, worked at Abbey Road Studios with the entire archive of Beatles master recordings.

The Beatles LOVE
1. Because (LOVE Version)
2. Get Back (LOVE Version)
3. Glass Onion (LOVE Version)
4. Eleanor Rigby/Julia (LOVE Version)
5. I Am The Walrus (LOVE Version)
6. I Want To Hold Your Hand (LOVE Version)
7. Drive My Car/The Word/What You’re Doing (LOVE Version)
8. Gnik Nus (LOVE Version)
9. Something/Blue Jay Way (LOVE Version)
10. Being For The Benefit Of Mr Kite!/I Want You (She’s So Heavy)/Helter Skelter (LOVE Version)
11. Help! (LOVE Version)
12. Blackbird/Yesterday (LOVE Version)
13. Strawberry Fields Forever (LOVE Version)
14. Within You Without You/Tomorrow Never Knows (LOVE Version)
15. Lucy In The Sky With Diamonds (LOVE Version)
16. Octopus’s Garden (LOVE Version)
17. Lady Madonna (LOVE Version)
18. Here Comes The Sun/The Inner Light (LOVE Version)
19. Come Together/Dear Prudence/Cry Baby Cry (LOVE Version)
20. Revolution (LOVE Version)
21. Back In The U.S.S.R. (LOVE Version)
22. While My Guitar Gently Weeps (LOVE Version)
23. A Day In The Life (LOVE Version)
24. Hey Jude (LOVE Version)
25. Sgt. Pepper’s Lonely Hearts Club Band (Reprise) (LOVE Version)
26. All You Need Is Love (LOVE Version)
27. Girl (Bonus)
28. Fool On The Hill (Bonus)

My Cousin Dave’s Guide to Why Your Stupid Smart TV Sounds Damn Crappy #mycousindave

My Cousin Dave recently sent me an email:

Cousin Dave
Why does the sound on everything stink today? TV’s, Tablets, computers, cells. (Corporate greed! Oh, buy this special sound component.) My Stupid Samsung Smart TV, brand new, on 100, the highest it goes, is just audible on some channels. And my hearing is perfectly normal.

Well, Dave, there are several factors. But it basically boils down to two: audio compression and format. Almost all recorded audio today is compressed. CD quality was established as an audio standard when CDs were first released. The quality is crystal clear and typically mixed to be presented as stereo sound with 2 channels (Left and Right). The problem is that the data files are really big and you can only fit about 80 minutes of music, or 700MB. That’s about 35MB for a 3 minute song. Then the Internet became a thing and people very quickly realized that downloading a song took FOREVER. So one guy said, hey, since humans can’t really hear EVERY frequency, why don’t we remove some of that “extra data”? And so he set to chopping out the bits (compressing) HE deemed weren’t important. So now we have a whole generation of kids that only grew up hearing compressed audio (MP3s) and don’t know any better. This same compressed audio is used in streaming movies and television today because of the same logic.
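For the nerds who want to check the math on that, here’s a quick back-of-the-envelope in Python (44.1 kHz, 16-bit, 2 channels are the actual Red Book CD specs; everything else is just arithmetic):

```python
# Red Book CD audio: 44,100 samples/sec, 16 bits (2 bytes) per sample, 2 channels
bytes_per_second = 44_100 * 2 * 2          # = 176,400 bytes/sec
song_mb = bytes_per_second * 3 * 60 / 1_000_000
disc_mb = bytes_per_second * 80 * 60 / 1_000_000

print(f"3-minute song: {song_mb:.0f} MB")   # ~32 MB, uncompressed
print(f"80-minute disc: {disc_mb:.0f} MB")  # ~847 MB raw; the familiar "700 MB"
                                            # is the data-disc capacity, which
                                            # spends extra space on error correction
```

So the “35MB for a 3 minute song” figure is right in the ballpark.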
Some stations and streaming platforms compress the fuck out of the audio and/or picture. The only way to get really, really good quality picture and sound is to buy the Blu-ray versions and play them on a really good home theater system. That’s the first thing: compression.

The second thing is audio format. Unless you have your smart TV connected to a fancy home theater audio system, you’re likely hearing plain old (compressed) stereo sound. If the source of that audio was originally mixed for stereo, it probably sounds fine. But if it was mixed for more than 2 speakers, such as Dolby 5.1 Surround, you are likely not hearing some of the mix. 5.1 refers to the number of speakers that an audio track is mixed for. In a typical 5.1 setup you would have Front L and R, Front Center, Side L and R, and a subwoofer. The Front Center would typically carry the majority of the dialogue, while the sides and subwoofer carry music/SFX and so on. This is how they create that “surround sound”. The problem is, if you don’t have that center speaker, you’re probably missing much of the dialogue audio. Most smart TVs attempt to compensate for this with some audio trickery, but it is inconsistent because there are so many different ways the original audio can be formatted.
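The usual trick is a “downmix”: fold the center and surround channels into left and right at reduced gain. Here is a minimal sketch of the idea; the 1/√2 (about 0.707, i.e., −3 dB) gains follow the common ITU-style downmix convention, though every TV maker does its own variation:

```python
import numpy as np

def downmix_51_to_stereo(fl, fr, c, lfe, sl, sr):
    """Fold a 5.1 mix (one sample array per channel) down to stereo.

    The center and surround channels are attenuated by ~3 dB so the
    dialogue isn't lost, but isn't doubled in level either.
    """
    g = 1 / np.sqrt(2)            # about 0.707, i.e., -3 dB
    left = fl + g * c + g * sl
    right = fr + g * c + g * sr
    # The LFE channel is often dropped entirely in stereo downmixes;
    # small TV speakers can't reproduce it anyway.
    return left, right
```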
It’s like if someone were to listen to early Beatles with all the treble down and all the bass up… it would totally sound fucked up. Your smart TV likely has a few different audio settings (check your manual or look at your remote). You may try switching to a different format that sounds better to you for whatever you are watching at the time. Or you can start investing in a home theater system and spend thousands of dollars and thousands of hours learning the finer points of audio engineering. Or just get some damn headphones ya deaf bastard!

Enjoyed #Devs on #Hulu? Yes… but is it real?

I recently watched the FX-produced “limited series” Devs on Hulu. It stars Nick Offerman (aka Ron Swanson from “Parks & Recreation“) as the “mad genius” and Sonoya Mizuno as the protagonist trying to uncover the secret behind her boyfriend’s sudden and inexplicable disappearance. I might add that Sonoya Mizuno may very well be my new favorite actress. If you’re not familiar with her work, watch this: The Rise of Sonoya Mizuno

The roughly 8-hour show (broken into 8 segments) is the brainchild of writer/director Alex Garland, who also wrote and directed Annihilation and Ex Machina and wrote 28 Days Later. Garland has been on my radar for several years and has brought some of the most intriguing science fiction to both the big and little screens in recent years. Here’s what I wrote on Facebook after seeing Annihilation a couple years ago:

I was a little late to the game on Devs; despite having seen the name pop up as recommended by several friends, I never went digging enough to find it. I saw it once on Apple TV+ but it wanted me to buy it… then I realized it was on Hulu for free! So I binged all 8 episodes in about 2 Covid quarantine days. The story goes like this: A Russian-born software security developer working for a Silicon Valley tech giant gets recruited by the company’s Founder/CEO, Forest (Offerman), to join an elite team within the company called “Devs”. Shortly after he joins Devs, he disappears, and his girlfriend, Lily (Mizuno), suspects foul play, leading her on a journey that weaves international espionage, high tech, quantum computing, determinism and the concept of a multiverse.

Lily and Forest in Devs

The multiverse theory is a very real theory that has its roots in ancient Greek philosophy; it postulates that the multiverse is a hypothetical group of multiple universes. Together, these universes comprise everything that exists: the entirety of spacetime, matter, energy, information, and the physical laws and constants that describe them. The different universes within the multiverse are called “parallel universes,” “other universes,” “alternate universes,” or “many worlds.” The idea of a multiverse has been debated by physicists and philosophers alike, and it has been the subject of many modern science fiction works, including the Marvel Cinematic Universe, Star Trek, Family Guy, Hitchhiker’s Guide to the Galaxy, Chronicles of Narnia… and many, many more.

In Devs, the multiverse is explored as a predictive tool. If every effect has a cause (say you drop a pen: the pen will fall to the floor), the world is predictable and quantifiable. Devs takes it a step further by saying that if the pen is already on the floor, you can calculate how it got there, essentially peering back in time to the initial drop. Once you can visualize it being dropped, in theory, you can continue predicting backwards (and forwards) further and further, using massive computing power to run through all possible scenarios and accurately visualize the most likely outcomes. It very quickly gets sticky and mired in the ethics of this technology… and the concepts of “free will”, determinism and quantum physics are all blended nicely in the Devs universe. Forest is driven to build this technology by a great loss he suffered, and he hopes to use it to recapture what he lost. Lily works for the same company and uncovers the mystery of her boyfriend’s disappearance and the truth behind Devs, but begins to question her own thoughts and reality along the way.

Trailer for Devs

The series is visually stunning, filled with religious imagery and themes of death and rebirth. The Devs soundtrack is fantastic as well, with one notable episode starting and ending with a song called Congregation by Low. Every episode starts and ends with a unique song. It’s quite a fun watch, and I highly recommend it. But… could it happen for real??? Some physicists say perhaps. Excerpts from the article “Physicists Have Reversed Time on The Smallest Scale Using a Quantum Computer”:

“It’s easy to take time’s arrow for granted – but the gears of physics actually work just as smoothly in reverse. Maybe that time machine is possible after all?

“It’s a principle that explains why your coffee won’t stay hot in a cold room, why it’s easier to scramble an egg than unscramble it, and why nobody will ever let you patent a perpetual motion machine.

“It’s also the closest we can get to a rule that tells us why we can remember what we had for dinner last night, but have no memory of next Christmas.

“That law is closely related to the notion of the arrow of time that posits the one-way direction of time from the past to the future,” said quantum physicist Gordey Lesovik from the Moscow Institute of Physics and Technology.

“Virtually every other rule in physics can be flipped and still make sense. For example, you could zoom in on a game of pool, and a single collision between any two balls won’t look weird if you happened to see it in reverse.

“On the other hand, if you watched balls roll out of pockets and reform the starting pyramid, it would be a sobering experience. That’s the second law at work for you.
“On the macro scale of omelettes and games of pool, we shouldn’t expect a lot of give in the laws of thermodynamics. But as we focus in on the tiny gears of reality – in this case, solitary electrons – loopholes appear.

“Electrons aren’t like tiny billiard balls, they’re more akin to information that occupies a space. Their details are defined by something called the Schrödinger equation, which represents the possibilities of an electron’s characteristics as a wave of chance.

Read more here: Physicists Have Reversed Time on The Smallest Scale Using a Quantum Computer

Growing New Livers in a Lab #livertransplant

Assembly and Function of a Bioengineered Human Liver for Transplantation Generated Solely from Induced Pluripotent Stem Cells. That’s the fancy title of a new report by Kazuki Takeishi and other scientists who have successfully created miniature human livers from stem cells and put them into mice. I won’t get into the details, mostly because I don’t understand them, but here’s a picture:

A picture is worth a 1000 liver transplants.

You can read the very technical research paper here on Growing Mini Livers. About 17,000 people are currently waiting for a liver transplant in the United States. This number greatly exceeds the number of livers available from deceased donors. Meanwhile, organ transplants can be prohibitively expensive. In 2017, patients receiving a liver transplant were billed an estimated $812,500. That includes pre- and post-op care as well as immunosuppressant drugs to keep people’s bodies from rejecting the transplanted organ. I am one of those liver transplant recipients. My donor passed away on May 12th, 2020, and in the early hours of May 13th, my dying liver was removed and replaced with the donor’s healthy liver in an operation that lasted about 4 hours. That was exactly three weeks ago, but I could have been much more unlucky. Each year an estimated 2,000 people die while on the national transplant list… there are just not enough donated livers to keep up with demand. And you can’t live without a functioning liver… it is one of the most important organs and supports over 500 key body functions. While the science isn’t quite ready for prime time, scientists expect that within 10 years, liver donations will be a thing of the past. You can read a much less science-y version of the story here: Lab Grown Human Mini Livers

The Voicemail I Desperately Needed. #livertransplant

As I write this, I consider myself very fortunate. I was diagnosed with End-Stage Liver Disease in October of 2019 and spent the past six months in and out of the hospital and the ICU, having life-extending procedures and taking drugs to keep my damaged liver from completely shutting down. I was officially placed on the national donor list in late February, a list with 16,000+ other transplant candidates, and a list on which 2,000+ hopefuls sadly pass away each year before the right organ is found. So I waited, battling the symptoms that made me weak and sick: draining fluid from my abdomen and chest cavities, suffering periodic life-threatening ammonia spikes that could cause me to become unconscious without warning, drops in hemoglobin, anemia, kidney failure, internal bleeding… The symptoms kept getting worse and tested my resolve many times. But then at about 10:30 pm, Vanessa, my Liver Transplant Coordinator, left me this message:

I was beside myself and shaking with this news. I was scared, but this was what I was waiting for.
My buddy Jack came and picked me up and we drove to Advent and checked into pre-op (called the “Rapid In/Out” or “RIO” department). Shower, chest X-ray, blood tests, Covid test, wait, wait, wait. Around 7 am the surgeon came in and said they were looking at a noon-ish time for surgery. He said he had not seen the donor liver yet, but he needed to see it before they brought me in to make sure it was viable. Noon turned into 2 pm. 2 pm turned into 4 pm. At 4 pm a nurse came in and I did the final prep for surgery. Compression socks, hair net, enema… at about 4:50 the surgeon came in and, in a very somber tone, said that he had finally seen the donor liver and it was not viable. It was “too fatty” and he couldn’t transplant it. Devastation. I was SO ready, and this just felt like the wind was taken out of my sails. I went back home in a daze and slept. I barely got out of bed the next couple of days. I was numb, but I knew this had been a possibility. And so again I waited. Fortunately I only had to wait a few days and I got another call. On Wednesday, May 13th, I had liver transplant surgery. I went under about 12:30 am and woke up about 8 hours later in the post-transplant ICU. In less than 24 hours I was in a normal recovery room and eating solid foods.

Post Liver Transplant Surgery with my sister Cara

Within 48 hours I was standing and walking with a walker, and within 5 days I was discharged from the hospital, walking out on my own two feet. It was amazing and my recovery has been quite smooth. My transplant surgeon has already reduced some of my meds and, after a couple weeks of staying with my parents, I am happy to be back in my own home and sleeping in my own bed. I have a weekly blood test and visit with my doctor, but so far all my lab results have been good. I’m eating well and all of the symptoms of my disease have disappeared. I have a new life!

On Civil Disobedience

Canon Turns It to Eleven

It’s been a long time coming, but Canon just released the beta version of their newest software, allowing a Canon EOS camera to be natively recognized by your computer as a webcam device. This was possible before, but it required extra video capture cards and degraded quality over HDMI. I have been a big fan of Canon cameras for a very long time but always lamented that they didn’t let you connect directly to your computer over the camera’s built-in USB port. Harris Heller does a great job explaining the newest features in this YouTube video.

The Zen of Alan Watts

I love listening to (and reading) Alan Watts. His unique perspective on the Universe and humanity’s role as part of this thing we call “life” has altered my views on several subjects… from the meta to the mundane. Here is a recent recording of one of his lectures… I hope you’re ready 🙂 You can learn more about Alan Watts at the Alan Watts Organization and Alan Watts on Wikipedia.
LOG#070. Natural Units.

Happy New Year 2013 to everyone and everywhere! Let me apologize, first of all, for my absence… I have been busy, trying to find my path and way in my field, and I am still busy, but finally I could not resist a new blog boost… After all, I have enough material to write about many new things. So, what’s next? I will dedicate some blog posts to discussing a nice topic I began before, talking about a classic paper on the subject here:

The topic is going to be pretty simple: natural units in Physics. First of all, let me point out that the choice of any system of units is, a priori, totally conventional. You are free to choose any kind of units for physical magnitudes. Of course, arbitrary choices are not very clever if you have to report data that everyone else should be able to understand and reproduce. Scientists have some definitions and popular systems of units that make the process much simpler than in daily life. Then, we need some general conventions about “units”. Indeed, the traditional wisdom is to use the International System of Units, or SI (abbreviated from the French Le Système international d’unités). There, you can find seven fundamental magnitudes and seven fundamental (or “natural”) units:

1) Space: \left[ L\right]=\mbox{meter}=m
2) Time: \left[ T\right]=\mbox{second}=s
3) Mass: \left[ M\right]=\mbox{kilogram}=kg
4) Temperature: \left[ t\right]=\mbox{kelvin}= K
5) Electric current (intensity): \left[ I\right]=\mbox{ampere}=A
6) Luminous intensity: \left[ I_L\right]=\mbox{candela}=cd
7) Amount of substance: \left[ n\right]=\mbox{mole}=mol(e)

The dependence between these 7 great units, and even their definitions, can be found here http://en.wikipedia.org/wiki/International_System_of_Units and references therein. I cannot resist showing you the beautiful graph from that Wikipedia article of the seven wonderful units and their “interdependence”:

In Physics, when you build a radically new theory, it generally has the power to introduce a relevant scale or system of units. In particular, the Special Theory of Relativity and Quantum Mechanics are such theories. General Relativity and Statistical Physics (Statistical Mechanics) also have intrinsic “universal constants”, or, to be more precise, they allow the introduction of systems of units “more convenient” than those you have heard of (metric system, SI, MKS, cgs, …). When I spoke about Barrow units (see previous comment above) in this blog, we realized that dimensionality (both mathematical and “physical”) and fundamental theories are bound to the choice of some “simpler” units. Those “simpler” units are what we usually call “natural units”. I am not a big fan of such terminology. It is a little bit confusing. Maybe it would be more interesting and appropriate to call them “adapted X units” or “scaled X units”, where X denotes “relativistic, quantum, …”. Anyway, the name “natural” is popular, and it is likely impossible to change the habit. In fact, we have to distinguish several “kinds” of natural units. First of all, let me list the “fundamental and universal” constants in the different theories accepted at the current time:

1. Boltzmann constant: k_B. Essential in Statistical Mechanics, both classical and quantum. It measures “entropy”/“information”. The fundamental equation is:

\boxed{S=k_B\ln \Omega}

It provides a link between the microphysics and the macrophysics (that link is the content encoded in the equation above).
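As a toy illustration of that microphysics-macrophysics link (a sketch of my own, not part of the original derivation): a register of N independent two-state systems has \Omega=2^N microstates, so S=Nk_B\ln 2, which ties bits of information directly to thermodynamic entropy.

```python
import numpy as np
from scipy.constants import k as k_B   # Boltzmann constant, in J/K

N = 1_000_000                     # one million two-state systems ("bits")
# Omega = 2**N microstates is astronomically large, so work with
# ln(Omega) = N * ln(2) directly instead of computing Omega itself:
S = k_B * N * np.log(2)
print(f"S = {S:.3e} J/K")         # ~ 9.57e-18 J/K
```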
The Boltzmann constant can be understood somehow as a measure of the “energetic content” of an individual particle or state at a given temperature. Common values for this constant are:

k_B=1.3806488(13)\times 10^{-23}J/K = 8.6173324(78)\times 10^{-5}eV/K

k_B=1.3806488(13)\times 10^{-16}erg/K

Statistical Physics states that there is a minimum unit of entropy, or a minimal value of energy, at any given temperature. The physical dimensions of this constant are thus those of entropy; since E=TS, \left[ k_B\right] =E/t=J/K, where t denotes here the dimension of temperature.

2. Speed of light. c. From classical electromagnetism:

c=\dfrac{1}{\sqrt{\mu_0\varepsilon_0}}

The speed of light, according to the postulates of special relativity, is a universal constant. It is frame INDEPENDENT. This fact is at the root of many of the surprising results of special relativity, and it took time to be understood. Moreover, it also connects space and time in a powerful unified formalism, so space and time merge into spacetime, as we do know and have studied long ago in this blog. The spacetime interval in a D=3+1 dimensional space, for two arbitrary events, reads:

\Delta s^2=\Delta x^2+\Delta y^2+\Delta z^2-c^2\Delta t^2

In fact, you can observe that “c” is the conversion factor between time-like and space-like coordinates. How big is the speed of light? Well, it is a relatively large number from our common and ordinary perception. It is exactly

c=299792458\; m/s

although you often take it as c\approx 3\cdot 10^{8}m/s=3\cdot 10^{10}cm/s. It is the speed of electromagnetic waves in vacuum, no matter where you are in this Universe/Polyverse. At least, experiments are consistent with such a statement. Moreover, c is also the conversion factor between energy and momentum, since E=pc for massless particles, and c^2 is the conversion factor between rest mass and pure energy, because, as everybody knows, E=mc^2! According to the special theory of relativity, normal matter can never exceed the speed of light. Therefore, the speed of light is the maximum velocity in Nature, at least if special relativity holds. The physical dimensions of c are \left[c\right]=LT^{-1}, where L denotes the length dimension and T denotes the time dimension (please don’t confuse it with temperature despite the same capital letter being used for both).

3. Planck’s constant. h, or generally its rationalized version \hbar=h/2\pi. Planck’s constant (or its rationalized version) is the fundamental universal constant in Quantum Physics (Quantum Mechanics, Quantum Field Theory). It gives

\boxed{E=h\nu=\hbar \omega}

Indeed, quanta are the minimal units of energy. That is, you cannot subdivide a quantum of light further, since it is indivisible by definition!
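To get a feel for the size of one quantum (a quick numerical aside of mine, using scipy’s CODATA values): a single photon of green light, with wavelength around 500 nm, carries a couple of electron-volts, while the thermal energy scale at room temperature is about a hundred times smaller.

```python
from scipy.constants import h, c, e, k

lam = 500e-9                      # green light, 500 nm
E_photon = h * c / lam            # energy of ONE quantum, in joules
print(f"E = {E_photon:.3e} J = {E_photon / e:.2f} eV")   # ~2.48 eV

# Compare with the typical thermal energy scale at room temperature:
T = 300.0
print(f"k_B T at 300 K = {k * T / e * 1000:.1f} meV")    # ~25.9 meV
```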
Furthermore, the de Broglie relationship relates momentum and wavelength for any particle, and it emerges from the combination of special relativity and the quantum hypothesis:

\lambda=\dfrac{h}{p}\leftrightarrow \bar{\lambda}=\dfrac{\hbar}{p}

In the case of massive particles, it yields

\lambda=\dfrac{h}{Mv}\leftrightarrow \bar{\lambda}=\dfrac{\hbar}{Mv}

In the case of massless particles (photons, gluons, gravitons, …)

\lambda=\dfrac{hc}{E} or \bar{\lambda}=\dfrac{\hbar c}{E}

Planck’s constant also appears to be essential to the uncertainty principle of Heisenberg:

\boxed{\Delta x \Delta p\geq \hbar/2}

\boxed{\Delta E \Delta t\geq \hbar/2}

\boxed{\Delta A\Delta B\geq \hbar/2} (for canonically conjugate observables A and B)

Some particularly important values of this constant are:

h=6.62606957(29)\times 10^{-34} J\cdot s

h=4.135667516(91)\times 10^{-15}eV\cdot s

h=6.62606957(29)\times 10^{-27} erg\cdot s

\hbar =1.054571726(47)\times 10^{-34} J\cdot s

\hbar =6.58211928(15)\times 10^{-16} eV\cdot s

\hbar= 1.054571726(47)\times 10^{-27}erg\cdot s

It is also useful to know that

hc=1.98644568\times 10^{-25}J\cdot m

hc=1.23984193 eV\cdot \mu m

\hbar c=0.1591549hc or \hbar c=197.327 eV\cdot nm

The Planck constant has dimensions of \mbox{Energy}\times \mbox{Time}=\mbox{position}\times \mbox{momentum}=ML^2T^{-1}. The physical dimensions of this constant also coincide with those of angular momentum (spin), i.e., with L=mvr.

4. Gravitational constant. G_N. Apparently, it is not like the others, but it can also define a particular scale when combined with Special Relativity. Without entering into further details (since I have not discussed General Relativity yet in this blog), we can calculate the radius at which the escape velocity of a body reaches the speed of light: setting v=c in

\dfrac{1}{2}mv^2-G_N\dfrac{Mm}{R}=0

implies a new length scale where relativistic gravitational effects appear, the so-called Schwarzschild radius R_S:

\boxed{R_S=\dfrac{2G_NM}{c^2}=\dfrac{2G_NM_{\odot}}{c^2}\left(\dfrac{M}{M_{\odot}}\right)\approx 2.95\left(\dfrac{M}{M_{\odot}}\right)km}

5. Electric fundamental charge. e. The electric charge of the positron (the positively charged “electron”) is generally chosen as the fundamental charge. Its value is:

e=1.602176565(35)\times 10^{-19}C

where C denotes Coulomb. Of course, if you know about quarks, with a fraction of this charge, you could ask why we prefer this one. Really, it is only a question of the history of Science, since electrons (and positrons) were discovered first. Quarks, with one third or two thirds of this amount of elementary charge, were discovered later, but you could equally define the fundamental unit of charge as a multiple or a fraction of this charge. Moreover, as far as we know, electrons are “elementary”/“fundamental” entities, so we can use this charge as the unit and define quark charges in terms of it too. Electric charge is not a fundamental unit in the SI system of units. Charge flow, or electric current, is.

An amazing property of the above 5 constants is that they are “universal”. And, for instance, energy is related to other magnitudes, in theories where the above constants are present, in a really wonderful and unified manner:

\boxed{E=N\dfrac{k_BT}{2}=Mc^2=TS=Pc=N\dfrac{h\nu}{2}=N\dfrac{\hbar \omega}{2}=\dfrac{R_Sc^4}{2G_N}=\hbar c k=\dfrac{hc}{\lambda}}

Caution: k is not the Boltzmann constant here but the wave number. There is a sixth “fundamental” constant related to electromagnetism, but it is also related to the speed of light, the electric charge and the Planck constant in a very subtle way. Let me introduce it too…
6. Coulomb constant. k_C. This is a second constant related to classical electromagnetism, like the speed of light in vacuum. Coulomb’s constant, the electric force constant, or the electrostatic constant (denoted k_C) is a proportionality factor that takes part in equations relating the electric force between point charges, and indirectly it also appears (depending on your system of units) in expressions for the electric fields of charge distributions. Coulomb’s law reads

\boxed{F=k_C\dfrac{q_1q_2}{r^2}}

Its experimental value is

k_C=\dfrac{1}{4\pi \varepsilon_0}=\dfrac{c^2\mu_0}{4\pi}=c^2\cdot 10^{-7}H\cdot m^{-1}= 8.9875517873681764\cdot 10^9 Nm^2/C^2

Generally, the Coulomb constant is dropped, and it is usually preferred to express everything using the electric permittivity of vacuum \varepsilon_0 and/or numerical factors depending on the number \pi, if you choose the Gaussian system of units (read this wikipedia article http://en.wikipedia.org/wiki/Gaussian_system_of_units ), the CGS system, or some hybrid units based on them.

H.E.P. units

High Energy Physicists usually employ units in which velocity is measured in fractions of the speed of light in vacuum, and action/angular momentum in multiples of the Planck constant. These conditions are equivalent to setting

\boxed{c=1_c=1}

\boxed{\hbar=1_\hbar=1}

Complementarily, or not, depending on your tastes and preferences, you can also set the Boltzmann constant to the unit as well, and thus the complete HEP system is defined if you set

\boxed{c=\hbar=k_B=1}

This “natural” system of units still lacks a scale of energy. Then, the electron-volt eV is generally added as an auxiliary quantity defining the reference energy scale, despite the fact that it is not a “natural unit” in the proper sense, because it is defined by a natural property, the electric charge, and the anthropogenic unit of electric potential, the volt. The SI-prefixed multiples of eV are used as well: keV, MeV, GeV, etc. Here, the eV is used as the reference energy quantity, and with the above choice of “elementary/natural units” (or any other auxiliary unit of energy), any quantity can be expressed. For example, a distance of 1 m can be expressed in terms of eV, in natural units, as

1m=\dfrac{1m}{\hbar c}\approx 5.07\cdot 10^{6}\; eV^{-1}

This system of units has remarkable conversion factors:

A) 1 eV^{-1} of length is equal to 1.97\cdot 10^{-7}m =(1\text{eV}^{-1})\hbar c

B) 1 eV of mass is equal to 1.78\cdot 10^{-36}kg=1\times \dfrac{eV}{c^2}

C) 1 eV^{-1} of time is equal to 6.58\cdot 10^{-16}s=(1\text{eV}^{-1})\hbar

D) 1 eV of temperature is equal to 1.16\cdot 10^4K=1eV/k_B

E) 1 unit of electric charge in the Lorentz-Heaviside system of units is equal to 5.29\cdot 10^{-19}C=e/\sqrt{4\pi\alpha}

F) 1 unit of electric charge in the Gaussian system of units is equal to 1.88\cdot 10^{-18}C=e/\sqrt{\alpha}

This system of units, therefore, leaves free only the energy scale (generally the electron-volt is chosen) and the electric measure of fundamental charge. Every other unit can be related to energy/charge. It is truly remarkable that by doing this (turning the above three constants invisible) you can “unify” different magnitudes, since these conventions make them equivalent. For instance, with natural units:

1) Length=Time=1/Energy=1/Mass. It is due to the x=ct, E=Mc^2 and E=hc/\lambda equations. Setting c and h (or \hbar) to one provides x=t, E=M and E=1/\lambda. Note that natural units turn invisible the units we set to the unit! That is the key of the procedure. It simplifies equations and expressions.
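You don’t have to take these conversion factors on faith; they are easy to check numerically. A quick sketch of mine with scipy’s CODATA constants (rounding explains any last-digit differences):

```python
from math import pi, sqrt
from scipy.constants import hbar, c, e, k, epsilon_0

alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)   # fine-structure constant ~1/137

eV = e                               # 1 eV expressed in joules
print(hbar * c / eV)                 # A) length of 1 eV^-1:   ~1.97e-7 m
print(eV / c**2)                     # B) mass of 1 eV:        ~1.78e-36 kg
print(hbar / eV)                     # C) time of 1 eV^-1:     ~6.58e-16 s
print(eV / k)                        # D) temperature of 1 eV: ~1.16e4 K
print(e / sqrt(4 * pi * alpha))      # E) Lorentz-Heaviside charge: ~5.29e-19 C
print(e / sqrt(alpha))               # F) Gaussian charge:          ~1.88e-18 C
```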
Of course, you must be careful when you reintroduce constants!

2) Energy=Mass=Momentum=Temperature. It is due to E=k_BT, E=Pc and E=Mc^2 again.

One extra bonus for theoretical physicists is that natural units allow one to build and write proper lagrangians and hamiltonians (certain mathematical operators containing the dynamics of the system encoded in them), or equivalently the action functional, with only the energy or “mass” dimension as a “free parameter”. Let me show how it works. Natural units in HEP identify length and time dimensions. Thus \left[L\right]=\left[T\right]. Planck’s constant allows us to identify those 2 dimensions with 1/Energy (reciprocal energy) dimensions. Therefore, in HEP units, we have

\left[L\right]=\left[T\right]=\left[E\right]^{-1}

The speed of light identifies energy and mass, and thus we can often hear about the “mass dimension” of a lagrangian in the following sense. HEP units can be thought of as defining “everything” in terms of energy, on purely dimensional grounds. That is, every physical dimension is (in HEP units) defined by a power of energy:

\left[X\right]=E^{n}

Thus, we can refer to any magnitude simply by stating the power n of its physical dimension (or you can think logarithmically to understand it more easily, if you wish). With this convention, and recalling that energy dimension is mass dimension, we have that

\left[L\right]=\left[T\right]=-1 and \left[E\right]=\left[M\right]=1

Using these arguments, the action functional is a pure dimensionless quantity, and thus, in D=4 spacetime dimensions, lagrangian densities must have dimension 4 (or dimension D in a general spacetime).

\displaystyle{S=\int d^4x \mathcal{L}\rightarrow \left[\mathcal{L}\right]=4}

\displaystyle{S=\int d^Dx \mathcal{L}\rightarrow \left[\mathcal{L}\right]=D}

In D=4 spacetime dimensions, it can easily be shown that

\left[\Phi\right]=\left[A^\mu\right]=1 and \left[\Psi_D\right]=\left[\Psi_M\right]=\left[\chi\right]=\left[\eta\right]=\dfrac{3}{2}

where \Phi is a scalar field, A^\mu is a vector field (like the electromagnetic or non-abelian vector gauge fields), \Psi_D, \Psi_M are a Dirac spinor and a Majorana spinor, and \chi, \eta are Weyl spinors (of different chiralities). Supersymmetry (or SUSY) allows for anticommuting c-numbers (or Grassmann numbers), and it forces one to introduce auxiliary parameters with mass dimension -1/2. They are the so-called SUSY transformation parameters \zeta_{SUSY}=\epsilon. There are some speculative spinors called ELKO fields that could be non-standard spinor fields with mass dimension one! But that is an advanced topic I am not going to discuss here today. In general D spacetime dimensions, a scalar (or vector) field has mass dimension (D-2)/2, and a spinor/fermionic field in D dimensions generally has mass dimension (D-1)/2 (excepting the auxiliary SUSY Grassmannian parameters and the exotic idea of ELKO fields). This dimensional analysis is very useful when theoretical physicists build interacting lagrangians, since we can guess the structure of the interactions by looking, on purely dimensional grounds, at every possible operator entering the action/lagrangian density! In summary, therefore, for any D:

\boxed{\left[\Phi\right]=\left[A_\mu\right]=\dfrac{D-2}{2}\equiv E^{\frac{D-2}{2}}=M^{\frac{D-2}{2}}}

\boxed{\left[\Psi\right]=\dfrac{D-1}{2}\equiv E^{\frac{D-1}{2}}=M^{\frac{D-1}{2}}}
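As a quick worked example of this bookkeeping (my own illustration, not part of the original post): consider a quartic self-interaction \displaystyle{S_{int}=\int d^Dx\,\lambda\Phi^4}. Demanding a dimensionless action gives \left[\lambda\right]+4\cdot\dfrac{D-2}{2}=D, so \left[\lambda\right]=4-D. The coupling is dimensionless precisely in D=4, and it acquires negative mass dimension above four dimensions, which is the dimensional-analysis hint that the operator becomes non-renormalizable (irrelevant) there.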
Remark (for QFT experts only): Don’t confuse mass dimension with the final transverse polarization degrees, or “degrees of freedom”, of a particular field, i.e., “components” minus “gauge constraints”. E.g.: a gauge vector field has D-2 degrees of freedom in D dimensions. They are different concepts (although both are closely related to the spacetime dimension where the field “lives”).

In summary:

i) HEP units are based on QM (Quantum Mechanics), SR (Special Relativity) and Statistical Mechanics (entropy and thermodynamics).

ii) HEP units need an input energy scale, and this generally drives us to use the eV (electron-volt) as the auxiliary energy scale.

iii) HEP units are useful for the dimensional analysis of lagrangians (and hamiltonians) in terms of “mass dimension”.

Stoney Units

In Physics, the Stoney units form an alternative set of natural units named after the Irish physicist George Johnstone Stoney, who first introduced them as we know them today in 1881. However, he had presented the idea before that date, in 1874, in a lecture entitled “On the Physical Units of Nature” delivered to the British Association. They are the first historical example of natural units and, in a sense, of a “unification scale”. Stoney units are rarely used in modern physics for calculations, but they are of historical interest, and some people like Wilczek have written about them (see, e.g., http://arxiv.org/abs/0708.4361). These units of measurement were designed so that certain fundamental physical constants are taken as the reference basis without the Planck scale being explicit, quite a remarkable fact! The set of constants that Stoney used as base units is the following:

A) Electric charge, e=1_e.

B) Speed of light in vacuum, c=1_c.

C) Gravitational constant, G_N=1_{G_N}.

D) The reciprocal of the Coulomb constant, 1/k_C=4\pi \varepsilon_0=1_{k_C^{-1}}=1_{4\pi \varepsilon_0}.

Stoney units are built when you set these four constants to the unit; i.e., equivalently, the Stoney System of Units (S) is determined by the assignments:

\boxed{e=c=G_N=4\pi\varepsilon_0=1}

Interestingly, in this system of units, the Planck constant is not equal to the unit and it is not “fundamental” (Wilczek remarked this fact here), but:

\hbar=\dfrac{1}{\alpha}\approx 137.035999679

Today, Planck units are more popular than Stoney units in modern physics, and there are even many physicists who don’t know about the Stoney units! In fact, Stoney was one of the first scientists to understand that electric charge was quantized; from this quantization he deduced the units that are now named after him. The Stoney length and the Stoney energy are collectively called the Stoney scale, and they are not far from the Planck length and the Planck energy, the Planck scale. The Stoney scale and the Planck scale are the length and energy scales at which quantum processes and gravity occur together. At these scales, a unified theory of physics is thus likely required. The only notable attempt to construct such a theory from the Stoney scale was that of H. Weyl, who associated a gravitational unit of charge with the Stoney length and who appears to have inspired Dirac’s fascination with the large number hypothesis. Since then, the Stoney scale has been largely neglected in the development of modern physics, although it is occasionally discussed to this day. Wilczek likes to point out that, in Stoney units, QM would be an emergent phenomenon/theory, since the Planck constant would not be present directly but only as a combination of different constants. On the other hand, the Planck scale is valid for all known interactions, and does not give prominence to the electromagnetic interaction, as the Stoney scale does.
That is, in Stoney units, both gravitation and electromagnetism are on an equal footing, unlike in Planck units, where only the speed of light is used and there is no clean connection to electromagnetism in the way the Stoney units provide. Be aware: sometimes, rarely though, Planck units are referred to as Planck-Stoney units. What are the most interesting Stoney system values? Here you are the most remarkable results:

1) Stoney Length, L_S. \boxed{L_S=\sqrt{\dfrac{G_Ne^2}{(4\pi\varepsilon)c^4}}\approx 1.38\cdot 10^{-36}m}

2) Stoney Mass, M_S. \boxed{M_S=\sqrt{\dfrac{e^2}{G_N(4\pi\varepsilon_0)}}\approx 1.86\cdot 10^{-9}kg}

3) Stoney Energy, E_S. \boxed{E_S=M_Sc^2=\sqrt{\dfrac{e^2c^4}{G_N(4\pi\varepsilon_0)}}\approx 1.67\cdot 10^8 J=1.04\cdot 10^{18}GeV}

4) Stoney Time, t_S. \boxed{t_S=\sqrt{\dfrac{G_Ne^2}{c^6(4\pi\varepsilon_0)}}\approx 4.61\cdot 10^{-45}s}

5) Stoney Charge, Q_S. \boxed{Q_S=e\approx 1.60\cdot 10^{-19}C}

6) Stoney Temperature, T_S. \boxed{T_S=E_S/k_B=\sqrt{\dfrac{e^2c^4}{G_Nk_B^2(4\pi\varepsilon_0)}}\approx 1.21\cdot 10^{31}K}

Planck Units

The reference constants for this natural system of units (generally denoted by P) are the following 4 constants:

1) Gravitational constant, G_N.

2) Speed of light, c.

3) Planck constant, or rationalized Planck constant, \hbar.

4) Boltzmann constant, k_B.

The Planck units are obtained when you set these 4 constants to the unit, i.e.,

\boxed{G_N=c=\hbar=k_B=1}

It is often said that Planck units are a system of natural units that is not defined in terms of properties of any prototype, physical object, or even features of any fundamental particle. They only refer to the basic structure of the laws of physics: c and G are part of the structure of classical spacetime in the relativistic theory of gravitation, also known as general relativity, and ℏ captures the relationship between energy and frequency which is at the foundation of elementary quantum mechanics. This is the reason why Planck units are particularly useful and common in theories of quantum gravity, including string theory or loop quantum gravity. This system defines some limiting magnitudes, as follows:

1) Planck Length, L_P. \boxed{L_P=\sqrt{\dfrac{G_N\hbar}{c^3}}\approx 1.616\cdot 10^{-35}m}

2) Planck Time, t_P. \boxed{t_P=L_P/c=\sqrt{\dfrac{G_N\hbar}{c^5}}\approx 5.391\cdot 10^{-44}s}

3) Planck Mass, M_P. \boxed{M_P=\sqrt{\dfrac{\hbar c}{G_N}}\approx 2.176\cdot 10^{-8}kg}

4) Planck Energy, E_P. \boxed{E_P=M_Pc^2=\sqrt{\dfrac{\hbar c^5}{G_N}}\approx 1.96\cdot 10^9J=1.22\cdot 10^{19}GeV}

5) Planck charge, Q_P.

In Lorentz-Heaviside electromagnetic units: \boxed{Q_P=\sqrt{\hbar c \varepsilon_0}=\dfrac{e}{\sqrt{4\pi\alpha}}\approx 5.291\cdot 10^{-19}C}

In Gaussian electromagnetic units: \boxed{Q_P=\sqrt{\hbar c (4\pi\varepsilon_0)}=\dfrac{e}{\sqrt{\alpha}}\approx 1.876\cdot 10^{-18}C}

6) Planck temperature, T_P. \boxed{T_P=E_P/k_B=\sqrt{\dfrac{\hbar c^5}{G_Nk_B^2}}\approx 1.417\cdot 10^{32}K}

From these “fundamental” magnitudes we can build many derived quantities in the Planck system:

1) Planck area. A_P=L_P^2=\dfrac{\hbar G_N}{c^3}\approx 2.612\cdot 10^{-70}m^2

2) Planck volume. V_P=L_P^3=\left(\dfrac{\hbar G_N}{c^3}\right)^{3/2}\approx 4.22\cdot 10^{-105}m^3

3) Planck momentum. P_P=M_Pc=\sqrt{\dfrac{\hbar c^3}{G_N}}\approx 6.52485 kgm/s

A relatively “small” momentum!

4) Planck force. F_P=E_P/L_P=\dfrac{c^4}{G_N }\approx 1.21\cdot 10^{44}N

It is independent of the Planck constant! Moreover, the Planck acceleration is a_P=F_P/M_P=\sqrt{\dfrac{c^7}{G_N\hbar}}\approx 5.561\cdot 10^{51}m/s^2
5) Planck power. \mathcal{P}_P=\dfrac{c^5}{G_N}\approx 3.628\cdot 10^{52}W

6) Planck density. \rho_P=\dfrac{c^5}{\hbar G_N^2}\approx 5.155\cdot 10^{96}kg/m^3

The Planck energy density would be equal to \rho_P c^2=\dfrac{c^7}{\hbar G_N^2}\approx 4.6331\cdot 10^{113}J/m^3

7) Planck angular frequency. \omega_P=\sqrt{\dfrac{c^5}{\hbar G_N}}\approx 1.85487\cdot 10^{43}Hz

8) Planck pressure. p_P=\dfrac{F_P}{A_P}=\dfrac{c^7}{G_N^2\hbar}=\rho_P c^2\approx 4.6331\cdot 10^{113}Pa

Note that the Planck pressure IS the Planck energy density!

9) Planck current. I_P=Q_P/t_P=\sqrt{\dfrac{4\pi\varepsilon_0 c^6}{G_N}}\approx 3.4789\cdot 10^{25}A

10) Planck voltage. v_P=E_P/Q_P=\sqrt{\dfrac{c^4}{4\pi\varepsilon_0 G_N}}\approx 1.04295\cdot 10^{27}V

11) Planck impedance. Z_P=v_P/I_P=\dfrac{\hbar}{Q_P^2}=\dfrac{1}{4\pi \varepsilon_0 c}\approx 29.979\Omega

A relatively small impedance!

12) Planck capacitance. C_P=Q_P/v_P=4\pi\varepsilon_0\sqrt{\dfrac{\hbar G_N}{ c^3}} \approx 1.798\cdot 10^{-45}F

Interestingly, it depends on the gravitational constant!

Some Planck units are suitable for measuring quantities that are familiar from daily experience. In particular:

1 Planck mass is about 22 micrograms.

1 Planck momentum is about 6.5 kg m/s.

1 Planck energy is about 500 kWh.

1 Planck charge is about 11 elementary (electronic) charges.

1 Planck impedance is almost 30 ohms.

i) A speed of 1 Planck length per Planck time is the speed of light, the maximum possible speed in special relativity.

ii) To understand the Planck era and “before” (if that even makes sense), supposing QM still holds there, we need a quantum theory of gravity. There is no such theory right now, though. Therefore, we have to wait to see whether these ideas are right or not.

iii) It is believed that at the Planck temperature, the whole symmetry of the Universe was “perfect”, in the sense that the four fundamental forces were “unified” somehow. We have only some vague notions about how that theory of everything (TOE) would be.

The physical dimensions of the known Universe in terms of Planck units are “dramatic”:

i) The age of the Universe is about t_U=8.0\cdot 10^{60} t_P.

ii) The diameter of the observable Universe is about d_U=5.4\cdot 10^{61}L_P.

iii) The current temperature of the Universe is about 1.9 \cdot 10^{-32}T_P.

iv) The observed cosmological constant is about 5.6\cdot 10^{-122}t_P^{-2}.

v) The mass of the observable Universe is about 10^{60}M_P.

vi) The Hubble constant is 71km/s/Mpc\approx 1.23\cdot 10^{-61}t_P^{-1}.
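Before moving on: all of these Planck (and Stoney) values can be reproduced in a few lines. A minimal sketch of mine with scipy’s CODATA constants (rounding explains any last-digit differences; the 13.8 Gyr age of the Universe is an input here, not an output):

```python
from math import pi, sqrt
from scipy.constants import hbar, c, G, k, e, epsilon_0

L_P = sqrt(hbar * G / c**3)          # Planck length      ~1.616e-35 m
t_P = sqrt(hbar * G / c**5)          # Planck time        ~5.391e-44 s
M_P = sqrt(hbar * c / G)             # Planck mass        ~2.176e-8 kg
E_P = M_P * c**2                     # Planck energy      ~1.96e9 J
T_P = E_P / k                        # Planck temperature ~1.417e32 K

# The Stoney scale differs from the Planck scale by sqrt(alpha) ~ 1/11.7:
alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
L_S = L_P * sqrt(alpha)              # Stoney length ~1.38e-36 m
M_S = M_P * sqrt(alpha)              # Stoney mass   ~1.86e-9 kg

age_universe = 4.35e17               # seconds, roughly 13.8 Gyr
print(f"Age of the Universe ~ {age_universe / t_P:.1e} Planck times")  # ~8e60
```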
Schrödinger Units

The Schrödinger units do not explicitly contain c, the speed of light in vacuum. However, the permittivity of free space [i.e., the electric constant, or vacuum permittivity] appears among their defining constants, and the speed of light plays a part in that quantity: the vacuum permittivity is the reciprocal of the speed of light squared times the magnetic constant. So, even though the speed of light is not apparent in the Schrödinger units, it exists buried within their defining terms and therefore influences the numerical values. The essence of the Schrödinger units is the following set of constants:

A) Gravitational constant, G_N.

B) Planck constant, \hbar.

C) Boltzmann constant, k_B.

D) Coulomb constant, or equivalently the electric permittivity of free space/vacuum, k_C=1/4\pi\varepsilon_0.

E) The electric charge of the positron, e.

In this system \psi we have

\boxed{G_N=\hbar =k_B =k_C =e=1}

1) Schrödinger length, L_{Sch}. L_\psi=\sqrt{\dfrac{\hbar^4 G_N(4\pi\varepsilon_0)^3}{e^6}}\approx 2.593\cdot 10^{-32}m

2) Schrödinger time, t_{Sch}. t_\psi=\sqrt{\dfrac{\hbar^6 G_N(4\pi\varepsilon_0)^5}{e^{10}}}\approx 1.185\cdot 10^{-38}s

3) Schrödinger mass, M_{Sch}. M_\psi=\sqrt{\dfrac{e^2}{G_N(4\pi\varepsilon_0)}}\approx 1.859\cdot 10^{-9}kg

4) Schrödinger energy, E_{Sch}. E_\psi=\sqrt{\dfrac{e^{10}}{\hbar^4(4\pi\varepsilon_0)^5G_N}}\approx 8890 J=5.55\cdot 10^{13}GeV

5) Schrödinger charge, Q_{Sch}. Q_\psi =e=1.602\cdot 10^{-19}C

6) Schrödinger temperature, T_{Sch}. T_\psi=E_\psi/k_B=\sqrt{\dfrac{e^{10}}{\hbar^4(4\pi\varepsilon_0)^5G_Nk_B^2}}\approx 6.445\cdot 10^{26}K

Atomic Units

There are two alternative, closely related systems of atomic units:

1) Hartree atomic units: \boxed{e=m_e=\hbar=k_B=1} and \boxed{c=\alpha^{-1}}

2) Rydberg atomic units: \boxed{\dfrac{e}{\sqrt{2}}=2m_e=\hbar=k_B=1} and \boxed{c=2\alpha^{-1}}

There, m_e is the electron mass and \alpha is the electromagnetic fine-structure constant. These units are designed to simplify atomic and molecular physics and chemistry, especially the quantities related to the hydrogen atom, and they are widely used in these fields. The Hartree units were first proposed by Douglas Hartree, and they are more common than the Rydberg units. The units are adapted to characterize the behavior of an electron in the ground state of a hydrogen atom. For example, using the Hartree convention, in the Bohr model of the hydrogen atom, an electron in the ground state has orbital velocity = 1, orbital radius = 1, angular momentum = 1, ionization energy equal to 1/2, and so on. Some quantities in the Hartree system of units are:

1) Atomic length (also called the Bohr radius): L_A=a_0=\dfrac{\hbar^2 (4\pi\varepsilon_0)}{m_ee^2}\approx 5.292\cdot 10^{-11}m=0.5292\AA

2) Atomic time: t_A=\dfrac{\hbar^3(4\pi\varepsilon_0)^2}{m_ee^4}\approx 2.419\cdot 10^{-17}s

3) Atomic mass: M_A=m_e\approx 9.109\cdot 10^{-31}kg

4) Atomic energy: E_A=\alpha^2m_ec^2=\dfrac{m_ee^4}{\hbar^2(4\pi\varepsilon_0)^2} \approx 4.36\cdot 10^{ -18}J=27.2eV=2\times(13.6)eV=2Ry

5) Atomic electric charge: Q_A=q_e=e\approx 1.602\cdot 10^{-19}C

6) Atomic temperature: T_A=E_A/k_B=\dfrac{m_ee^4}{\hbar^2(4\pi\varepsilon_0)^2k_B}\approx 3.158\cdot 10^5K

The fundamental unit of energy is called the Hartree energy in the Hartree system and the Rydberg energy in the Rydberg system. They differ by a factor of 2. The speed of light is relatively large in atomic units (137 in Hartree, 274 in Rydberg), which reflects the fact that an electron in hydrogen moves much more slowly than the speed of light. The gravitational constant is extremely small in atomic units (about 10^{-45}), which reflects the fact that the gravitational force between two electrons is far weaker than the Coulomb force. The unit of length, L_A, is the well-known Bohr radius, a_0. The values of c and e shown above imply that e=\sqrt{\alpha \hbar c}, as in Gaussian units, not Lorentz-Heaviside units. However, hybrids of the Gaussian and Lorentz-Heaviside units are sometimes used, leading to inconsistent conventions for magnetism-related units. Be aware of these issues!

QCD Units

In the framework of Quantum Chromodynamics, the quantum field theory (QFT) we know as QCD, we can define the QCD system of units based on:

1) QCD length, L_{QCD}. L_{QCD}=\dfrac{\hbar}{m_pc}\approx 2.103\cdot 10^{-16}m

where m_p is the proton mass (please don’t confuse it with the Planck mass M_P).

2) QCD time, t_{QCD}. t_{QCD}=\dfrac{\hbar}{m_pc^2}\approx 7.015\cdot 10^{-25}s

3) QCD mass, M_{QCD}. M_{QCD}=m_p\approx 1.673\cdot 10^{-27}kg
4) QCD energy, E_{QCD}. E_{QCD}=M_{QCD}c^2=m_pc^2\approx 1.504\cdot 10^{-10}J=938.6MeV=0.9386GeV

Thus, the QCD energy is about 1 GeV!

5) QCD temperature, T_{QCD}. T_{QCD}=E_{QCD}/k_B=\dfrac{m_pc^2}{k_B}\approx 1.089\cdot 10^{13}K

6) QCD charge, Q_{QCD}.

In Heaviside-Lorentz units: Q_{QCD}=\dfrac{1}{\sqrt{4\pi\alpha}}e\approx 5.292\cdot 10^{-19}C

In Gaussian units: Q_{QCD}=\dfrac{1}{\sqrt{\alpha}}e\approx 1.876\cdot 10^{-18}C

Geometrized Units

The geometrized unit system, used in general relativity, is not a completely defined system. In this system, the base physical units are chosen so that the speed of light and the gravitational constant are set equal to unity. That is, we set:

\boxed{c=G_N=1}

The remaining constants are set to the unit according to your needs and tastes, and the other units may be treated however desired. By normalizing appropriate other units, geometrized units become identical to Planck units.

Conversion Factors

The conversion table from Wikipedia is very useful; in its notation:

i) \alpha is the fine-structure constant, approximately 0.007297.

ii) \alpha_G=\dfrac{m_e^2}{M_P^2}\approx 1.752\cdot 10^{-45} is the gravitational fine-structure constant.

Some conversion factors for geometrized units are also available (entries missing from the flattened original have been restored from the same pattern):

Conversion from kg, s, C, K into m:
G_N/c^2 [m/kg], c [m/s], \sqrt{G_N/(4\pi\varepsilon_0)}/c^2 [m/C], G_Nk_B/c^4 [m/K]

Conversion from m, s, C, K into kg:
c^2/G_N [kg/m], c^3/G_N [kg/s], 1/\sqrt{G_N4\pi\varepsilon_0} [kg/C], k_B/c^2 [kg/K]

Conversion from m, kg, C, K into s:
1/c [s/m], G_N/c^3 [s/kg], \sqrt{G_N/(4\pi\varepsilon_0)}/c^3 [s/C], G_Nk_B/c^5 [s/K]

Conversion from m, kg, s, K into C:
c^2\sqrt{4\pi\varepsilon_0/G_N} [C/m], (G_N4\pi\varepsilon_0)^{1/2} [C/kg], c^3\sqrt{4\pi\varepsilon_0/G_N} [C/s], k_B\sqrt{G_N4\pi\varepsilon_0}/c^2 [C/K]

Conversion from m, kg, s, C into K:
c^4/(G_Nk_B) [K/m], c^2/k_B [K/kg], c^5/(G_Nk_B) [K/s], c^2/(k_B\sqrt{G_N4\pi\varepsilon_0}) [K/C]

Advantages and Disadvantages of Natural Units

Natural units have some advantages (“pros”):

1) Equations and mathematical expressions are simpler in natural units.

2) Natural units allow for the matching of apparently different physical magnitudes.

3) Some natural units are independent of “prototypes” or “external standards” beyond some clever and trivial conventions.

4) They can help to unify different physical concepts.

However, natural units also have some disadvantages (“cons”):

1) They generally provide less precise measurements or quantities.

2) They can be ill-defined/redundant and carry some ambiguity. This is also caused by the fact that some natural units differ by numerical factors of \pi and/or pure numbers, so they cannot help us understand the origin of some pure numbers (adimensional prefactors) in general.

Moreover, you must not forget that natural units are “human” in the sense that you can adapt them to your own needs; indeed, you can create your own particular system of natural units! However, having said this, you should understand the main key point: fundamental theories are what finally hint at which “numbers”/“magnitudes” determine a system of “natural units”.

Remark: the smart designer of a system of natural units must choose a few of these constants to normalize (set equal to 1). It is not possible to normalize just any set of constants. For example, the mass of a proton and the mass of an electron cannot both be normalized: if the mass of an electron is defined to be 1, then the mass of a proton has to be \approx 6\pi^5\approx 1836. In a less trivial example, the fine-structure constant, α≈1/137, cannot be set to 1, because it is a dimensionless number.
The fine-structure constant is related to other fundamental constants through a well-known equation:

\alpha=\dfrac{k_Ce^2}{\hbar c}

where k_C is the Coulomb constant, e is the positron electric charge (elementary charge), ℏ is the reduced Planck constant, and c is, again, the speed of light in vacuum. That is why it is not possible to simultaneously normalize all four of the constants c, ℏ, e, and k_C.

Fritzsch-Xing plot

Fritzsch and Xing have developed a very beautiful plot of the fundamental constants in Nature (those coming from gravitation and the Standard Model). I cannot avoid including it here, in the two versions I have seen. The first one is “serious”, with 29 “fundamental constants”:

However, I prefer the “fun version” of this plot. This second version is very cool, and it includes 28 “fundamental constants”:

The Okun Cube

Long ago, L.B. Okun provided a very interesting way to think about the Planck units and their meaning, at least from the current knowledge of physics! He imagined a cube in 3D with 3 different axes. Planck units are defined, as we have seen above, by the 3 constants c, \hbar, G_N plus the Boltzmann constant. Imagine we assign one axis to c-units, one axis to \hbar-units and one more to G_N-units. The result is a wonderful cube:

Or equivalently, it is sometimes drawn as an equivalent sketch (note that the Planck constant is NOT rationalized in the next cube, but that does not matter for this graphical representation):

Classical physics (CP) corresponds to the vanishing of the 3 constants, i.e., to the origin (0,0,0). Newtonian mechanics (NM), or more precisely Newtonian gravity plus classical mechanics, corresponds to the “point” (0,0,G_N). Special relativity (SR) corresponds to the point (0,1/c,0), i.e., to “points” where relativistic effects are important due to velocities close to the speed of light. Quantum mechanics (QM) corresponds to the point (h,0,0), i.e., to “points” where the fundamental unit of action/angular momentum is important, as in the photoelectric effect or blackbody radiation. Quantum field theory (QFT) corresponds to the point (h,1/c,0), i.e., to “points” where both SR and QM are important: situations where you can create/annihilate pairs, the “particle” number is not conserved (but the particle-antiparticle number IS), and subatomic particles manifest quantum and relativistic features simultaneously. Quantum gravity (QG) would correspond to the point (h,0,G_N), where gravity itself is quantum. We have no theory of quantum gravity yet, but some speculative trials are effective versions of (super)string theory/M-theory, loop quantum gravity (LQG) and some others. Finally, the theory of everything (TOE) would be the theory in the last free corner, the one arising at the vertex (h,1/c,G_N). Superstring theories/M-theory are the only serious candidates for a TOE so far. LQG does not generally introduce matter fields (although some recent trials are pushing in that direction), so it is not a TOE candidate right now.

Some final remarks and questions

1) Are fundamental “constants” really constant? Do they vary with energy or time?

2) How many fundamental constants are there? This question has provoked lots of discussion.
One of the most famous discussions was this one: the trialogue (or dialogue, if you are precise with words) above discussed the opinions of 3 eminent physicists about the number of fundamental constants: Michael Duff suggested zero, Gabriele Veneziano argued that there are only 2 fundamental constants, while L.B. Okun defended the view that there are 3 fundamental constants.

3) Should the cosmological constant be included as a new fundamental constant?

The cosmological constant behaves as a constant according to current cosmological measurements and fits to cosmological data, but is it truly constant? It seems to be… but we are not sure. Quintessence models (some of them related to inflationary Universes) suggest that it could vary very slowly on cosmological scales. However, the data strongly suggest that

P_\Lambda=-\rho c^2

It is simple, but the ultimate nature of such a “fluid” is not understood, because we don’t know what kind of “stuff” (either particles or fields) can make the cosmological constant so tiny and yet so abundant (about 72% of the Universe is “dark energy”/cosmological constant) as it seems to be. We do know it cannot be made of “known particles”. Dark energy behaves as a repulsive force, some kind of pressure/antigravitation on cosmological scales. We suspect it could be some kind of scalar field, but there are many other alternatives that “mimic” a cosmological constant. If we identify the cosmological constant with the vacuum energy, we obtain a mismatch of about 122 orders of magnitude between theory and observations. A really bad “prediction”, one of the worst in the history of physics!

Be natural and stay tuned!

LOG#033. Electromagnetism in SR.

The Maxwell equations and electromagnetic phenomena are among the highest achievements and discoveries of humankind. Thanks to them, we have radio waves, microwaves, electricity, the telephone, the telegraph, TV, electronics, computers, cell phones, and the internet. Electromagnetic waves are everywhere and at every time (as far as we know, with the permission of the dark matter and dark energy problems of Cosmology). Would you survive without electricity today?

The language used in the formulation of the Maxwell equations has changed a lot since Maxwell’s treatise on Electromagnetism, in which he used quaternions. You can see the evolution of the “portrait” of the Maxwell equations in the above picture. Today, since the mid-20th century, we can write the Maxwell equations as just two equations. However, it is less known that the Maxwell equations can be written as a single equation \nabla F=J using geometric algebra in Clifford spaces, with \nabla =\nabla \cdot +\nabla\wedge, or in the so-called Kähler-Dirac-Clifford formalism in an analogous way.

Before entering into the details of electromagnetic fields, let me give some easy notions of tensor calculus. If x^2=\mbox{invariant}, how does x^\mu transform under Lorentz transformations? Let me start with the tensor components in this way:

x^\mu e_\mu=x^{\mu'}e_{\mu'}=\Lambda^{\mu'}_{\;\; \nu}x^\mu e_{\mu'}=\Lambda^{\mu'}_{\;\; \mu}x^\mu e_{\mu'}

e_\mu=\Lambda^{\mu'}_{\;\; \mu} e_{\mu'}\rightarrow e_{\mu'}=\left(\Lambda^{-1}\right)_{\;\; \mu'}^{\mu}e_\mu=\left[\left(\Lambda^{-1}\right)^T\right]^{\;\; \mu}_{\nu}e_\mu

Note, we have used with caution:

1st. Einstein’s convention: summation over repeated subindices and superindices is understood, unless some exception is stated.

2nd. Free indices can be labelled to the taste of the user.

3rd. Careful matrix-type manipulations.
We define a contravariant vector (or tensor of type (1,0)) as an object transforming in the next way:

\boxed{a^{\mu'}=\Lambda^{\mu'}_{\;\; \nu}a^\nu}\leftrightarrow\boxed{a^{\mu'}=\left(\dfrac{\partial x^{\mu'}}{\partial x^\nu}\right)a^\nu}

where \left(\dfrac{\partial x^{\mu'}}{\partial x^\nu}\right) denotes the Jacobian matrix of the transformation. In a similar way, we can define a covariant vector (or tensor of type (0,1)) with the aid of the following equations:

\boxed{a_{\mu'}=\left[\left(\Lambda^{-1}\right)^{T}\right]_{\mu'}^{\:\;\; \nu}a_\nu}\leftrightarrow\boxed{a_{\mu'}=\left(\dfrac{\partial x^{\nu}}{\partial x^{\mu'}}\right)a_\nu}

Note: \left(\dfrac{\partial x^{\nu}}{\partial x^{\mu'}}\right)=\left(\dfrac{\partial x^{\mu'}}{\partial x^{\nu}}\right)^{-1}

Contravariant tensors of second order (tensors of type (2,0)) are defined by the next equations:

\boxed{b^{\mu'\nu'}=\Lambda^{\mu'}_{\;\; \lambda}\Lambda^{\nu'}_{\;\; \sigma}b^{\lambda\sigma}=\Lambda^{\mu'}_{\;\; \lambda}b^{\lambda\sigma}\Lambda^{T \;\; \nu'}_{\sigma}\leftrightarrow b^{\mu'\nu'}=\dfrac{\partial x^{\mu'}}{\partial x^\lambda}\dfrac{\partial x^{\nu'}}{\partial x^\sigma}b^{\lambda\sigma}}

Covariant tensors of second order (tensors of type (0,2)) are defined similarly:

\boxed{c_{\mu'\nu'}=\left(\left(\Lambda\right)^{-1}\right)^{T \;\;\lambda}_{\mu'}\left(\left(\Lambda\right)^{-1T}\right)^{\;\; \sigma}_{\nu'}c_{\lambda\sigma}=\left(\Lambda^{-1T}\right)^{\;\; \lambda}_{\mu'}c_{\lambda\sigma}\Lambda^{-1 \;\; \nu'}_{\sigma}\leftrightarrow c_{\mu'\nu'}=\dfrac{\partial x^{\lambda}}{\partial x^{\mu'}}\dfrac{\partial x^{\sigma}}{\partial x^{\nu'}}c_{\lambda\sigma}}

Mixed tensors of second order (tensors of type (1,1)) can also be made:

\boxed{d^{\mu'}_{\;\; \nu'}=\Lambda^{\mu'}_{\;\; \lambda}\left(\left(\Lambda\right)^{-1T}\right)^{\;\;\;\; \sigma}_{\nu'}d^{\lambda}_{\;\;\sigma}=\Lambda^{\mu'}_{\;\; \lambda}d^{\lambda}_{\;\;\sigma}\left(\left(\Lambda\right)^{-1}\right)^{\sigma}_{\;\;\; \nu'}\leftrightarrow d^{\mu'}_{\;\; \nu'}=\dfrac{\partial x^{\mu'}}{\partial x^{\lambda}}\dfrac{\partial x^{\sigma}}{\partial x^{\nu'}}d^{\lambda}_{\;\; \sigma}}

We can summarize these transformation rules in matrix notation, making the transcription from index notation easy:

1st. Contravariant vectors change-of-coordinates rule: X'=\Lambda X

2nd. Covariant vectors change-of-coordinates rule: X'=\Lambda^{-1T} X

3rd. (2,0)-tensors change-of-coordinates rule: B'=\Lambda B \Lambda^T

4th. (0,2)-tensors change-of-coordinates rule: C'=\Lambda^{-1T}C\Lambda^{-1}

5th. (1,1)-tensors change-of-coordinates rule: D'=\Lambda D \Lambda^{-1}

Indeed, glossing over the careful placement of subindices and superindices and the issue of inverses and transposes of the transformation matrices, a general tensor of type (r,s) transforms as follows:

\boxed{T^{\mu'_1\mu'_2\ldots \mu'_r}_{\nu'_1\nu'_2\ldots \nu'_s}=L^{\nu_s}_{\nu'_s}\cdots L^{\nu_1}_{\nu'_1}L^{\mu'_r}_{\mu_r}\cdots L^{\mu'_1}_{\mu_1}T^{\mu_1\mu_2\ldots\mu_r}_{\nu_1\nu_2\ldots \nu_s}}
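These matrix rules are easy to sanity-check numerically. A small sketch of mine (a boost along x with β=0.6, applied to a random mixed tensor) verifying that the (1,1) rule D'=ΛDΛ⁻¹ preserves the trace, as it must, since the trace of a (1,1) tensor is a Lorentz scalar:

```python
import numpy as np

beta = 0.6
gamma = 1 / np.sqrt(1 - beta**2)

# Lorentz boost along x, acting on (ct, x, y, z) components:
L = np.array([[gamma, -beta * gamma, 0, 0],
              [-beta * gamma, gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
L_inv = np.linalg.inv(L)

X = np.array([1.0, 2.0, 0.0, 0.0])                  # a contravariant vector
D = np.random.default_rng(0).normal(size=(4, 4))    # a random (1,1) tensor

X_prime = L @ X            # rule: X' = Lambda X
D_prime = L @ D @ L_inv    # rule: D' = Lambda D Lambda^{-1}

# The trace of a (1,1) tensor is a Lorentz scalar:
print(np.trace(D), np.trace(D_prime))               # equal up to rounding
```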
We return to electromagnetism! The easiest examples of electromagnetic wave motion are plane waves: x=x_0\exp (iKX)=x_0\exp (ix^\mu p_\mu) where \phi=XK=KX=X\cdot K=x^\mu p_\mu=\mathbf{k}\cdot\mathbf{r}-\omega t Indeed, the wave four-vector K can be “guessed” from the phase invariant (\phi=\phi' since the phase is a dot product): K=\square \phi where \square is the four-dimensional nabla vector defined by \square=\left(\dfrac{\partial}{c\partial t},\dfrac{\partial}{\partial x},\dfrac{\partial}{\partial y},\dfrac{\partial}{\partial z}\right) and so \mathbb{K}=\left(\dfrac{\omega}{c},\mathbf{k}\right) Now, let me discuss different notions of velocity when we are considering electromagnetic fields. Beyond the usual notions of particle velocity and observer relative motion, we have the following notions of velocity in relativistic electromagnetism: 1st. The light speed c. It is the ultimate limit in vacuum and SR to the propagation of electromagnetic signals. Therefore, it is sometimes called energy transfer velocity in vacuum or vacuum speed of light. 2nd. Phase velocity v_{ph}. It is defined as the velocity of the surfaces of constant phase of a plane wave. If \omega =\omega (k)=\sqrt{c^2\mathbf{k}^2-c^2K^2}, we have v_{ph}=\dfrac{\omega (\mathbf{k})}{k} where k is the modulus of \mathbf{k}. It measures how fast the phase changes with the wave vector. From the definition of the wave four-vector, we can distinguish three cases according to the sign of the invariant K^2: a) K^2>0. The separation is spacelike and we get v_{ph}<c. b) K^2=0. The separation is lightlike or isotropic. We obtain v_{ph}=c. c) K^2<0. The separation is timelike. We deduce that v_{ph}>c. This situation is not contradictory with special relativity, since phase oscillations can not transport information. 3rd. Group velocity v_g. It is defined as the velocity that a “wave packet” or “pulse” has in its propagation. Therefore, v_g=\dfrac{d\omega}{dk}=\dfrac{dE}{dp} where we used the Planck relationships for photons E=\hbar \omega and p=\hbar k, with \hbar=\dfrac{h}{2\pi} 4th. Particle velocity. It is defined in SR by the four-vector U=\gamma (c,\mathbf{v}) 5th. Observer relative velocity, V. It is the (constant) velocity at which two inertial observers move relative to each other. There is a nice relationship between the group velocity, the phase velocity and the energy transfer velocity, the light speed in vacuum. To see it, look at the invariant: \omega^2-c^2k^2=-c^2K^2=\mbox{constant} Differentiating this expression, we get v_g=d\omega/dk=kc^2/\omega=c^2/v_{ph}, so we have the very important equation \boxed{v_gv_{ph}=c^2} (checked numerically in the sketch below).
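Here is the promised numerical check, a minimal Python sketch (assumed numbers; c=1 and an arbitrary timelike invariant K²<0) that evaluates the dispersion relation above and confirms both v_{ph}>c and v_g v_{ph}=c²:

import numpy as np

# Assumed example: omega(k) = sqrt(c^2 k^2 - c^2 K2) with invariant K2 = K^2 = -2 (timelike case).
c = 1.0
K2 = -2.0
k = np.linspace(1.0, 5.0, 9)
omega = np.sqrt(c**2 * k**2 - c**2 * K2)

v_ph = omega / k                 # phase velocity
v_g = np.gradient(omega, k)      # numerical group velocity d(omega)/dk

print(v_ph)          # every entry exceeds c = 1, as expected for K^2 < 0
print(v_g * v_ph)    # every entry is (numerically) close to c^2 = 1

The product is only approximately c² here because np.gradient uses finite differences; the analytic derivative gives the relation exactly.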
Another important concept in electromagnetism is “light intensity”. Light intensity can be thought of as the “flux of light”, and you can imagine it both in the wave and particle (photon corpuscle) pictures in a similar fashion. Mathematically speaking: \mbox{Light intensity=Flux of light}=\dfrac{\mbox{POWER}}{\mbox{Area}}\rightarrow I=\dfrac{\mathcal{P}}{A}=\dfrac{E/V}{tA/V}=\dfrac{uV}{tA}=uc so I=uc where u is the energy density of the electromagnetic field and c is the light speed in vacuum. By Lorentz transformations, it can be shown that for electromagnetic waves, energy, wavelength, energy density and intensity change in the following way: u'= \dfrac{E'}{\lambda' NA}=\dfrac{1-\beta}{1+\beta}\dfrac{E}{N\lambda A} The relativistic momentum can be related to the wave four-vector using the Planck relation P^\mu=\hbar K^\mu. Under a Lorentz transformation, momenergy transforms as P'=\Lambda P. Assign to the wave number vector \mathbf{k} a direction in the S-frame: \mathbf{k}=\vert \mathbf{k}\vert \left( \cos \theta, \sin \theta, 0 \right)=\dfrac{\omega}{c}\left(\cos\theta,\sin\theta,0\right) In matrix notation, the whole change is written as: \begin{pmatrix}\dfrac{\omega'}{c}\\ k'_x\\ k'_y\\k'_z\end{pmatrix}=\begin{pmatrix}\gamma & -\beta\gamma & 0 & 0\\ -\beta\gamma & \gamma & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0& 1\end{pmatrix}\dfrac{\omega}{c}\begin{pmatrix}1\\ \cos\theta\\ \sin\theta\\ 0\end{pmatrix} K'\begin{cases}\omega'=\gamma \omega(1-\beta\cos\theta)\\ \;\\ k'_x=\gamma\dfrac{\omega}{c}\left(\cos\theta-\beta\right)\\ \;\\ k'_y=\dfrac{\omega}{c}\sin\theta\\\;\\ k'_z=0\end{cases} Using the first two equations, we get: \cos\theta'=\dfrac{\cos\theta-\beta}{1-\beta\cos\theta} Using the first and the third equation, we obtain: \sin\theta'=\dfrac{\sin\theta}{\gamma\left(1-\beta\cos\theta\right)} Dividing the last two equations, we deduce: \tan\theta'=\dfrac{\sin\theta}{\gamma\left(\cos\theta-\beta\right)} This formula is the so-called stellar aberration formula, and we will dedicate a post to it in the future. If we write the first equation with the aid of frequencies f_0 and f instead of angular frequencies, writing the frequency of the source as \omega'=2\pi f_0 and the frequency of the receiver as \omega=2\pi f, we get f_0=\gamma f\left(1-\beta\cos\theta\right) This last formula is called the relativistic Doppler shift. Now, we are going to introduce a very important object in electromagnetism: the electric charge and the electric current. We are going to make an analogy with the momenergy \mathbb{P}=m\gamma\left(c,\mathbf{v}\right). The electric current four-vector is something very similar: \mathbb{J}=\rho_0\gamma\left(c,\mathbf{u}\right)=\rho\left(c,\mathbf{u}\right)=\left(\rho c,\mathbf{j}\right) where \rho=\gamma \rho_0 is the electric charge density, and \mathbf{u} is the charge velocity. Moreover, \rho_0=nq, where q is the electric charge and n=N/V is the charge number density, i.e., the number of “elementary” charges in a certain volume. Indeed, we can identify the components of such a four-vector: \mathbb{J}=\left(J^0,J^1,J^2,J^3\right)=\rho_0\left(c\gamma,\gamma\mathbf{v}\right)=\rho_0\gamma\left(c,\mathbf{u}\right). We can make some interesting observations. Suppose a certain rest frame S where we have \rho=\rho_++\rho_-=0, i.e., a frame with balanced charges \rho_+=-\rho_-, and suppose we boost to the frame comoving with the negative charges (the electrons). Then u=v(e) and j_x=\rho_-v, while the other components are j_y=j_z=0. Then, the charge density current transforms as follows (see also the sketch after this list): \begin{pmatrix}\rho'c\\ j'_x\\ j'_y\\j'_z\end{pmatrix}=\begin{pmatrix}\gamma & -\beta\gamma & 0 & 0\\ -\beta\gamma & \gamma & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0& 1\end{pmatrix}\begin{pmatrix}0\\ \rho_- v\\ 0\\ 0\end{pmatrix}=\begin{pmatrix}-\gamma \beta \rho_- v\\ \gamma \rho_- v\\ 0\\ 0\end{pmatrix} \rho'=-\gamma \beta^2\rho_-=\gamma\beta^2\rho_+ j'_x=\gamma\rho_- v=-\gamma \rho_+ v We conclude: 1st. Length contraction implies that the charge density increases by a gamma factor, i.e., \rho_+\rightarrow \gamma\rho_+. 2nd. The crystal lattice “hole” velocity -v in the primed frame implies the existence in that frame of a current density j'_x=-\gamma \rho_+ v. 3rd. The existence of charges in motion, when seen from an inertial frame (boosted from a rest reference S), implies that in a moving reference frame electric fields do not come alone but together with magnetic fields. From this perspective, magnetic fields are associated with the existence of moving charges. That is, electric fields and magnetic fields are intimately connected, and they are caused by static and moving charges, as we know from classical non-relativistic physics.
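The neutral-wire calculation above can be replayed numerically. A minimal Python sketch (assumed values, with c=1) boosts the current four-vector of a globally neutral wire and watches a net charge density appear, exactly as the formulas predict:

import numpy as np

# Assumed example (c = 1): neutral wire, negative charges drifting with speed v.
c = 1.0
v = 0.5
rho_plus = 1.0
rho_minus = -rho_plus
J = np.array([0.0, rho_minus * v, 0.0, 0.0])   # (rho c, j_x, j_y, j_z) in the rest frame S

beta = v / c                   # boost into the frame comoving with the electrons
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -beta * gamma, 0, 0],
              [-beta * gamma, gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
Jp = L @ J

print(Jp[0] / c, gamma * beta**2 * rho_plus)   # rho' = gamma beta^2 rho_+, nonzero!
print(Jp[1], -gamma * rho_plus * v)            # j'_x = -gamma rho_+ v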
Remember now the general expression of the FORPOWER four-vector, or power-force four-vector, in SR: \mathcal{F}=\mathcal{F}^\mu e_\mu=\gamma\left(\dfrac{\mathbf{f}\cdot\mathbf{v}}{c},f_x,f_y,f_z\right) and using the metric, with the mostly plus convention, we get the covariant components for the power-force four-vector: \mathcal{F}_\mu=\gamma\left(-\dfrac{\mathbf{f}\cdot\mathbf{v}}{c},f_x,f_y,f_z\right) We define the Lorentz force as the sum of the electric and magnetic forces: \mathbf{f}_L=\mathbf{f}_e+\mathbf{f}_m=q\left(\mathbf{E}+\mathbf{v}\times \mathbf{B}\right) Noting that (\mathbf{v}\times\mathbf{B})\cdot \mathbf{v}=0, the power-force four-vector for the Lorentz electromagnetic force reads: \mathcal{F}_L=\mathcal{F}^\mu e_\mu=\gamma q\left(\dfrac{\mathbf{E}\cdot{\mathbf{v}}}{c},\mathbf{E}+\mathbf{v}\times\mathbf{B}\right) And now, we realize that we can understand the electromagnetic force in terms of a tensor of type (1,1), i.e., a matrix, if we write: \begin{pmatrix}\mathcal{F}^0\\ \mathcal{F}^1\\ \mathcal{F}^2\\ \mathcal{F}^3\end{pmatrix}=\dfrac{q}{c}\begin{pmatrix}0 &E_x & E_y & E_z\\ E_x & 0 & cB_z& -cB_y\\ E_y & -cB_z & 0 & cB_x\\ E_z & cB_y& -cB_x& 0\end{pmatrix}\begin{pmatrix}\gamma c\\ \gamma v_x\\ \gamma v_y\\ \gamma v_z\end{pmatrix} Therefore, \mathcal{F}^\mu=\dfrac{q}{c}F^\mu_{\;\; \nu}U^\nu\leftrightarrow \mathcal{F}=\dfrac{q}{c}\mathbb{F}\mathbb{U} where the components of the (1,1) tensor can be read off: \mathbb{F}=\mathbf{F}^\mu _{\;\; \nu}=\begin{pmatrix}0& E_x& E_y& E_z\\ E_x & 0 & cB_z& -cB_y\\ E_y& -cB_z& 0 & cB_x\\ E_z & cB_y& -cB_x& 0\end{pmatrix} We can raise or lower an index with the metric \eta=diag(-1,1,1,1) in order to have a more “natural” equation and to read off the antisymmetry of the electromagnetic tensor (note that symmetry properties are read from tensors with both indices of the same type, not from a mixed tensor): \mathbf{F}_{\mu\nu}=\eta_{\mu \alpha}\mathbf{F}^{\alpha}_{\;\; \nu} \mathbf{F}^{\mu\nu}=\mathbf{F}^{\mu}_{\;\; \beta}\eta^{\beta \nu}=\begin{pmatrix}0& E_x& E_y& E_z\\ -E_x & 0 & cB_z& -cB_y\\ -E_y& -cB_z& 0 & cB_x\\ -E_z & cB_y& -cB_x& 0\end{pmatrix} Please, note that F_{\mu\nu}=-F_{\nu\mu}. Focusing on the components of the electromagnetic tensor as a tensor of type (1,1), its components change under Lorentz transformations as F'=LFL^{-1}; for a boost with \mathbf{v}=\left(v,0,0\right) we write: \boxed{F'=\begin{pmatrix}\gamma & -\beta\gamma & 0 & 0\\ -\beta\gamma & \gamma & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0& 1\end{pmatrix}\begin{pmatrix}0& E_x& E_y& E_z\\ E_x & 0 & cB_z& -cB_y\\ E_y& -cB_z& 0 & cB_x\\ E_z & cB_y& -cB_x& 0\end{pmatrix}\begin{pmatrix}\gamma & \beta\gamma & 0 & 0\\ \beta\gamma & \gamma & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0& 1\end{pmatrix}} \boxed{F'=\mathbf{F}^{\mu'}_{\;\; \nu'}=\begin{pmatrix}0& E_x& \gamma (E_y-vB_z)& \gamma (E_z+vB_y)\\ E_x & 0 & c\gamma(B_z-\frac{v}{c^2}E_y) & -c\gamma (B_y+\frac{v}{c^2}E_z)\\ \gamma (E_y-vB_z)& -c\gamma (B_z-\frac{v}{c^2}E_y) & 0 & cB_x\\ \gamma (E_z+vB_y) & c\gamma (B_y+\frac{v}{c^2}E_z)& -cB_x& 0\end{pmatrix}} From this equation we deduce that: \mbox{EM fields after a boost}\begin{cases}E_{x'}=E_x,\; \; E_{y'}=\gamma \left( E_y-vB_z\right),\;\; E_{z'}=\gamma \left(E_z+vB_y\right)\\ B_{x'}=B_x,\;\; B_{y'}=\gamma \left(B_y+\frac{v}{c^2}E_z\right),\;\;B_{z'}=\gamma \left(B_z-\frac{v}{c^2}E_y\right)\end{cases}
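These boxed rules can be cross-checked by boosting the field tensor directly. A minimal Python sketch (assumed field values and boost speed, with c=1) builds the mixed tensor F^μ_ν from E and B, applies F'=LFL^{-1}, and compares two entries against the rules just derived:

import numpy as np

# Assumed example values for the fields and the boost (c = 1).
c = 1.0
Ex, Ey, Ez = 0.1, 0.2, 0.3
Bx, By, Bz = 0.4, 0.5, 0.6
F = np.array([[0,    Ex,    Ey,    Ez],
              [Ex,   0,     c*Bz, -c*By],
              [Ey,  -c*Bz,  0,     c*Bx],
              [Ez,   c*By, -c*Bx,  0]])

v = 0.4
gamma = 1.0 / np.sqrt(1.0 - (v / c)**2)
L = np.array([[gamma, -v * gamma / c, 0, 0],
              [-v * gamma / c, gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
Fp = L @ F @ np.linalg.inv(L)

print(Fp[0, 2], gamma * (Ey - v * Bz))             # E'_y from the boxed rule
print(Fp[1, 2] / c, gamma * (Bz - v * Ey / c**2))  # B'_z from the boxed rule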
Example: In the S-frame we have the fields E=(0,0,0) and B=(0,B_y,0), and a charge moving with velocity \mathbf{v}=(v,0,0). The Coulomb force is f_C=qE=(0,0,0) and the magnetic force is f_m=q\mathbf{v}\times\mathbf{B}=(0,0,qvB_y). How are these fields seen from the S’-frame? It is easy using the above transformations. We obtain E'=(0,0,\gamma vB_y), B'=(0,\gamma B_y,0), f'_C=qE'=(0,0,\gamma qvB_y), f'_m=q\mathbf{v}'\times\mathbf{B}'=(0,0,0), since the charge is at rest in S’. Surprisingly, or not, the S’-observer sees a boosted electric field (non-null!), a boosted magnetic field, a boosted non-null Coulomb force and a null magnetic force! We can generalize the above transformations to the case of a general velocity in 3d-space \mathbf{v}=(v_x,v_y,v_z): \mathbf{E}_{\parallel'}=\mathbf{E}_\parallel \mathbf{B}_{\parallel'}=\mathbf{B}_{\parallel} \mathbf{E}_{\perp'}=\gamma \left[\mathbf{E}_\perp+(\mathbf{v}\times \mathbf{B})_\perp\right]=\gamma \left[\mathbf{E}_\perp+(\mathbf{v}\times \mathbf{B})\right] \mathbf{B}_{\perp'}=\gamma \left[\mathbf{B}_\perp-\dfrac{1}{c^2}(\mathbf{v}\times \mathbf{E})_\perp\right]=\gamma \left[\mathbf{B}_\perp-\dfrac{1}{c^2}(\mathbf{v}\times \mathbf{E})\right] The last equality in the last two equations holds because a cross product with \mathbf{v} is automatically perpendicular to \mathbf{v}. From these equations, we easily obtain: E_\parallel=\dfrac{(v\cdot E)v}{v^2}=\dfrac{(\beta\cdot E)\beta}{\beta^2} E_\perp=E-E_\parallel=E-\dfrac{(v\cdot E)v}{v^2} and similarly with the magnetic field. The final transformations we obtain are: \boxed{E'=E_{\parallel'}+E_{\perp'}=\dfrac{(v\cdot E)v}{v^2}+\gamma \left[ E-\dfrac{(v\cdot E)v}{v^2}+v\times B\right]} \boxed{B'=B_{\parallel'}+B_{\perp'}=\dfrac{(v\cdot B)v}{v^2}+\gamma \left[ B-\dfrac{(v\cdot B)v}{v^2}-\dfrac{1}{c^2}v\times E\right]} \boxed{E'=\gamma \left(E+v\times B\right)-\left(\gamma-1\right)\dfrac{\left(v\cdot E\right) v}{v^2}} \boxed{B'=\gamma \left(B-v\times \dfrac{E}{c^2}\right)-\left(\gamma-1\right)\dfrac{\left(v\cdot B\right) v}{v^2}} In the non-relativistic limit \gamma\rightarrow 1 (i.e. \dfrac{v}{c}\rightarrow 0), we get E'=E+v\times B B'=B-\dfrac{v\times E}{c^2} There are two invariants for electromagnetic fields: I_1=\mathbf{E}\cdot\mathbf{B} and I_2=\mathbf{E}^2-c^2\mathbf{B}^2 It can be checked that \mathbf{E}^2-c^2\mathbf{B}^2=\mathbf{E'}^2-c^2\mathbf{B'}^2=invariant under Lorentz transformations. It is obvious since, up to multiplicative constants, I_1=\dfrac{1}{4} F^\star_{\mu\nu}F^{\mu\nu}=\dfrac{1}{8}\epsilon_{\mu\nu\sigma \tau}F^{\sigma \tau}F^{\mu\nu}=\dfrac{1}{2}tr \left(F^{\star T}F\right) and I_2\propto F_{\mu\nu}F^{\mu\nu}, where we have defined the dual electromagnetic field as \star F=F^\star_{\mu\nu}=\dfrac{1}{2}\epsilon_{\mu\nu \sigma \tau}F^{\sigma \tau} or, if we write it in components (duality sends \mathbf{E} to \mathbf{B} and \mathbf{B} to -\mathbf{E}): \star F=F^\star_{\mu\nu}=\begin{pmatrix}0& -B_x& -B_y& -B_z\\ B_x & 0 & -cE_z& cE_y\\ B_y& cE_z& 0 & -cE_x\\ B_z & -cE_y& cE_x& 0\end{pmatrix}
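The invariance of I_1 and I_2 under the boxed general-velocity transformations can be verified numerically. A minimal Python sketch (assumed field and velocity values, c=1; the formulas are exactly the boxed E' and B' above):

import numpy as np

# Assumed example values.
c = 1.0
E = np.array([0.3, -0.1, 0.8])
B = np.array([0.2, 0.5, -0.4])
v = np.array([0.3, 0.2, 0.1])

v2 = v @ v
gamma = 1.0 / np.sqrt(1.0 - v2 / c**2)
Ep = gamma * (E + np.cross(v, B)) - (gamma - 1.0) * (v @ E) * v / v2
Bp = gamma * (B - np.cross(v, E) / c**2) - (gamma - 1.0) * (v @ B) * v / v2

print(E @ B, Ep @ Bp)                                      # I_1 unchanged (up to rounding)
print(E @ E - c**2 * (B @ B), Ep @ Ep - c**2 * (Bp @ Bp))  # I_2 unchanged (up to rounding)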
We can guess some consequences from the electromagnetic invariants: 1st. If E\perp B, then E\cdot B=0 and thus E\perp B in every frame! This fact is important, since it shows that the E and B fields of a plane wave are orthogonal in every frame in SR. It is also the case of electromagnetic radiation. 2nd. Since E\cdot B=\vert E\vert \vert B\vert \cos \varphi, the invariant can be, in the non-orthogonal case, either positive or negative. If E\cdot B is positive, then it will be positive in every frame, and similarly in the negative case. Moreover, a transformation into a frame with E=0 (null electric field) and/or B=0 (null magnetic field) is then impossible. That is, if a Lorentz transformation can turn the electric field or the magnetic field to zero, the electric field and the magnetic field must be orthogonal. 3rd. If E=cB, i.e., if E^2-c^2B^2=0, then it is valid in every frame. 4th. If there is an electric field but no magnetic field B in S, a Lorentz transformation to a purely magnetic field B’ in S’ is impossible, and vice versa. 5th. If the fields are such that E>cB or E<cB, they can be turned into a purely electric or purely magnetic field in some frame only if the electric field and the magnetic field are orthogonal. 6th. There is a trick to remember the two invariants. It is due to Riemann. We can build the six-dimensional (six-vector, or sixtor) complex-valued entity \mathbf{F}=\mathbf{E}+ic\mathbf{B} The two invariants are easily obtained by squaring F: \mathbf{F}^2=\mathbf{E}^2-c^2\mathbf{B}^2+2ic\,\mathbf{E}\cdot\mathbf{B}=I_2+2icI_1 We can introduce now a vector potential four-vector: \mathbb{A}=A^\mu e_\mu=\left( A^0,A^1,A^2,A^3\right)=\left(\dfrac{V}{c},\mathbf{A}\right)=\left(\dfrac{V}{c},A_x,A_y,A_z\right) This four-vector is also called the gauge field. We can write the Maxwell tensor in terms of this field: F_{\mu\nu}=\partial_\mu A_\nu-\partial _\nu A_\mu It can be easily proved that, up to a multiplicative constant in front of the electric current four-vector, the first set of Maxwell equations is: \boxed{\partial_\mu F^{\mu\nu}=j^\nu \leftrightarrow \square \cdot \mathbf{F}=\mathbb{J}} The second set of Maxwell equations (sometimes called Bianchi identities) can be written as follows: \boxed{\partial_\mu F^{\star \mu \nu}=\dfrac{1}{2}\epsilon^{\nu\mu\alpha\beta}\partial_\mu F_{\alpha\beta}=0} The Maxwell equations are invariant under the gauge transformations in spacetime: \boxed{A^{\mu'}=A^\mu+e\partial^\mu \Psi} where \Psi is an arbitrary (differentiable) function of the spacetime coordinates. Some choices of gauge are common in the solution of electromagnetic problems: A) Lorentz gauge: \square \cdot A=\partial_\mu A^\mu=0 B) Coulomb gauge: \nabla \cdot \mathbf{A}=0 C) Temporal gauge: A^0=V/c=0 If we use the Lorentz gauge, and the Maxwell equations without sources, we deduce that the vector potential components satisfy the wave equation, i.e., \boxed{\square^2 A^\mu=0 \leftrightarrow \square^2 \mathbb{A}=0} Finally, let me point out an important thing about the Maxwell equations. Specifically, about their invariance group. It is known that the Maxwell equations are invariant under Lorentz transformations, and this was the guide Einstein used to extend Galilean relativity to the case of electromagnetic fields, enlarging the mechanical concepts. However, the largest group leaving the (source-free) Maxwell equations invariant is not the Lorentz group but the conformal group. But that is another story, unrelated to this post.
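As a coda to this post, the gauge invariance stated above is a one-liner to confirm symbolically. Here is a sketch with sympy (the potential components and the gauge function are left fully arbitrary; the constant e in front of \partial^\mu \Psi follows the convention used above):

import sympy as sp

# Verify that F_{mu nu} = d_mu A_nu - d_nu A_mu is unchanged by A_mu -> A_mu + e * d_mu Psi.
t, x, y, z, e = sp.symbols('t x y z e')
X = (t, x, y, z)
A = [sp.Function(f'A{m}')(*X) for m in range(4)]      # arbitrary potential components
Psi = sp.Function('Psi')(*X)                          # arbitrary gauge function
Ag = [A[m] + e * sp.diff(Psi, X[m]) for m in range(4)]

F  = [[sp.diff(A[n],  X[m]) - sp.diff(A[m],  X[n]) for n in range(4)] for m in range(4)]
Fg = [[sp.diff(Ag[n], X[m]) - sp.diff(Ag[m], X[n]) for n in range(4)] for m in range(4)]

print(all(sp.simplify(F[m][n] - Fg[m][n]) == 0 for m in range(4) for n in range(4)))  # True

The check succeeds because mixed partial derivatives of Ψ commute, which is exactly why the extra gauge term drops out of the antisymmetrized derivative.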
Quantum Mechanics/Time Independent Schrödinger Consider a particle confined to a one-dimensional box with impenetrable walls. When you solve the Schrödinger equation for the wavefunctions you get two sets of solutions: those of positive parity, and those of negative parity: \Psi_{P=1} = A \cos \left[\frac{(2n+1) \pi x}{a}\right] and \Psi_{P=-1} = A \sin \left(\frac{2n \pi x}{a}\right), where n is any positive integer and A is a normalisation constant. Now, we can have all of these infinite states and, if you've ever studied Fourier Analysis, you may have noticed that with these states you can form any function you wish---that is, the wavefunctions are complete. So what have we learned? Well, a lot actually: we have discovered the eigenstates of the Hamiltonian, which can be used to determine the particle's time dependence. Derivation of the Time-Independent Schrödinger Equation We start with the general Schrödinger Equation, and use separation of variables. We have H \Psi = \hat \epsilon \Psi We separate \Psi into two functions: \Psi ( x , t ) = T ( t ) X ( x ) So now the Schrödinger Equation is H T X = \hat \epsilon T X We know from earlier that the "interesting" part of the energy operator \hat \epsilon is a partial derivative with respect to time, and the "interesting" part of the Hamiltonian H is a partial derivative with respect to position. As T does not depend on position, it is not affected by H. Similarly, X is not affected by \hat \epsilon. So we have: T H X = X \hat \epsilon T We can multiply on the left by T^{-1} X^{-1} to obtain X^{-1} H X = T^{-1} \hat \epsilon T Note that the left side depends only on x, while the right side depends only on t. We have two functions which are totally independent, but are somehow equal to each other. This is only possible if both functions are equal to a constant, which we call E: X^{-1} H X = E and T^{-1} \hat \epsilon T = E Naturally this implies H X = E X and \hat \epsilon T = E T We can then expand H and \hat \epsilon and solve these equations.
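To see these eigenstates and eigenvalues emerge concretely, here is a small numerical sketch (not part of the original text; it assumes units ħ=m=1 and a box of width a=1) that diagonalizes a finite-difference Hamiltonian for the box and compares the lowest eigenvalues with the analytic spectrum E_n = n²π²/2:

import numpy as np

# Infinite square well on (0, a): psi vanishes at the walls, so we keep interior points only.
a, N = 1.0, 1000
dx = a / (N + 1)

# H = -(1/2) d^2/dx^2 via a second-order central difference (hbar = m = 1).
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E_num = np.linalg.eigvalsh(H)[:4]
E_exact = np.array([(n * np.pi)**2 / 2.0 for n in range(1, 5)])
print(E_num)      # close to E_exact
print(E_exact)

The numerical eigenvalues converge to the analytic ones as N grows, and the corresponding eigenvectors reproduce the alternating cosine/sine parity pattern described above.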
Complex number A complex number can be visually represented as a pair of numbers (a, b) forming a vector on a diagram called an Argand diagram, representing the complex plane. "Re" is the real axis, "Im" is the imaginary axis, and i is the imaginary unit, satisfying i^2 = −1. A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying i^2 = −1.[1] In this expression, a is the real part and b is the imaginary part of the complex number. Complex numbers extend the idea of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. The complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose imaginary part is zero is a real number. In this way the complex numbers contain the ordinary real numbers while extending them in order to solve problems that cannot be solved with real numbers alone. As well as their use within mathematics, complex numbers have practical applications in many fields, including physics, chemistry, biology, economics, electrical engineering, and statistics. The Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers. He called them "fictitious" during his attempts to find solutions to cubic equations in the 16th century,[2] but complex numbers are no more or less "fictitious" or "imaginary" than any other kind of number. Complex numbers allow for solutions to certain equations that have no real solutions: the equation (x+1)^2 = -9 \, has no real solution, since the square of a real number is either 0 or positive. Complex numbers provide a solution to this problem. The idea is to extend the real numbers with the imaginary unit i where i^2=-1, so that solutions to equations like the preceding one can be found. In this case the solutions are −1 + 3i and −1 − 3i, as can be verified using the fact that i^2=-1: ((-1+3i)+1)^2 = (3i)^2 = (3^2)(i^2) = 9(-1) = -9 ((-1-3i)+1)^2 = (-3i)^2 = (-3)^2(i^2) = 9(-1) = -9 In fact not only quadratic equations, but all polynomial equations in a single variable can be solved using complex numbers. An illustration of the complex plane. The real part of a complex number z = x + iy is x, and its imaginary part is y. A complex number is thus a number of the form a+bi, \ where a and b are real numbers and i is the imaginary unit, satisfying i^2 = −1. For example, −3.5 + 2i is a complex number. It is common to write a for a + 0i and bi for 0 + bi. Moreover, when the imaginary part is negative, it is common to write a − bi with b > 0 instead of a + (−b)i, for example 3 − 4i instead of 3 + (−4)i. The set of all complex numbers is denoted by \mathbf{C} or \mathbb{C}. The real number a in the complex number z = a + bi is called the real part of z, and the real number b is often called the imaginary part. By this convention the imaginary part is a real number – not including the imaginary unit: hence b, not bi, is the imaginary part.[3][4] The real part a is denoted by Re(z) or ℜ(z), and the imaginary part b is denoted by Im(z) or ℑ(z). For example, \operatorname{Re}(-3.5 + 2i) = -3.5 and \operatorname{Im}(-3.5 + 2i) = 2. A real number a can be regarded as a complex number a + 0i with an imaginary part of zero.
A pure imaginary number bi is a complex number 0 + bi whose real part is zero. Some authors write a + ib instead of a + bi. In some disciplines, in particular electromagnetism and electrical engineering, j is used instead of i,[5] since i is frequently used for electric current. In these cases complex numbers are written as a + bj or a + jb. Complex plane Figure 1: A complex number plotted as a point (red) and position vector (blue) on an Argand diagram; a+bi is the rectangular expression of the point. A complex number can be viewed as a point or position vector in a two-dimensional Cartesian coordinate system called the complex plane or Argand diagram (see Pedoe 1988 and Solomentsev 2001), named after Jean-Robert Argand. The numbers are conventionally plotted using the real part as the horizontal component, and imaginary part as vertical (see Figure 1). These two values used to identify a given complex number are therefore called its Cartesian, rectangular, or algebraic form. The defining characteristic of a position vector is that it has magnitude and direction. These are emphasised in a complex number's polar form, and it turns out notably that the operations of addition and multiplication take on a very natural geometric character when complex numbers are viewed as position vectors: addition corresponds to vector addition, while multiplication corresponds to multiplying their magnitudes and adding their arguments (i.e. the angles they make with the x axis). Viewed in this way, the multiplication of a complex number by i corresponds to rotating a complex number counterclockwise through 90° about the origin: (a+bi)i = ai+bi^2 = -b+ai . History in brief The solution in radicals (without trigonometric functions) of a general cubic equation contains the square roots of negative numbers when all three roots are real numbers, a situation that cannot be rectified by factoring aided by the rational root test if the cubic is irreducible (the so-called casus irreducibilis). This conundrum led Italian mathematician Gerolamo Cardano to conceive of complex numbers in around 1545, though his understanding was rudimentary. Work on the problem of general polynomials ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, a solution exists to every polynomial equation of degree one or higher. Complex numbers thus form an algebraically closed field, where any polynomial equation has a root. Many mathematicians contributed to the full development of complex numbers. The rules for addition, subtraction, multiplication, and division of complex numbers were developed by the Italian mathematician Rafael Bombelli.[6] A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions. Elementary operations Geometric representation of z and its conjugate \bar{z} in the complex plane The complex conjugate of the complex number z = x + yi is defined to be x − yi. It is denoted \bar{z} or z*. Geometrically, \bar{z} is the "reflection" of z about the real axis. In particular, conjugating twice gives the original complex number: \bar{\bar{z}}=z. The real and imaginary parts of a complex number can be extracted using the conjugate: \operatorname{Re}\,(z) = \tfrac{1}{2}(z+\bar{z}), \, \operatorname{Im}\,(z) = \tfrac{1}{2i}(z-\bar{z}). \, Moreover, a complex number is real if and only if it equals its conjugate.
Conjugation distributes over the standard arithmetic operations: \overline{z+w} = \bar{z} + \bar{w}, \, \overline{z w} = \bar{z} \bar{w}, \, \overline{(z/w)} = \bar{z}/\bar{w}. \, The reciprocal of a nonzero complex number z = x + yi is given by \frac{1}{z}=\frac{\bar{z}}{z \bar{z}}=\frac{\bar{z}}{x^2+y^2}. This formula can be used to compute the multiplicative inverse of a complex number if it is given in rectangular coordinates. Inversive geometry, a branch of geometry studying more general reflections than ones about a line, can also be expressed in terms of complex numbers. Addition and subtraction Addition of two complex numbers can be done geometrically by constructing a parallelogram. Complex numbers are added by adding the real and imaginary parts of the summands. That is to say: (a+bi) + (c+di) = (a+c) + (b+d)i.\ Similarly, subtraction is defined by (a+bi) - (c+di) = (a-c) + (b-d)i.\ Using the visualization of complex numbers in the complex plane, the addition has the following geometric interpretation: the sum of two complex numbers A and B, interpreted as points of the complex plane, is the point X obtained by building a parallelogram three of whose vertices are O, A and B. Equivalently, X is the point such that the triangles with vertices O, A, B, and X, B, A, are congruent. Multiplication and division The multiplication of two complex numbers is defined by the following formula: (a+bi) (c+di) = (ac-bd) + (bc+ad)i.\ In particular, the square of the imaginary unit is −1: i^2 = i \times i = -1.\ The preceding definition of multiplication of general complex numbers follows naturally from this fundamental property of the imaginary unit. Indeed, if i is treated as a number so that di means d times i, the above multiplication rule is identical to the usual rule for multiplying two sums of two terms: (a+bi) (c+di) = ac + bci + adi + bidi \ (distributive law) = ac + bidi + bci + adi \ (commutative law of addition—the order of the summands can be changed) = ac + bdi^2 + (bc+ad)i \ (commutative law of multiplication—the order of the multiplicands can be changed) = (ac-bd) + (bc + ad)i \ (fundamental property of the imaginary unit). The division of two complex numbers is defined in terms of complex multiplication, which is described above, and real division. Where at least one of c and d is non-zero: \,\frac{a + bi}{c + di} = \left({ac + bd \over c^2 + d^2}\right) + \left( {bc - ad \over c^2 + d^2} \right)i. Division can be defined in this way because of the following observation: \,\frac{a + bi}{c + di} = \frac{\left(a + bi\right) \cdot \left(c - di\right)}{\left (c + di\right) \cdot \left (c - di\right)} = \left({ac + bd \over c^2 + d^2}\right) + \left( {bc - ad \over c^2 + d^2} \right)i. As shown earlier, c − di is the complex conjugate of the denominator c + di. The real part c and the imaginary part d of the denominator must not both be zero for division to be defined. Square root The square roots of a + bi (with b ≠ 0) are \pm (\gamma + \delta i), where \gamma = \sqrt{\frac{a + \sqrt{a^2 + b^2}}{2}} \delta = \sgn (b) \sqrt{\frac{-a + \sqrt{a^2 + b^2}}{2}}, where sgn is the signum function. This can be seen by squaring \pm (\gamma + \delta i) to obtain a + bi.[7][8] Here \sqrt{a^2 + b^2} is called the modulus of a + bi, and the square root with non-negative real part is called the principal square root.
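As a sanity check of the square-root formulas, here is a tiny Python sketch (the sample number 3 − 4i is an assumed example) comparing the γ/δ construction with the library routine cmath.sqrt:

import math, cmath

def principal_sqrt(a, b):
    # gamma + delta*i per the formulas above; requires b != 0 for the sgn factor.
    m = math.hypot(a, b)                                 # modulus sqrt(a^2 + b^2)
    gamma = math.sqrt((a + m) / 2.0)
    delta = math.copysign(math.sqrt((-a + m) / 2.0), b)  # sgn(b) * sqrt(...)
    return complex(gamma, delta)

z = 3 - 4j
w = principal_sqrt(z.real, z.imag)
print(w, cmath.sqrt(z))   # both print (2-1j)
print(w * w)              # (3-4j): squaring recovers z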
Polar form Figure 2: The argument φ and modulus r locate a point on an Argand diagram; r(\cos \varphi + i \sin \varphi) or r e^{i\varphi} are polar expressions of the point. Absolute value and argument An alternative way of defining a point P in the complex plane, other than using the x- and y-coordinates, is to use the distance of the point from O, the point whose coordinates are (0, 0) (the origin), together with the angle between the line through P and O and the (horizontal) line which is the positive part of the real axis. This idea leads to the polar form of complex numbers. The absolute value (or modulus or magnitude) of a complex number z = x + yi is \textstyle r=|z|=\sqrt{x^2+y^2}.\, If z is a real number (i.e., y = 0), then r = | x |. In general, by Pythagoras' theorem, r is the distance of the point P representing the complex number z to the origin. The argument or phase of z is the angle of the radius OP with the positive real axis, and is written as \arg(z). As with the modulus, the argument can be found from the rectangular form x+yi:[9] \varphi = \arg(z) = \begin{cases}\arctan(\frac{y}{x}) & \mbox{if } x > 0 \\ \arctan(\frac{y}{x}) + \pi & \mbox{if } x < 0 \mbox{ and } y \ge 0\\ \arctan(\frac{y}{x}) - \pi & \mbox{if } x < 0 \mbox{ and } y < 0\\ \frac{\pi}{2} & \mbox{if } x = 0 \mbox{ and } y > 0\\ -\frac{\pi}{2} & \mbox{if } x = 0 \mbox{ and } y < 0\\ \mbox{indeterminate } & \mbox{if } x = 0 \mbox{ and } y = 0.\end{cases} The value of φ must always be expressed in radians. It can change by any multiple of 2π and still give the same angle. Hence, the arg function is sometimes considered as multivalued. Normally, as given above, the principal value in the interval (−π,π] is chosen. Values in the range [0,2π) are obtained by adding 2π if the value is negative. The polar angle for the complex number 0 is indeterminate, but an arbitrary choice of the angle 0 is common. The value of φ equals the result of atan2: \varphi = \mbox{atan2}(\mbox{imaginary}, \mbox{real}). Together, r and φ give another way of representing complex numbers, the polar form, as the combination of modulus and argument fully specify the position of a point on the plane. Recovering the original rectangular co-ordinates from the polar form is done by the formula called trigonometric form z = r(\cos \varphi + i\sin \varphi ).\, Using Euler's formula this can be written as z = r e^{i \varphi}.\, Using the cis function, this is sometimes abbreviated to z = r \ \operatorname{cis} \ \varphi. \, In angle notation, often used in electronics to represent a phasor with amplitude r and phase φ, it is written as[10] z = r \angle \varphi . \,
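In code, the modulus and principal argument are exactly what math.atan2 and cmath.polar return. A short Python sketch (the sample number is assumed):

import math, cmath

z = -3.5 + 2j
r = abs(z)                          # modulus sqrt(x^2 + y^2)
phi = math.atan2(z.imag, z.real)    # principal argument in (-pi, pi]

print((r, phi))
print(cmath.polar(z))               # the same (r, phi) pair

# Recover the rectangular form: z = r (cos phi + i sin phi) = r e^{i phi}
print(r * (math.cos(phi) + 1j * math.sin(phi)), cmath.rect(r, phi))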
Multiplication, division and exponentiation in polar form Multiplication of 2 + i (blue triangle) and 3 + i (red triangle). The red triangle is rotated to match the vertex of the blue one and stretched by \sqrt{5}, the length of the hypotenuse of the blue triangle. Formulas for multiplication, division and exponentiation are simpler in polar form than the corresponding formulas in Cartesian coordinates. Given two complex numbers z1 = r1(cos φ1 + i sin φ1) and z2 = r2(cos φ2 + i sin φ2), the formula for multiplication is z_1 z_2 = r_1 r_2 (\cos(\varphi_1 + \varphi_2) + i \sin(\varphi_1 + \varphi_2)).\, In other words, the absolute values are multiplied and the arguments are added to yield the polar form of the product. For example, multiplying by i corresponds to a quarter-turn counter-clockwise, which gives back i^2 = −1. The picture at the right illustrates the multiplication of (2+i)(3+i)=5+5i. Since the real and imaginary parts of 5 + 5i are equal, the argument of that number is 45 degrees, or π/4 (in radians). On the other hand, it is also the sum of the angles at the origin of the red and blue triangles, which are arctan(1/3) and arctan(1/2), respectively. Thus, the formula \frac{\pi}{4} = \arctan\frac{1}{2} + \arctan\frac{1}{3} holds. As the arctan function can be approximated highly efficiently, formulas like this—known as Machin-like formulas—are used for high-precision approximations of π. Similarly, division is given by \frac{z_1}{ z_2} = \frac{r_1}{ r_2} \left(\cos(\varphi_1 - \varphi_2) + i \sin(\varphi_1 - \varphi_2)\right). This also implies de Moivre's formula for exponentiation of complex numbers with integer exponents: z^n = r^n\,(\cos n\varphi + i \sin n \varphi). The nth roots of z are given by \sqrt[n]{z} = \sqrt[n]r \left( \cos \left(\frac{\varphi+2k\pi}{n}\right) + i \sin \left(\frac{\varphi+2k\pi}{n}\right)\right) for any integer k satisfying 0 ≤ k ≤ n − 1. Here \sqrt[n]{r} is the usual (positive) nth root of the positive real number r. While the nth root of a positive real number r is chosen to be the positive real number c satisfying c^n = r, there is no natural way of distinguishing one particular complex nth root of a complex number. Therefore, the nth root of z is considered as a multivalued function (in z), as opposed to a usual function f, for which f(z) is a uniquely defined number. Formulas such as \sqrt[n]{z^n} = z (which holds for positive real numbers) do not, in general, hold for complex numbers.
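The nth-root formula translates directly into a few lines of Python. A sketch (the example 8i and n=3 are assumed):

import math, cmath

def nth_roots(z, n):
    # All n complex n-th roots, via the polar-form formula above.
    r, phi = cmath.polar(z)
    return [r ** (1.0 / n) * cmath.exp(1j * (phi + 2.0 * math.pi * k) / n)
            for k in range(n)]

for w in nth_roots(8j, 3):
    print(w, w ** 3)   # each w ** 3 returns (numerically) to 8j

The multivaluedness discussed above is visible here: all three returned values are equally valid cube roots, and no single one is algebraically preferred.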
Field structure The set C of complex numbers is a field. Briefly, this means that the following facts hold: first, any two complex numbers can be added and multiplied to yield another complex number. Second, for any complex number z, its additive inverse −z is also a complex number; and third, every nonzero complex number has a reciprocal complex number. Moreover, these operations satisfy a number of laws, for example the law of commutativity of addition and multiplication for any two complex numbers z1 and z2: z_1+ z_2 = z_2 + z_1, z_1 z_2 = z_2 z_1. These two laws and the other requirements on a field can be proven by the formulas given above, using the fact that the real numbers themselves form a field. Unlike the reals, C is not an ordered field, that is to say, it is not possible to define a relation z1 < z2 that is compatible with the addition and multiplication. In fact, in any ordered field, the square of any nonzero element is necessarily positive, so i^2 = −1 precludes the existence of an ordering on C. When the underlying field for a mathematical topic or construct is the field of complex numbers, the thing's name is usually modified to reflect that fact. For example: complex analysis, complex matrix, complex polynomial, and complex Lie algebra. Solutions of polynomial equations Given any complex numbers (called coefficients) a0, …, an, the equation a_n z^n + \dotsb + a_1 z + a_0 = 0 has at least one complex solution z, provided that at least one of the higher coefficients a1, …, an is nonzero. This is the statement of the fundamental theorem of algebra. Because of this fact, C is called an algebraically closed field. This property does not hold for the field of rational numbers Q (the polynomial x^2 − 2 does not have a rational root, since \sqrt{2} is not a rational number) nor the real numbers R (the polynomial x^2 + a does not have a real root for a > 0, since the square of x is positive for any real number x). There are various proofs of this theorem, either by analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that any real polynomial of odd degree has at least one root. Because of this fact, theorems that hold for any algebraically closed field apply to C. For example, any non-empty complex square matrix has at least one (complex) eigenvalue. Algebraic characterization The field C has the following three properties: first, it has characteristic 0. This means that 1 + 1 + ⋯ + 1 ≠ 0 for any number of summands (all of which equal one). Second, its transcendence degree over Q, the prime field of C, is the cardinality of the continuum. Third, it is algebraically closed (see above). It can be shown that any field having these properties is isomorphic (as a field) to C. For example, the algebraic closure of Qp also satisfies these three properties, so these two fields are isomorphic. Also, C is isomorphic to the field of complex Puiseux series. However, specifying an isomorphism requires the axiom of choice. Another consequence of this algebraic characterization is that C contains many proper subfields that are isomorphic to C. Characterization as a topological field The preceding characterization of C describes only the algebraic aspects of C. That is to say, the properties of nearness and continuity, which matter in areas such as analysis and topology, are not dealt with. The following description of C as a topological field (that is, a field that is equipped with a topology, which allows the notion of convergence) does take into account the topological properties. C contains a subset P (namely the set of positive real numbers) of nonzero elements satisfying the following three conditions: • P is closed under addition, multiplication and taking inverses. • If x and y are distinct elements of P, then either x − y or y − x is in P. • If S is any nonempty subset of P, then S + P = x + P for some x in C. Moreover, C has a nontrivial involutive automorphism x ↦ x* (namely the complex conjugation), such that x x* is in P for any nonzero x in C. Any field F with these properties can be endowed with a topology by taking the sets B(x, p) = { y | p − (y − x)(y − x)* ∈ P } as a base, where x ranges over the field and p ranges over P. With this topology F is isomorphic as a topological field to C. The only connected locally compact topological fields are R and C. This gives another characterization of C as a topological field, since C can be distinguished from R because the nonzero complex numbers are connected, while the nonzero real numbers are not. Formal construction Formal development Above, complex numbers have been defined by introducing i, the imaginary unit, as a symbol. More rigorously, the set C of complex numbers can be defined as the set R^2 of ordered pairs (a, b) of real numbers. In this notation, the above formulas for addition and multiplication read (a, b) + (c, d) = (a + c, b + d) and (a, b) \cdot (c, d) = (ac - bd, bc + ad).\, It is then just a matter of notation to express (a, b) as a + bi. Though this low-level construction does accurately describe the structure of the complex numbers, the following equivalent definition reveals the algebraic nature of C more immediately. This characterization relies on the notion of fields and polynomials. A field is a set endowed with addition, subtraction, multiplication and division operations that behave as is familiar from, say, rational numbers.
For example, the distributive law (x+y) z = xz + yz must hold for any three elements x, y and z of a field. The set R of real numbers does form a field. A polynomial p(X) with real coefficients is an expression of the form p(X)=a_nX^n+\dotsb+a_1X+a_0, where the a_0, …, a_n are real numbers. The usual addition and multiplication of polynomials endows the set R[X] of all such polynomials with a ring structure. This ring is called the polynomial ring. The quotient ring R[X]/(X^2 + 1) can be shown to be a field. This extension field contains two square roots of −1, namely (the cosets of) X and −X, respectively. (The cosets of) 1 and X form a basis of R[X]/(X^2 + 1) as a real vector space, which means that each element of the extension field can be uniquely written as a linear combination in these two elements. Equivalently, elements of the extension field can be written as ordered pairs (a, b) of real numbers. Moreover, the above formulas for addition etc. correspond to the ones yielded by this abstract algebraic approach – the two definitions of the field C are said to be isomorphic (as fields). Together with the above-mentioned fact that C is algebraically closed, this also shows that C is an algebraic closure of R. Matrix representation of complex numbers Complex numbers a + ib can also be represented by 2 × 2 matrices that have the following form: \begin{pmatrix}a & -b \\ b & a\end{pmatrix} Here the entries a and b are real numbers. The sum and product of two such matrices is again of this form, and the sum and product of complex numbers corresponds to the sum and product of such matrices. The geometric description of the multiplication of complex numbers can also be phrased in terms of rotation matrices by using this correspondence between complex numbers and such matrices. Moreover, the square of the absolute value of a complex number expressed as a matrix is equal to the determinant of that matrix: |z|^2 = \det\begin{pmatrix}a & -b \\ b & a\end{pmatrix} = a^2 - (-b)(b) = a^2 + b^2. The conjugate \overline z corresponds to the transpose of the matrix. Though this representation of complex numbers with matrices is the most common, many other representations arise from matrices other than \begin{pmatrix}0 & -1 \\1 & 0 \end{pmatrix} that square to the negative of the identity matrix. See the article on 2 × 2 real matrices for other representations of complex numbers.
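The 2 × 2 matrix picture is easy to exercise in Python. A sketch (sample numbers assumed) checking that matrix products and determinants mirror complex products and |z|²:

import numpy as np

def as_matrix(z):
    # a + bi  <->  [[a, -b], [b, a]]
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 1 + 2j, 3 - 1j
print(as_matrix(z) @ as_matrix(w))               # matrix of the product...
print(as_matrix(z * w))                          # ...equals as_matrix(z * w)
print(np.linalg.det(as_matrix(z)), abs(z)**2)    # both are 5.0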
Complex analysis Color wheel graph of sin(1/z). Black parts inside refer to numbers having large absolute values. The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example). Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color-coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane. Complex exponential and related functions The notions of convergent series and continuous functions in (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said to converge if and only if its real and imaginary parts do. This is equivalent to the (ε, δ)-definition of limits, where the absolute value of real numbers is replaced by the one of complex numbers. From a more abstract point of view, C, endowed with the metric \operatorname{d}(z_1, z_2) = |z_1 - z_2| \, is a complete metric space, which notably includes the triangle inequality |z_1 + z_2| \le |z_1| + |z_2| for any two complex numbers z1 and z2. Like in real analysis, this notion of convergence is used to construct a number of elementary functions: the exponential function exp(z), also written e^z, is defined as the infinite series \exp(z):= 1+z+\frac{z^2}{2\cdot 1}+\frac{z^3}{3\cdot 2\cdot 1}+\cdots = \sum_{n=0}^{\infty} \frac{z^n}{n!}. \, The series defining the real trigonometric functions sine and cosine, as well as hyperbolic functions such as sinh, also carry over to complex arguments without change. Euler's formula states: \exp(i\varphi) = \cos(\varphi) + i\sin(\varphi) \, for any real number φ, in particular \exp(i \pi) = -1 \, (Euler's identity). Unlike in the situation of real numbers, there is an infinitude of complex solutions z of the equation \exp(z) = w \, for any complex number w ≠ 0. It can be shown that any such solution z—called a complex logarithm of w—satisfies \log(w)=\ln|w| + i\arg(w), \, where arg is the argument defined above, and ln the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of 2π, log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval (−π,π]. Complex exponentiation z^ω is defined as z^\omega = \exp(\omega \log z). \, Consequently, complex powers are in general multi-valued. For ω = 1/n, for some natural number n, this recovers the non-uniqueness of nth roots mentioned above. Complex numbers, unlike real numbers, do not in general satisfy the unmodified power and logarithm identities, particularly when naïvely treated as single-valued functions; see failure of power and logarithm identities. For example they do not satisfy \,a^{bc} = (a^b)^c. Both sides of the equation are multivalued by the definition of complex exponentiation given here, and the values on the left are a subset of those on the right. Holomorphic functions A function f : C → C is called holomorphic if it satisfies the Cauchy–Riemann equations. For example, any R-linear map C → C can be written in the form f(z)=az+b\overline{z} with complex coefficients a and b. This map is holomorphic if and only if b = 0. The second summand b \overline z is real-differentiable, but does not satisfy the Cauchy–Riemann equations. Complex analysis shows some features not apparent in real analysis. For example, any two holomorphic functions f and g that agree on an arbitrarily small open subset of C necessarily agree everywhere. Meromorphic functions, functions that can locally be written as f(z)/(z − z0)^n with a holomorphic function f, still share some of the features of holomorphic functions. Other functions have essential singularities, such as sin(1/z) at z = 0. Some applications of complex numbers are: Control theory In control theory, systems are often transformed from the time domain to the frequency domain using the Laplace transform. The system's poles and zeros are then analyzed in the complex plane. The root locus, Nyquist plot, and Nichols plot techniques all make use of the complex plane. In the root locus method, it is especially important whether the poles and zeros are in the left or right half planes, i.e. have real part greater than or less than zero. If a linear, time-invariant (LTI) system has poles that are in the right half plane, it will be unstable; all in the left half plane, it will be stable; on the imaginary axis, it will have marginal stability. If a system has zeros in the right half plane, it is a nonminimum phase system.
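A minimal Python sketch of this stability criterion (the transfer function is an assumed example): the poles of H(s) = 1/(s² + 2s + 5) are the roots of the denominator, and all real parts being negative signals a stable LTI system.

import numpy as np

den = [1.0, 2.0, 5.0]          # denominator s^2 + 2s + 5
poles = np.roots(den)
print(poles)                   # -1 + 2j and -1 - 2j: both in the left half plane
print(np.all(poles.real < 0))  # True, so the (assumed) system is stable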
Improper integrals In applied fields, complex numbers are often used to compute certain real-valued improper integrals, by means of complex-valued functions. Several methods exist to do this; see methods of contour integration. Fluid dynamics In fluid dynamics, complex functions are used to describe potential flow in two dimensions. Dynamic equations In differential equations, it is common to first find all complex roots r of the characteristic equation of a linear differential equation or equation system and then attempt to solve the system in terms of base functions of the form f(t) = e^{rt}. Likewise, in difference equations, the complex roots r of the characteristic equation of the difference equation system are used, to attempt to solve the system in terms of base functions of the form f(t) = r^t. Electromagnetism and electrical engineering In electrical engineering, the Fourier transform is used to analyze varying voltages and currents. The treatment of resistors, capacitors, and inductors can then be unified by introducing imaginary, frequency-dependent resistances for the latter two and combining all three in a single complex number called the impedance. This approach is called phasor calculus. In electrical engineering, the imaginary unit is denoted by j, to avoid confusion with I, which is generally in use to denote electric current. Since the voltage in an AC circuit is oscillating, it can be represented as V = V_0 e^{j \omega t} = V_0 \left (\cos \omega t + j \sin\omega t \right ). To obtain the measurable quantity, the real part is taken: \mathrm{Re}(V) = \mathrm{Re}\left [ V_0 e^{j \omega t} \right ] = V_0 \cos \omega t. See, for example, [11]. Signal analysis Complex numbers are used in signal analysis and other fields for a convenient description for periodically varying signals. For given real functions representing actual physical quantities, often in terms of sines and cosines, corresponding complex functions are considered of which the real parts are the original quantities. For a sine wave of a given frequency, the absolute value | z | of the corresponding z is the amplitude and the argument arg(z) the phase. If Fourier analysis is employed to write a given real-valued signal as a sum of periodic functions, these periodic functions are often written as complex valued functions of the form f ( t ) = z e^{i\omega t} \, where ω represents the angular frequency and the complex number z encodes the phase and amplitude as explained above. This use is also extended into digital signal processing and digital image processing, which utilize digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and otherwise process digital audio signals, still images, and video signals. Another example, relevant to the two side bands of amplitude modulation of AM radio, is: \cos((\omega+\alpha)t)+\cos\left((\omega-\alpha)t\right) = \operatorname{Re}\left(e^{i(\omega+\alpha)t} + e^{i(\omega-\alpha)t}\right) = \operatorname{Re}\left((e^{i\alpha t} + e^{-i\alpha t})\cdot e^{i\omega t}\right) = \operatorname{Re}\left(2\cos(\alpha t) \cdot e^{i\omega t}\right) = 2 \cos(\alpha t) \cdot \operatorname{Re}\left(e^{i\omega t}\right) = 2 \cos(\alpha t)\cdot \cos\left(\omega t\right).
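The two-sideband identity just derived can be confirmed numerically in a few lines of Python (the frequencies are assumed sample values):

import numpy as np

w, a = 5.0, 0.7
t = np.linspace(0.0, 10.0, 1000)

lhs = np.cos((w + a) * t) + np.cos((w - a) * t)
rhs = 2.0 * np.cos(a * t) * np.cos(w * t)
print(np.max(np.abs(lhs - rhs)))   # ~1e-15, i.e. equal up to rounding error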
Quantum mechanics The complex number field is intrinsic to the mathematical formulations of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics – the Schrödinger equation and Heisenberg's matrix mechanics – make use of complex numbers. Relativity In special and general relativity, some formulas for the metric on spacetime become simpler if one takes the time variable to be imaginary. (This is no longer standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity. Geometry Certain fractals are plotted in the complex plane, e.g. the Mandelbrot set and Julia sets. Every triangle has a unique Steiner inellipse—an ellipse inside the triangle and tangent to the midpoints of the three sides of the triangle. The foci of a triangle's Steiner inellipse can be found as follows, according to Marden's theorem:[12][13] Denote the triangle's vertices in the complex plane as a = xA + yAi, b = xB + yBi, and c = xC + yCi. Write the cubic equation (x-a)(x-b)(x-c)=0, take its derivative, and equate the (quadratic) derivative to zero. Marden's theorem says that the solutions of this equation are the complex numbers denoting the locations of the two foci of the Steiner inellipse (a numerical sketch follows at the end of this section). Algebraic number theory Construction of a regular polygon using straightedge and compass. As mentioned above, any nonconstant polynomial equation (in complex coefficients) has a solution in C. A fortiori, the same is true if the equation has rational coefficients. The roots of such equations are called algebraic numbers – they are a principal object of study in algebraic number theory. Compared to the algebraic closure of Q, which also contains all algebraic numbers, C has the advantage of being easily understandable in geometric terms. In this way, algebraic methods can be used to study geometric questions and vice versa. With algebraic methods, more specifically applying the machinery of field theory to the number field containing roots of unity, it can be shown that it is not possible to construct a regular nonagon using only compass and straightedge – a purely geometric problem. Another example are Pythagorean triples (a, b, c), that is to say integers satisfying a^2 + b^2 = c^2 \, (which implies that the triangle having side lengths a, b, and c is a right triangle). They can be studied by considering Gaussian integers, that is, numbers of the form x + iy, where x and y are integers. Analytic number theory Analytic number theory studies numbers, often integers or rationals, by taking advantage of the fact that they can be regarded as complex numbers, in which analytic methods can be used. This is done by encoding number-theoretic information in complex-valued functions. For example, the Riemann zeta-function ζ(s) is related to the distribution of prime numbers.
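As promised above, Marden's theorem is short enough to verify numerically. This Python sketch (the triangle vertices are assumed example values) finds the foci as the roots of the derivative of (x−a)(x−b)(x−c), and confirms that their midpoint is the triangle's centroid, where the center of the Steiner inellipse lies:

import numpy as np

a, b, c = 0 + 0j, 4 + 0j, 1 + 3j   # assumed triangle vertices

p = np.polynomial.polynomial.polyfromroots([a, b, c])  # coefficients of (x-a)(x-b)(x-c)
dp = np.polynomial.polynomial.polyder(p)               # the quadratic derivative
foci = np.polynomial.polynomial.polyroots(dp)          # Marden: the two foci

print(foci)
print(foci.mean(), (a + b + c) / 3.0)   # midpoint of the foci equals the centroid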
History The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Heron of Alexandria in the 1st century AD, where in his Stereometrica he considers, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the term \sqrt{81 - 144} in his calculations, although negative quantities were not conceived of in Hellenistic mathematics and Heron merely replaced it by its positive counterpart.[14] The impetus to study complex numbers proper first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (see Niccolo Fontana Tartaglia, Gerolamo Cardano). It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers. As an example, Tartaglia's formula for a cubic equation of the form x^3 = px + q[15] gives the solution to the equation x^3 = x as \frac{1}{\sqrt{3}}\left(\left(\sqrt{-1}\right)^{1/3}+\frac{1}{\left(\sqrt{-1}\right)^{1/3}}\right). At first glance this looks like nonsense. However, formal calculations with complex numbers show that the equation z^3 = i has solutions -i, {\scriptstyle\frac{\sqrt{3}}{2}}+{\scriptstyle\frac{1}{2}}i and {\scriptstyle\frac{-\sqrt{3}}{2}}+{\scriptstyle\frac{1}{2}}i. Substituting these in turn for {\scriptstyle\sqrt{-1}^{1/3}} in Tartaglia's cubic formula and simplifying, one gets 0, 1 and −1 as the solutions of x^3 - x = 0. Of course this particular equation can be solved at sight, but it does illustrate that when general formulas are used to solve cubic equations with real roots then, as later mathematicians showed rigorously, the use of complex numbers is unavoidable. Rafael Bombelli was the first to explicitly address these seemingly paradoxical solutions of cubic equations and developed the rules for complex arithmetic trying to resolve these issues. The term "imaginary" for these quantities was coined by René Descartes in 1637, although he was at pains to stress their imaginary nature:[16] [...] quelquefois seulement imaginaires c'est-à-dire que l'on peut toujours en imaginer autant que j'ai dit en chaque équation, mais qu'il n'y a quelquefois aucune quantité qui corresponde à celle qu'on imagine. ([...] sometimes only imaginary, that is one can imagine as many as I said in each equation, but sometimes there exists no quantity that matches that which we imagine.) A further source of confusion was that the equation \sqrt{-1}^2=\sqrt{-1}\sqrt{-1}=-1 seemed to be capriciously inconsistent with the algebraic identity \sqrt{a}\sqrt{b}=\sqrt{ab}, which is valid for non-negative real numbers a and b, and which was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity (and the related identity \scriptstyle 1/\sqrt{a}=\sqrt{1/a}) in the case when both a and b are negative even bedeviled Euler. This difficulty eventually led to the convention of using the special symbol i in place of \sqrt{-1} to guard against this mistake. Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today. In his elementary algebra text book, Elements of Algebra, he introduces these numbers almost at once and then uses them in a natural way throughout. In the 18th century complex numbers gained wider use, as it was noticed that formal manipulation of complex expressions could be used to simplify calculations involving trigonometric functions. For instance, in 1730 Abraham de Moivre noted that the complicated identities relating trigonometric functions of an integer multiple of an angle to powers of trigonometric functions of that angle could be simply re-expressed by the following well-known formula which bears his name, de Moivre's formula: (\cos \theta + i\sin \theta)^{n} = \cos n \theta + i\sin n \theta. \, In 1748 Leonhard Euler went further and obtained Euler's formula of complex analysis: \cos \theta + i\sin \theta = e ^{i\theta } \, by formally manipulating complex power series, and observed that this formula could be used to reduce any trigonometric identity to much simpler exponential identities.
The idea of a complex number as a point in the complex plane (above) was first described by Caspar Wessel in 1799, although it had been anticipated as early as 1685 in Wallis's De Algebra tractatus. Wessel's memoir appeared in the Proceedings of the Copenhagen Academy but went largely unnoticed. In 1806 Jean-Robert Argand independently issued a pamphlet on complex numbers and provided a rigorous proof of the fundamental theorem of algebra. Gauss had earlier published an essentially topological proof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of the square root of −1". It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane, largely establishing modern notation and terminology. The English mathematician G. H. Hardy remarked that Gauss was the first mathematician to use complex numbers in "a really confident and scientific way", although mathematicians such as Niels Henrik Abel and Carl Gustav Jacob Jacobi were necessarily using them routinely before Gauss published his 1831 treatise.[17] Augustin Louis Cauchy and Bernhard Riemann together brought the fundamental ideas of complex analysis to a high state of completion, commencing around 1825 in Cauchy's case.

The common terms used in the theory are chiefly due to the founders. Argand called $\cos \phi + i\sin \phi$ the direction factor, and $r = \sqrt{a^2+b^2}$ the modulus; Cauchy (1828) called $\cos \phi + i\sin \phi$ the reduced form (l'expression réduite) and apparently introduced the term argument; Gauss used i for $\sqrt{-1}$, introduced the term complex number for a + bi, and called $a^2 + b^2$ the norm. The expression direction coefficient, often used for $\cos \phi + i\sin \phi$, is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass. Later classical writers on the general theory include Richard Dedekind, Otto Hölder, Felix Klein, Henri Poincaré, Hermann Schwarz, Karl Weierstrass and many others.

Generalizations and related notions

The process of extending the field R of reals to C is known as the Cayley–Dickson construction. It can be carried further to higher dimensions, yielding the quaternions H and octonions O, which (as a real vector space) are of dimension 4 and 8, respectively. However, with increasing dimension, the algebraic properties familiar from real and complex numbers vanish: the quaternions are only a skew field, i.e. $x \cdot y \neq y \cdot x$ for some quaternions x, y, and the multiplication of octonions fails (in addition to not being commutative) to be associative: $(x \cdot y) \cdot z \neq x \cdot (y \cdot z)$ for some octonions x, y, z. However, all of these are normed division algebras over R. By Hurwitz's theorem they are the only ones. The next step in the Cayley–Dickson construction, the sedenions, fail to have this structure.

The Cayley–Dickson construction is closely related to the regular representation of C, thought of as an R-algebra (an R-vector space with a multiplication), with respect to the basis (1, i). This means the following: the R-linear map $\mathbb{C} \rightarrow \mathbb{C}$, $z \mapsto wz$ for some fixed complex number w can be represented by a 2 × 2 matrix (once a basis has been chosen). With respect to the basis (1, i), this matrix is

$$\begin{pmatrix} \operatorname{Re}(w) & -\operatorname{Im}(w) \\ \operatorname{Im}(w) & \operatorname{Re}(w) \end{pmatrix},$$

i.e., the one mentioned in the section on matrix representation of complex numbers above. While this is a linear representation of C in the 2 × 2 real matrices, it is not the only one. Any matrix

$$J = \begin{pmatrix} p & q \\ r & -p \end{pmatrix}, \quad p^2 + qr + 1 = 0,$$

has the property that its square is the negative of the identity matrix: $J^2 = -I$. Then $\{ z = aI + bJ : a, b \in \mathbf{R} \}$ is also isomorphic to the field C, and gives an alternative complex structure on $\mathbf{R}^2$. This is generalized by the notion of a linear complex structure.
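Both representations are easy to check numerically; a minimal sketch (the sample values of w, z, p and q are arbitrary):

```python
import numpy as np

def mat(w):
    """2x2 real matrix representing the complex number w in the basis (1, i)."""
    return np.array([[w.real, -w.imag],
                     [w.imag,  w.real]])

w, z = 2 - 1j, 0.5 + 3j
# Matrix multiplication mirrors complex multiplication:
assert np.allclose(mat(w) @ mat(z), mat(w * z))

# An alternative complex structure: any J with p**2 + q*r + 1 == 0 squares to -I.
p, q = 2.0, 1.0
r = -(p**2 + 1) / q
J = np.array([[p, q], [r, -p]])
assert np.allclose(J @ J, -np.eye(2))
```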
Hypercomplex numbers also generalize R, C, H, and O. For example, this notion contains the split-complex numbers, which are elements of the ring R[x]/(x² − 1) (as opposed to R[x]/(x² + 1)). In this ring, the equation a² = 1 has four solutions.

The field R is the completion of Q, the field of rational numbers, with respect to the usual absolute value metric. Other choices of metrics on Q lead to the fields $\mathbf{Q}_p$ of p-adic numbers (for any prime number p), which are thereby analogous to R. There are no other nontrivial ways of completing Q than R and $\mathbf{Q}_p$, by Ostrowski's theorem. The algebraic closure $\overline{\mathbf{Q}_p}$ of $\mathbf{Q}_p$ still carries a norm, but (unlike C) is not complete with respect to it. The completion $\mathbf{C}_p$ of $\overline{\mathbf{Q}_p}$ turns out to be algebraically closed. This field is called the p-adic complex numbers by analogy.

The fields R and $\mathbf{Q}_p$ and their finite field extensions, including C, are local fields.

See also

Notes

1. ^ Charles P. McKeague (2011). Elementary Algebra. Brooks/Cole. p. 524. ISBN 978-0-8400-6421-9.
2. ^ Burton (1995, p. 294)
3. ^ Spiegel, M. R.; Lipschutz, S.; Schiller, J. J.; Spellman, D. Complex Variables (2nd ed.). Schaum's Outline Series, McGraw-Hill (USA). ISBN 978-0-07-161569-3.
4. ^ Aufmann, Richard N.; Barker, Vernon C.; Nation, Richard D. (2007), College Algebra and Trigonometry (6th ed.), Cengage Learning, Chapter P, p. 66, ISBN 0-618-82515-0.
5. ^ Brown, James Ward; Churchill, Ruel V. (1996). Complex variables and applications (6th ed.). New York: McGraw-Hill. p. 2. ISBN 0-07-912147-0. "In electrical engineering, the letter j is used instead of i."
6. ^ Katz (2004, §9.1.4)
7. ^ Abramowitz, Milton; Stegun, Irene A. (1964), Handbook of mathematical functions with formulas, graphs, and mathematical tables, Courier Dover Publications, Section 3.7.26, p. 17, ISBN 0-486-61272-4.
8. ^ Cooke, Roger (2008), Classical algebra: its nature, origins, and uses, John Wiley and Sons, p. 59, ISBN 0-470-25952-3.
9. ^ Kasana, H. S. (2005), Complex Variables: Theory And Applications (2nd ed.), PHI Learning Pvt. Ltd, p. 14, ISBN 81-203-2641-5.
10. ^ Nilsson, James William; Riedel, Susan A. (2008), Electric circuits (8th ed.), Prentice Hall, Chapter 9, p. 338, ISBN 0-13-198925-1.
11. ^ Grant, I. S.; Phillips, W. R. (2008), Electromagnetism (2nd ed.), Manchester Physics Series, ISBN 0-471-92712-0.
12. ^ Kalman, Dan (2008a), "An Elementary Proof of Marden's Theorem", The American Mathematical Monthly 115: 330–338, ISSN 0002-9890.
13. ^ Kalman, Dan (2008b), "The Most Marvelous Theorem in Mathematics", Journal of Online Mathematics and its Applications.
14. ^ Nahin, Paul J. (2007). An Imaginary Tale: The Story of √−1. Princeton University Press. ISBN 978-0-691-12798-9. Retrieved 20 April 2011.
15. ^ In modern notation, Tartaglia's solution is based on expanding the cube of the sum of two cube roots: $\left(\sqrt[3]{u} + \sqrt[3]{v}\right)^3 = 3\sqrt[3]{uv}\left(\sqrt[3]{u} + \sqrt[3]{v}\right) + u + v$. With $x = \sqrt[3]{u} + \sqrt[3]{v}$, $p = 3\sqrt[3]{uv}$ and $q = u + v$, u and v can be expressed in terms of p and q as $u = q/2 + \sqrt{(q/2)^2-(p/3)^3}$ and $v = q/2 - \sqrt{(q/2)^2-(p/3)^3}$, respectively. Therefore, $x = \sqrt[3]{q/2 + \sqrt{(q/2)^2-(p/3)^3}} + \sqrt[3]{q/2 - \sqrt{(q/2)^2-(p/3)^3}}$. When $(q/2)^2-(p/3)^3$ is negative (casus irreducibilis), the second cube root should be regarded as the complex conjugate of the first one.
16. ^ Descartes, René (1954) [1637], La Géométrie | The Geometry of René Descartes with a facsimile of the first edition, Dover Publications, ISBN 0-486-60068-8, retrieved 20 April 2011.
17. ^ Hardy, G. H.; Wright, E. M. (2000) [1938], An Introduction to the Theory of Numbers (4th ed.), OUP Oxford, p. 189, ISBN 0-19-921986-9.

Mathematical references

Historical references

• Burton, David M. (1995), The History of Mathematics (3rd ed.), New York: McGraw-Hill, ISBN 978-0-07-009465-9.
• Katz, Victor J. (2004), A History of Mathematics, Brief Version, Addison-Wesley, ISBN 978-0-321-16193-2.
• Nahin, Paul J. (1998), An Imaginary Tale: The Story of $\sqrt{-1}$ (hardcover ed.), Princeton University Press, ISBN 0-691-02795-1. A gentle introduction to the history of complex numbers and the beginnings of complex analysis.
• Ebbinghaus, H.-D., et al. (1991), Numbers (hardcover ed.), Springer, ISBN 0-387-97497-0. An advanced perspective on the historical development of the concept of number.

Further reading

• The Road to Reality: A Complete Guide to the Laws of the Universe, by Roger Penrose; Alfred A. Knopf, 2005; ISBN 0-679-45443-8. Chapters 4–7 in particular deal extensively (and enthusiastically) with complex numbers.
• Unknown Quantity: A Real and Imaginary History of Algebra, by John Derbyshire; Joseph Henry Press; ISBN 0-309-09657-X (hardcover 2006). A very readable history with emphasis on solving polynomial equations and the structures of modern algebra.
• Visual Complex Analysis, by Tristan Needham; Clarendon Press; ISBN 0-19-853447-7 (hardcover, 1997). History of complex numbers and complex analysis with compelling and useful visual interpretations.
• Conway, John B., Functions of One Complex Variable I (Graduate Texts in Mathematics), Springer; 2nd edition (September 12, 2005). ISBN 0-387-90328-3.

External links
Computational chemistry

Computational chemistry is a branch of chemistry that uses principles of computer science to assist in solving chemical problems. It uses the results of theoretical chemistry, incorporated into efficient computer programs, to calculate the structures and properties of molecules and solids. Its necessity arises from the well-known fact that, apart from relatively recent results concerning the hydrogen molecular ion (the dihydrogen cation), the quantum many-body problem cannot be solved analytically, much less in closed form. While its results normally complement the information obtained by chemical experiments, it can in some cases predict hitherto unobserved chemical phenomena. It is widely used in the design of new drugs and materials.

Examples of such properties are structure (i.e. the expected positions of the constituent atoms), absolute and relative (interaction) energies, electronic charge distributions, dipoles and higher multipole moments, vibrational frequencies, reactivity or other spectroscopic quantities, and cross sections for collision with other particles.

The methods employed cover both static and dynamic situations. In all cases the computer time and other resources (such as memory and disk space) increase rapidly with the size of the system being studied. That system can be a single molecule, a group of molecules, or a solid. Computational chemistry methods range from highly accurate to very approximate; highly accurate methods are typically feasible only for small systems. Ab initio methods are based entirely on theory from first principles. Other (typically less accurate) methods are called empirical or semi-empirical because they employ experimental results, often from acceptable models of atoms or related molecules, to approximate some elements of the underlying theory.

Both ab initio and semi-empirical approaches involve approximations. These range from simplified forms of the first-principles equations that are easier or faster to solve, to approximations limiting the size of the system (for example, periodic boundary conditions), to fundamental approximations to the underlying equations that are required to achieve any solution to them at all. For example, most ab initio calculations make the Born–Oppenheimer approximation, which greatly simplifies the underlying Schrödinger equation by assuming that the nuclei remain in place during the calculation. In principle, ab initio methods eventually converge to the exact solution of the underlying equations as the number of approximations is reduced. In practice, however, it is impossible to eliminate all approximations, and residual error inevitably remains. The goal of computational chemistry is to minimize this residual error while keeping the calculations tractable.

In some cases, the details of electronic structure are less important than the long-time phase space behavior of molecules. This is the case in conformational studies of proteins and protein-ligand binding thermodynamics. Classical approximations to the potential energy surface are employed, as they are computationally less intensive than electronic calculations, to enable longer simulations of molecular dynamics. Furthermore, cheminformatics uses even more empirical (and computationally cheaper) methods like machine learning based on physicochemical properties.
One typical problem in cheminformatics is to predict the binding affinity of drug molecules to a given target.

History

Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as a primary reference for chemists in the decades to follow.

With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were carried out. Theoretical chemists became extensive users of the early digital computers. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe.[1] The first ab initio Hartree–Fock calculations on diatomic molecules were carried out in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960.[2] The first polyatomic calculations using Gaussian orbitals were carried out in the late 1950s. The first configuration interaction calculations were carried out in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers.[3] By 1971, when a bibliography of ab initio calculations was published,[4] the largest molecules included were naphthalene and azulene.[5][6] Abstracts of many earlier developments in ab initio theory have been published by Schaefer.[7]

In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method for the determination of electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules ranging in complexity from butadiene and benzene to ovalene were generated on computers at Berkeley and Oxford.[8] These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO.[9]

In the early 1970s, efficient ab initio computer programs such as ATMOL, Gaussian, IBMOL, and POLYATOM began to be used to speed up ab initio calculations of molecular orbitals. Of these four programs, only Gaussian, now massively expanded, is still in use, but many other programs are now in use. At the same time, the methods of molecular mechanics, such as MM2, were developed, primarily by Norman Allinger.[10]

One of the first mentions of the term "computational chemistry" can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality."[11] During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry.[12] The Journal of Computational Chemistry was first published in 1980.
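The Hückel calculations mentioned above are simple enough to reproduce in a few lines today. Here is a minimal sketch for 1,3-butadiene, working in the customary units where orbital energies are α + xβ, so only the connectivity matrix is needed:

```python
import numpy as np

# Hückel pi-electron treatment of 1,3-butadiene: four carbons in a chain.
# With H[i][i] = alpha and H[i][j] = beta for bonded pairs, the orbital
# energies are alpha + x*beta, where x are eigenvalues of the adjacency matrix.
adjacency = np.array([[0., 1., 0., 0.],
                      [1., 0., 1., 0.],
                      [0., 1., 0., 1.],
                      [0., 0., 1., 0.]])

x = np.linalg.eigvalsh(adjacency)
print(np.round(x, 3))   # [-1.618 -0.618  0.618  1.618]

# Since beta < 0, the two levels with the largest x lie lowest in energy;
# doubly occupying them with the 4 pi electrons gives
# E = 4*alpha + 2*(1.618 + 0.618)*beta = 4*alpha + 4.472*beta.
```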
Fields of application

The term theoretical chemistry may be defined as a mathematical description of chemistry, whereas computational chemistry is usually used when a mathematical method is sufficiently well developed that it can be automated for implementation on a computer. In theoretical chemistry, chemists, physicists and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions.

There are two different aspects to computational chemistry:

• Computational studies can be carried out to find a starting point for a laboratory synthesis, or to assist in understanding experimental data, such as the position and source of spectroscopic peaks.
• Computational studies can be used to predict the possibility of so far entirely unknown molecules or to explore reaction mechanisms that are not readily studied by experimental means.

Thus, computational chemistry can assist the experimental chemist, or it can challenge the experimental chemist to find entirely new chemical objects.

Several major areas may be distinguished within computational chemistry:

• The prediction of the molecular structure of molecules by the use of the simulation of forces, or more accurate quantum chemical methods, to find stationary points on the energy surface as the position of the nuclei is varied.
• Storing and searching for data on chemical entities (see chemical databases).
• Identifying correlations between chemical structures and properties (see QSPR and QSAR).
• Computational approaches to help in the efficient synthesis of compounds.
• Computational approaches to design molecules that interact in specific ways with other molecules (e.g. drug design and catalysis).

The words exact and perfect do not appear here, as very few aspects of chemistry can be computed exactly. However, almost every aspect of chemistry can be described in a qualitative or approximate quantitative computational scheme.

Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost. Accuracy can always be improved with greater computational cost.

Significant errors can present themselves in ab initio models comprising many electrons, due to the computational expense of fully relativistic methods. This complicates the study of molecules containing heavy atoms, such as transition metals, and of their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of molecules that contain up to about 40 electrons with sufficient accuracy. Errors for energies can be less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometres and bond angles within 0.5 degrees.
The treatment of larger molecules that contain a few dozen electrons is computationally tractable by approximate methods such as density functional theory (DFT). There is some dispute within the field whether or not the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that employ what are called molecular mechanics. In QM/MM methods, small portions of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM).

A single molecular formula can represent a number of molecular isomers. Each isomer is a local minimum on the energy surface (called the potential energy surface) created from the total energy (i.e., the electronic energy plus the repulsion energy between the nuclei) as a function of the coordinates of all the nuclei. A stationary point is a geometry such that the derivative of the energy with respect to all displacements of the nuclei is zero. A local (energy) minimum is a stationary point where all such displacements lead to an increase in energy. The local minimum that is lowest is called the global minimum and corresponds to the most stable isomer. If there is one particular coordinate change that leads to a decrease in the total energy in both directions, the stationary point is a transition structure and the coordinate is the reaction coordinate. This process of determining stationary points is called geometry optimization.

The determination of molecular structure by geometry optimization became routine only after efficient methods for calculating the first derivatives of the energy with respect to all atomic coordinates became available. Evaluation of the related second derivatives allows the prediction of vibrational frequencies if harmonic motion is assumed. More importantly, it allows for the characterization of stationary points. The frequencies are related to the eigenvalues of the Hessian matrix, which contains second derivatives. If the eigenvalues are all positive, then the frequencies are all real and the stationary point is a local minimum. If one eigenvalue is negative (i.e., an imaginary frequency), then the stationary point is a transition structure. If more than one eigenvalue is negative, then the stationary point is a more complex one, and is usually of little interest. When one of these is found, it is necessary to move the search away from it if the experimenter is looking solely for local minima and transition structures. (A small numerical illustration of this classification is sketched below.)

The total energy is determined by approximate solutions of the time-dependent Schrödinger equation, usually with no relativistic terms included, and by making use of the Born–Oppenheimer approximation, which allows for the separation of electronic and nuclear motions, thereby simplifying the Schrödinger equation. This leads to the evaluation of the total energy as a sum of the electronic energy at fixed nuclear positions and the repulsion energy of the nuclei. A notable exception is provided by certain approaches called direct quantum chemistry, which treat electrons and nuclei on a common footing. Density functional methods and semi-empirical methods are variants on the major theme. For very large systems, the relative total energies can be compared using molecular mechanics.
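A minimal sketch of the eigenvalue classification described above, using a hypothetical two-dimensional double-well surface as the model potential:

```python
import numpy as np

def classify(hessian, tol=1e-8):
    """Classify a stationary point by counting negative Hessian eigenvalues."""
    n_neg = int(np.sum(np.linalg.eigvalsh(hessian) < -tol))
    return {0: "local minimum", 1: "transition structure"}.get(
        n_neg, f"higher-order saddle ({n_neg} negative eigenvalues)")

# Toy potential energy surface E(x, y) = (x**2 - 1)**2 + y**2:
# minima at x = +/-1, and a saddle between them at x = 0.
def hessian(x, y):
    # Analytic second derivatives; the cross terms vanish for this surface.
    return np.array([[12 * x**2 - 4, 0.0],
                     [0.0,           2.0]])

print(classify(hessian(1.0, 0.0)))  # local minimum
print(classify(hessian(0.0, 0.0)))  # transition structure
```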
The ways of determining the total energy to predict molecular structures are:

Ab initio methods

The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theoretical principles, with no inclusion of experimental data – are called ab initio methods. This does not imply that the solution is an exact one; they are all approximate quantum mechanical calculations. It means that a particular approximation is rigorously defined on first principles (quantum theory) and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods have to be employed, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made).

[Figure: diagram illustrating various ab initio electronic structure methods in terms of energy; spacings are not to scale.]

The simplest type of ab initio electronic structure calculation is the Hartree–Fock (HF) scheme, an extension of molecular orbital theory, in which the correlated electron–electron repulsion is not specifically taken into account; only its average effect is included in the calculation. As the basis set size is increased, the energy and wave function tend towards a limit called the Hartree–Fock limit. Many types of calculations (known as post-Hartree–Fock methods) begin with a Hartree–Fock calculation and subsequently correct for electron–electron repulsion, referred to also as electronic correlation. As these methods are pushed to the limit, they approach the exact solution of the non-relativistic Schrödinger equation. To obtain exact agreement with experiment, it is necessary to include relativistic and spin-orbit terms, both of which are only really important for heavy atoms. In all of these approaches, in addition to the choice of method, it is necessary to choose a basis set. This is a set of functions, usually centered on the different atoms in the molecule, which are used to expand the molecular orbitals with the LCAO ansatz. Ab initio methods need to define a level of theory (the method) and a basis set.

The Hartree–Fock wave function is a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is quite inadequate, and several configurations need to be used. Here, the coefficients of the configurations and the coefficients of the basis functions are optimized together.

The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without a full knowledge of the complete surface.

A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. To reach that accuracy in an economic way it is necessary to use a series of post-Hartree–Fock methods and combine the results.
These methods are called quantum chemistry composite methods.

Density functional methods

Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree–Fock exchange term and are known as hybrid functional methods.

Semi-empirical and empirical methods

Semi-empirical quantum chemistry methods are based on the Hartree–Fock formalism, but make many approximations and obtain some parameters from empirical data. They are very important in computational chemistry for treating large molecules where the full Hartree–Fock method without the approximations is too expensive. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods. Semi-empirical methods follow what are often called empirical methods, where the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the extended Hückel method proposed by Roald Hoffmann.

Molecular mechanics

In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use a single classical expression for the energy of a compound, for instance the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations. The database of compounds used for parameterization, together with the resulting set of parameters and functions (called the force field), is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance proteins, can be expected to be relevant only when describing other molecules of the same class. These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules.
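To make the molecular mechanics idea concrete, here is a minimal sketch of a force-field-style bond-stretch term; the force constants and geometry are hypothetical placeholders, not values from any real force field:

```python
import numpy as np

# Harmonic bond-stretch energy E = sum over bonds of k_b * (r - r0)**2,
# with made-up constants (roughly kcal/mol/A^2 and Angstroms).
bonds = [  # (atom_i, atom_j, k_b, r0)
    (0, 1, 340.0, 1.09),   # C-H
    (0, 2, 310.0, 1.53),   # C-C
]
coords = np.array([[ 0.00, 0.00, 0.0],    # C
                   [ 1.10, 0.00, 0.0],    # H
                   [-0.77, 1.33, 0.0]])   # C

def bond_energy(coords, bonds):
    e = 0.0
    for i, j, k_b, r0 in bonds:
        r = np.linalg.norm(coords[i] - coords[j])
        e += k_b * (r - r0)**2
    return e

print(bond_energy(coords, bonds))  # small, since the geometry is near r0
```

Real force fields add angle-bend, torsion and non-bonded terms on top of this, but each is evaluated in the same spirit: a cheap classical formula with pre-fitted parameters.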
Methods for solids

Computational chemical methods can be applied to solid state physics problems. The electronic structure of a crystal is in general described by a band structure, which defines the energies of electron orbitals for each point in the Brillouin zone. Ab initio and semi-empirical calculations yield orbital energies; therefore, they can be applied to band structure calculations. Since calculating the energy for a single molecular geometry is already time-consuming, calculating it for the entire list of points in the Brillouin zone is even more so.

Chemical dynamics

Once the electronic and nuclear variables are separated (within the Born–Oppenheimer representation), in the time-dependent approach, the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms. The most popular methods for propagating the wave packet associated to the molecular geometry are:

Molecular dynamics

Molecular dynamics (MD) uses either quantum mechanics, Newton's laws of motion or a mixed model to examine the time-dependent behavior of systems, including vibrations or Brownian motion and reactions. MD combined with density functional theory leads to hybrid models.

Interpreting molecular wave functions

The atoms in molecules (QTAIM) model of Richard Bader was developed in order to effectively link the quantum mechanical picture of a molecule, as an electronic wavefunction, to chemically useful concepts such as atoms in molecules, functional groups, bonding, the theory of Lewis pairs and the valence bond model. Bader has demonstrated that these empirically useful chemistry concepts can be related to the topology of the observable charge density distribution, whether measured or calculated from a quantum mechanical wavefunction. QTAIM analysis of molecular wavefunctions is implemented, for example, in the AIMAll software package.

Software packages

There are many self-sufficient software packages used by computational chemists. Some include many methods covering a wide range, while others concentrate on a very specific range or even a single method. Details of most of them can be found in:

See also

Cited references

1. ^ Smith, S. J.; Sutcliffe, B. T. (1997). "The development of Computational Chemistry in the United Kingdom". Reviews in Computational Chemistry 70: 271–316.
2. ^ Schaefer, Henry F. III (1972). The electronic structure of atoms and molecules. Reading, Massachusetts: Addison-Wesley Publishing Co. p. 146.
3. ^ Boys, S. F.; Cook, G. B.; Reeves, C. M.; Shavitt, I. (1956). "Automatic fundamental calculations of molecular structure". Nature 178 (2): 1207. Bibcode:1956Natur.178.1207B. doi:10.1038/1781207a0.
4. ^ Richards, W. G.; Walker, T. E. H.; Hinkley, R. K. (1971). A bibliography of ab initio molecular wave functions. Oxford: Clarendon Press.
5. ^ Preuss, H. (1968). International Journal of Quantum Chemistry 2: 651. Bibcode:1968IJQC....2..651P. doi:10.1002/qua.560020506.
6. ^ Buenker, R. J.; Peyerimhoff, S. D. (1969). "Ab initio SCF calculations for azulene and naphthalene". Chemical Physics Letters 3: 37. Bibcode:1969CPL.....3...37B. doi:10.1016/0009-2614(69)80014-X.
7. ^ Schaefer, Henry F. III (1984). Quantum Chemistry. Oxford: Clarendon Press.
8. ^ Streitwieser, A.; Brauman, J. I.; Coulson, C. A. (1965). Supplementary Tables of Molecular Orbital Calculations. Oxford: Pergamon Press.
9. ^ Pople, John A.; Beveridge, David L. (1970). Approximate Molecular Orbital Theory. New York: McGraw-Hill.
10. ^ Allinger, Norman (1977). "Conformational analysis. 130. MM2. A hydrocarbon force field utilizing V1 and V2 torsional terms". Journal of the American Chemical Society 99 (25): 8127–8134. doi:10.1021/ja00467a001.
11. ^ Fernbach, Sidney; Taub, Abraham Haskell (1970). Computers and Their Role in the Physical Sciences. Routledge. ISBN 0-677-14030-4.
12. ^ "vol 1, preface". Reviews in Computational Chemistry. doi:10.1002/9780470125786.
Other references

Specialized journals on computational chemistry

External links
Coming from a mathematical background, I'm trying to get a handle on the path integral formulation of quantum mechanics.

According to Feynman, if you want to figure out the probability amplitude for a particle moving from one point to another, you 1) figure out the contribution from every possible path it could take, then 2) "sum up" all the contributions.

Usually when you want to "sum up" an infinite number of things, you do so by putting a measure on the space of things, from which a notion of integration arises. However the function space of paths is not just infinite, it's extremely infinite. If the path-space has a notion of dimension, it would be infinite-dimensional (e.g., viewed as a submanifold of $C([0,t], \mathbb{R}^n)$). For any reasonable notion of distance, every ball will fail to be compact. It's hard to see how one could reasonably define a measure over a space like this – Lebesgue-like measures are certainly out.

The books I've seen basically forgo defining what anything is, and instead present a method to do calculations involving "zig-zag" paths and renormalization. Apparently this gives the right answer experimentally, but it seems extremely contrived (what if you approximate the paths a different way, how do you know you will get the same answer?). Is there a more rigorous way to define Feynman path integrals in terms of function spaces and measures?

This may be one of those things that physicists do that would "make mathematicians throw themselves off the roof," as several of my professors put it ;-) i.e. I'm not sure whether there is a rigorous formulation. But path integrals are very well studied, so I'm sure people have at least tried to find one. Anyway, great question. I'm curious about this myself. – David Z Dec 14 '10 at 5:45

@David Zaslavsky: This is not one of those things. The path integral in quantum mechanics has a perfectly rigorous formulation. What makes mathematicians want to throw themselves off roofs is when physicists conflate Lie groups and Lie algebras. Or when -- despite knowing for 40 years that renormalization is the key organizing principle of QFT -- physicists write textbooks which don't mention the idea until page 300. – user1504 May 17 '12 at 22:45

+1 for the comment on renormalization! – user7757 Jun 4 '14 at 16:04

The path integral is indeed very problematic on its own. But there are ways to almost capture it rigorously.

Wiener process

One way is to start with an abstract Wiener space that can be built out of the Hamiltonian and carries a canonical Wiener measure. This is the usual measure describing properties of the random walk. Now to arrive at the path integral one has to accept the existence of an "infinite-dimensional Wick rotation" and analytically continue the Wiener measure to the complex plane (and every time this is done a probabilist dies somewhere). This is the usual connection between statistical physics (which is a nice, well-defined real theory) at inverse temperature $\beta$ in (N+1, 0) space-time dimensions and evolution of the quantum system in (N, 1) dimensions for time $t = -i \hbar \beta$ that is used all over physics but almost never justified. Although in some cases it was actually possible to prove that a Wightman QFT is indeed a Wick rotation of some Euclidean QFT (note that quantum mechanics is also a special case of QFT in (0, 1) space-time dimensions).
This is a good place to point out that while the path integral is problematic in QM, a whole lot of different issues enter with more space dimensions. One has to deal with operator-valued distributions and there is no good way to multiply them (which is what physicists absolutely need to do). There are various axiomatic approaches to get a handle on this and they in fact do look very nice. Except that it's very hard to actually find a theory that satisfies these axioms. In particular, none of our present-day theories of the Standard Model have been rigorously defined.

Anyway, to make the Wick rotation a bit clearer, recall that the Schrödinger equation is a kind of diffusion equation, but for the introduction of complex numbers. And then just come back to the beginning and note that the diffusion equation is a macroscopic equation that captures the mean behavior of the random walk. (But this is not to say that the path integral in any way depends on the Schrödingerian, non-relativistic physics.)

There were other approaches to define the path integral rigorously. They propose some set of axioms that the path integral has to obey and continue from there. To my knowledge (but I'd like to be wrong), all of these approaches are too constraining (they don't describe most physically interesting situations). But if you'd like I can dig up some references.

I'd be interested to find out more about the analytic continuation approach, even if it turns out to not work (as per the discussion below). I'm a big fan of analytic continuation and Riemann surfaces - not for a good reason, but just because I think they're cool (: – Nick Alger Dec 14 '10 at 23:52

@Nick: all right, I'll dig up some references; but I can't think of any off the top of my head (besides the wikipedia article). But if it really interests you then you should ask for the uses of Wick rotation in physics; I am sure there are many more applications than I am aware of (for example, one time I stumbled upon the use of analytical continuation to study event horizons of some black holes). – Marek Dec 15 '10 at 15:40

In 2-dimensional space-time, Feynman path integrals are perfectly well-defined, though understanding how this is done rigorously is somewhat heavy going. But everything is spelled out in the book "Quantum Physics: A Functional Integral Point of View" by Glimm and Jaffe.

In 4 space-time dimensions, how to make rigorous sense of the Feynman path integral is an unsolved problem. On the one side, there is no indication that some rigorous version of it could not exist, and one expects that the structural properties the integral has in 2D continue to hold. On the other side, constructing a 4D integral having these properties has been successful only in the free case, and the methods used for the constructions in lower dimensions seem too weak to work in 4 dimensions.

Edit: On the other hand, in quantum mechanics with finitely many degrees of freedom, Feynman path integrals are very well understood, and whole books about the subject have been written in a mathematically rigorous style, e.g., the book "The Feynman Integral and Feynman's Operational Calculus" by Johnson and Lapidus.

One major difficulty with defining path integrals (which is entirely mathematicians' fault) is that the mathematicians insist for no good reason (and many bad ones) that there are non-measurable subsets of R.
This is a psychological artifact of the early days of set theory, where ZFC was not seen as a way of generating countable models of a set-theoretic universe, but as the way things REALLY are in Plato-land (whatever that means). Cohen fixed that in 1963, but mathematicians still haven't gotten used to the fix, although that is changing rapidly.

If you assume every subset of R is measurable, the notion of "randomly picking a number between 0 and 1" becomes free of contradiction. In the presence of the axiom of choice, the question "what is the probability that this number lands in a Vitali set?" is paradoxical, but in the real world, it is obviously meaningful. This tension is resolved in what is called a "Solovay model", where you have no more trouble with probability arguments meshing with set theory. For a non-mathematician: when you deal with sets which arise by predicative definition, not by doing uncountable axiom-of-choice shenanigans, probability is never contradictory. A Solovay model still allows you to use countable axiom of choice, and countable dependent choice, which is enough for all usual analysis.

Anyway, inside a Solovay model, you can define a Euclidean bosonic path integral very easily: it is an algorithm for picking a random path, or a random field. This has to be done by step-wise refinement, because the random path or random field has values at a continuum of points, so you need to say what it means for a path to "refine" another path. Further, while paths end up continuous, so that the refinement process is meaningful in the space of continuous paths, fields refine discontinuously. If you have a field whose average value on a lattice is something, it swings more wildly at small distances in dimensions higher than 2, so that in the limit, it defines a random distribution.

If you are allowed to pick at random, part of the battle is won. You get free field path integrals in any dimension with absolutely no work (pick the Fourier components at random as Gaussian random numbers with a width which goes like the propagator). There is no issue with proving measurability, and the space of distributions you get out is just defined to be whatever space of distributions you get by doing the random picking. It's as simple as that. Really.

The remaining battle is just renormalization, at least for bosonic fields with CP-invariant (real) actions, which have a stable vacuum, so that their Euclidean continuation has a probability interpretation. You need to define the stepwise approximations in such a way that their probability distribution function approaches a consistent limit at small refinements. This is slightly tricky, but it automatically defines the measure if you have a Solovay world.

There is nobody working on field theory in Solovay models, but there are people who mock up what is more or less the same thing inside usual set theory by doing what is called "constructive measure theory". I don't think that one can navigate the complicated renormalization arguments unless one is allowed to construct measures using probability intuitions without fear, and without work. And set theorists have known how to do this since 1970.
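The free-field recipe described above (Gaussian Fourier components with propagator widths) is easy to demonstrate on a one-dimensional lattice; a minimal sketch, with arbitrary mass and lattice size:

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw one configuration of a free scalar field of mass m on a periodic
# 1-D lattice: Fourier modes are Gaussian with variance given by the
# lattice propagator 1/(k_hat**2 + m**2). Parameters are ad hoc.
N, m = 256, 0.5
k = 2 * np.pi * np.fft.fftfreq(N)
spectrum = 1.0 / ((2 * np.sin(k / 2))**2 + m**2)

noise = rng.normal(size=N)                      # real white Gaussian noise
phi = np.fft.ifft(np.sqrt(spectrum) * np.fft.fft(noise)).real

# phi is one random field configuration; averaging phi[x] * phi[0] over
# many draws reproduces the propagator (exponential decay with rate ~ m).
print(phi[:5])
```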
This is very intriguing, but at the same time it's very hard to believe that all the problems of the path integral that people have been having trouble with for half a century can be cured by such a naive approach. Any references? – Marek Aug 10 '11 at 9:27

I didn't say that all the problems can be cured, only the measure-theoretic headaches: defining a sigma algebra on the set of distributions, when you don't know what their properties are a priori. This approach automatically shifts the difficulties to the places they are real. There is no reference; it's my own personal view. But I guarantee you that if I ever construct a nonfree field, I will do it within a Solovay model. – Ron Maimon Aug 11 '11 at 0:55

I didn't mean to argue with you. It just strikes me as odd that no one else has as of yet realized this point of view and worked on it if it is as useful as you propose. – Marek Aug 11 '11 at 10:31

The answer is: forget about it. :-)

Currently, there is no satisfying mathematical formalization of the path integral. Coming from a mathematics background myself, I was equally dismayed at this state of affairs. But I have come to terms with it, mainly due to the following historical observation: for several centuries, infinitesimal quantities did not have a satisfying formalization, but that didn't stop mathematicians from understanding and using them. We now have the blessing of Weierstraß's epsilons and deltas, but it is also a curse, since the infinitesimal quantities disappeared as well (outside of non-standard analysis). I would say that the path integral is a similar situation.

However, if you accept the path integral as a "figure of speech", then there are ways to formalize it to some extent. Namely, you can interpret it as an "integration" of the propagator, much like the exponential function is the solution of the differential equation $\dot y = Ay$.

The propagator is the probability amplitude $U(r,t; r_0,t_0)$ of finding a particle at place and time $r, t$ when it originally started at place and time $r_0, t_0$. It is the general solution to the Schrödinger equation

$$ i\hbar \frac{\partial}{\partial t}U(r,t; r_0,t_0) = \hat H(r,t)\, U(r,t; r_0,t_0), \quad U(r,t_0; r_0,t_0) = \delta(r-r_0). $$

Now, pick a time $t_1$ that lies between $t$ and $t_0$. The interpretation of the propagator as a probability amplitude makes it clear that you can also obtain it by integrating over all intermediate positions $r_1$:

$$ U(r,t; r_0,t_0) = \int dr_1\, U(r,t; r_1,t_1)\, U(r_1,t_1; r_0,t_0). $$

If you repeat that procedure and divide the time interval $[t_0, t]$ into infinitely many parts, thus integrating over all possible intermediate positions, you will obtain the path integral. More details on this construction can be found in Altland, Simons, Condensed Matter Field Theory.

Well, I think most of the problems of the path integral can be summarized as: complex numbers. Nothing converges, everything oscillates. But we already know that analytical continuation works in finite dimensions and the success of the path integral suggests that it indeed continues to hold also in infinite dimensions (at least under some conditions). So the way to make the path integral rigorous would be to try to capture the properties of analytical continuation on infinite-dimensional spaces. Do you know whether such a thing has been attempted? – Marek Dec 14 '10 at 10:12

I'm not knowledgeable about current efforts to make the path integral rigorous. However, I don't think that analytic continuation will play a prominent role. Sure, being complex analytic implies a lot of nice rigidity, but axiomatic approaches or something else should work equally well, if not better.
For instance, there is the Henstock-Kurzweil integral, which integrates a lot more functions on the real line than the Lebesgue integral does. (Unfortunately, it's already difficult to define it for dimensions $>1$.) – Greg Graviton Dec 14 '10 at 17:45

As a matter of fact, I think it will play a prominent role. It does play that role in physics and there is no reason for it not to do so in math once people polish things up. As for the axiomatic approach: that is precisely what I am talking about. You can define some axioms but soon you'll find out that they don't really generalize to other situations you are interested in, or that there is nothing actually satisfying the axioms. This is probably because these theories are created by mathematicians and the path integral is too physical in nature. – Marek Dec 14 '10 at 20:36

And I just realized my comment might be a little insulting to mathematicians. So apologies in advance. It was just a general observation that mathematicians hold different values and understand different things than physicists do. – Marek Dec 14 '10 at 20:38

(No worries. :-)) – Greg Graviton Dec 15 '10 at 8:51

For quantum mechanics, there's really nothing unrigorous about the path integral. You have to define it in Euclidean signature, but that's just the way life is with oscillatory integrals. It has nothing to do with the fact that the path integral is infinite-dimensional. Try to insert a set of intermediate states in the propagator $\langle q_f | e^{-iHt/\hbar}|q_i\rangle$ and you'll get an integral that's not absolutely convergent. This expression is just fine if you sandwich it inside a well-defined computation -- e.g., don't use singular wave functions for your initial and final states -- but if you want the expression to stand on its own, you have to provide some additional convergence information.

Usually what people do is observe that the unitary group of time translations is the imaginary boundary of an analytic semigroup. The real part of this semigroup, $e^{-H\tau}$, has a rigorous path integral formula; it's the volume of a cylinder set. The volume of a cylinder set is computed as the limit of cutoff path integrals of the form

$$\frac{1}{Z} \int_{F_{cutoff}} e^{-\frac{1}{\hbar} S_{effective}(\phi)}\, d\phi,$$

where $d\phi$ is a Lebesgue/Haar/whatever measure on the finite-dimensional space of cutoff fields and $S_{effective}$ is a cutoff/lattice approximation to the continuum action. Given such a measure, under reasonable conditions, you can analytically continue the correlation functions from Euclidean signature back to Minkowski.

For the record: mathematicians are not tearing their hair out about this stuff. It's cool. We got it. We -- and by "we", I mean a relatively small number of experts, not necessarily including myself -- can even handle 4d Yang-Mills theory in finite volume. (What's hard is proving facts about the behavior of correlation functions in the IR limit.)

This answer is impossibly misleading: when you say you can handle 4d Yang-Mills in finite volume, you mean 4d lattice Yang-Mills in finite volume, with finite coupling, when the gauge group is a compact product group. Big deal. The whole problem is defining Yang-Mills in a continuum in a finite volume, which is equivalent to the infinite volume/zero coupling limit, and none of the so-called "experts" can handle that.
– Ron Maimon Aug 11 '11 at 0:58

The situation is better than you think: see, for example, … – user1504 Aug 11 '11 at 17:57

There are two more rigorous definitions of the path integral, the latter of which hasn't been mentioned yet:

1. Gauge integrals: Muldowney's book "A Modern Theory of Random Variation", Wiley, 2012, has a chapter on complex-valued Brownian motion (chapter 7). It is the path to stochastic integration via a notion called random variability. The starting point is the Henstock integral ("gauge" integral), a generalisation of the Riemann-Stieltjes integral. This is not an approach via measures on function spaces, though.

2. Cylindrical measures ("promeasures"): The book "Analysis, Manifolds and Physics, Part I: Basics" by Choquet-Bruhat, DeWitt-Morette and Dillard-Bleick, North-Holland, 1996, has a short introduction to this approach in chapter IV.D, and there you can find many references. The construction starts with the definition of a projective system of finite quotient spaces. The promeasure is a projective system of measures on this projective system of quotient spaces. The general Wiener integral towards Feynman diagrams is in the problems and exercises section at the very end of the book.
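The cutoff Euclidean path integral discussed in the answers above can be sampled directly on a time lattice. Here is a minimal Metropolis sketch for a single harmonic oscillator (ad hoc lattice parameters; hbar = m = 1; periodic imaginary time):

```python
import numpy as np

rng = np.random.default_rng(0)

# Lattice Euclidean action: S = sum over sites of
#   (x[i+1] - x[i])**2 / (2*dt) + dt * omega**2 * x[i]**2 / 2.
N, dt, omega, step = 120, 0.25, 1.0, 1.0
x = np.zeros(N)

def local_action(x, i):
    """Terms of the action that involve site i (time is periodic)."""
    left, right = x[(i - 1) % N], x[(i + 1) % N]
    return ((x[i] - left)**2 + (right - x[i])**2) / (2 * dt) \
        + dt * 0.5 * omega**2 * x[i]**2

samples = []
for sweep in range(4000):
    for i in range(N):
        old, s_old = x[i], local_action(x, i)
        x[i] = old + rng.uniform(-step, step)
        ds = local_action(x, i) - s_old
        if ds > 0 and rng.random() >= np.exp(-ds):
            x[i] = old                   # Metropolis rejection
    if sweep > 500:                      # discard thermalisation sweeps
        samples.append(np.mean(x**2))

# Exact ground-state value of <x^2> is 1/(2*omega) = 0.5,
# up to lattice discretisation error of order dt**2.
print(np.mean(samples))
```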
Original Paper, Journal of Mathematical Chemistry, Volume 51, Issue 1, pp 297–315

Entropic representation in the theory of molecular electronic structure

Roman F. Nalewajski, Department of Theoretical Chemistry, Jagiellonian University

The entropic perspective on the molecular electronic structure is investigated. The information-theoretic description of electron probabilities is extended to cover the complex amplitudes (wave functions) of quantum mechanics. This analysis emphasizes the entropic concepts due to the phase part of electronic states, which generates the probability current density, thus allowing one to distinguish the information content of states generating the same electron density and differing in their current densities. The classical information measures of Fisher and Shannon, due to the probability/density distributions themselves, are supplemented by the nonclassical terms generated by the wave-function phase or the associated probability current. A complementary character of the Fisher and Shannon information measures is explored and the relationship between these classical information densities is derived. It is postulated to characterize also their nonclassical (phase/current-dependent) contributions. The continuity equations of the generalized information densities are examined and the associated nonclassical information sources are identified. The variational rules involving the quantum-generalized Shannon entropy, which generate the stationary and time-dependent Schrödinger equations from the relevant maximum entropy principles, are discussed and their implications for the system "thermodynamic" equilibrium states are examined. It is demonstrated that the lowest, stationary "thermodynamic" state differs from the true ground state of the system, by exhibiting the space-dependent phase, linked to the modulus part of the wave function, and hence also a nonvanishing probability current.

Keywords: Electronic structure theory; Information continuity equations; Maximum entropy principle; Nonclassical entropy; Quantum Fisher information; Quantum mechanics; Schrödinger equation; "Thermodynamic" states
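The classical Fisher and Shannon measures that the abstract builds on are easy to evaluate numerically for a model density. A small sketch for a one-dimensional Gaussian, chosen only because both quantities are known in closed form:

```python
import numpy as np

# Shannon entropy S = -int p ln p dx and Fisher information I = int (p')^2/p dx
# for a 1-D Gaussian of width sigma; analytically S = ln(sigma*sqrt(2*pi*e))
# and I = 1/sigma**2.
sigma = 1.3
x = np.linspace(-12, 12, 20001)
p = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

dp = np.gradient(p, x)
shannon = -np.trapz(p * np.log(p), x)
fisher = np.trapz(dp**2 / p, x)

print(shannon, np.log(sigma * np.sqrt(2 * np.pi * np.e)))  # ~1.681 both
print(fisher, 1 / sigma**2)                                # ~0.592 both
```

The nonclassical, phase-dependent terms discussed in the paper would add current-density contributions on top of these purely density-based quantities.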
Back in college I remember coming across a few books in the physics library by Mendel Sachs. Examples are:

General Relativity and Matter
Quantum Mechanics and Gravity
Quantum Mechanics from General Relativity

Here is something on the arXiv involving some of his work. In these books (which I note are also strangely available in most physics department libraries) he describes a program involving re-casting GR using quaternions. He does things that seem remarkable, like deriving QM as a low-energy limit of GR. I don't have the GR background to unequivocally verify or reject his work, but this guy has been around for decades, and I have never found any paper or article that seriously "debunks" any of his work. It just seems like he is ignored. Are there glaring holes in his work? Is he just a complete crackpot? What is the deal?

closed as not constructive by dmckee Jun 1 '13 at 16:49

There are many questions also about the related Geometric Algebra. This type of thing is not physics, but formalism. I have seen the claims about "QM from GR": they derive a quantization rule similar to Bohr-Sommerfeld from a GR-looking thing, and this is total rubbish from the point of view of physics. This part is crackpot, but the part about quaternions is probably empty formalism rather than wrong (although I didn't review it). – Ron Maimon Jan 17 '12 at 5:00

"He does things that seem remarkable like deriving QM as a low-energy limit of GR" ... This sentence looks suspicious; I thought it is the other way round: GR is derivable as the classical low-energy limit from a high-energy quantum mechanical theory of gravity (or quantum gravity for short). – Dilaton May 28 '13 at 14:25

In addition, see here for arguments why quantum mechanics has to use complex variables instead of anything else, so a quantum gravity cannot be based on quaternions either. Physicists know this, and that is probably, among other things, why they ignore such approaches to quantum gravity. – Dilaton May 28 '13 at 14:34

@Dilaton, I didn't type the sentence wrong. That is what he does in his books: QM as the low-energy limit of GR. I'm an experimentalist, so I probably don't have the background to dig into it enough, but I've just never been able to find anything wrong in his books and found it strange I never found any refutation or critical reviews of his works. His logic appears OK by my eye and he seems to have been an actual physicist at a real college, and his books seem to be in all the physics libraries... it's just odd. – user1247 May 28 '13 at 17:56

Just for the notes, I did neither say nor mean that it is user1247 who typed the sentence wrong, but the sentence IS wrong from a physics point of view. – Dilaton May 30 '13 at 17:02

5 Answers

Good question! (I have wondered the same.) I hold Mendel Sachs (deceased 05/05/12) to have been the most astute theoretical physicist since Einstein. His quaternion formalism was, no doubt, exactly what Einstein sought over his last thirty years, to complete GR. And its spinor basis induces me to suspect that Sachs' interpretation of QM, via Einstein's Mach principle, as a covariant field theory of inertia, is also right on the mark. Considering Sachs' volume of output, after much mulling, I finally had to conclude that he was "blacklisted," the establishment not permitting any discussion if they can have anything to do with it! I can see no other way that that quantity -- much less, quality -- of work could have been ignored.
Quantity of work is a poor measure of the work's value. –  Guy Gur-Ari Aug 10 '12 at 1:46

For the users who downvote SJRubenstein's opinion, have the elegance to motivate your vote. He has honestly answered user1247's question, and I see no reason to downvote him but to confirm his view that there are some fanatics out there willing to censor anyone who is not mainstream. –  Shaktyai Sep 9 '12 at 6:20

To evade downvotes you should probably base your answer on physics arguments instead of sociological fuss and personal prejudices. Terms like "establishment" etc. are often used on the internet by crackpots and trolls advertising their own physically inconsistent personal pet theories, to attack professional physicists who know exactly what they are doing. –  Dilaton May 28 '13 at 14:45

To compare this guy, who comes across as having a relatively weak understanding of actual physics, to Einstein, is comical to me... –  Killercam May 31 '13 at 7:36

Mendel Sachs may have been blacklisted, which would certainly be wrong. But his theory has a fatal error. His derivation depends on the assumption that certain 2x2 complex matrices, standing for quaternions, approach the Pauli spin matrices in the limit of zero curvature. This is impossible; the Pauli matrices are not quaternions and the argument collapses.

First of all, if Mendel Sachs does things like deriving QM as a low-energy limit of GR, he has things completely upside down. The fundamental laws of physics are quantum, so quantum mechanics cannot be derived from something else. It is rather the case that general relativity is derivable as the classical low-energy limit from a high-energy quantum-mechanical theory of gravity (or quantum gravity for short). This works, for example, for string theory.

In addition, the only reasonable number system in which to describe quantum mechanics is the complex numbers. Some arguments why quantum mechanics has to use complex variables (instead of real variables) are given here. Complex numbers are needed for the Schrödinger equation to work, to conserve total probabilities, to describe commutators between non-commuting operators (observables), to have plane-wave momentum eigenstates, etc. Generally, important physical operations in quantum mechanics demand that probability amplitudes obey the rules for addition and multiplication of complex numbers; they themselves have to be complex numbers.

In this article describing why quantum mechanics cannot be different from the way it is, some explanations are given of why using larger number systems than the complex numbers to describe quantum mechanics is no good either. Using quaternions, the quaternionic wave function can be reduced to complex building blocks, for example, so going from a complex-number description of quantum mechanics to quaternions introduces nothing new from a physics point of view. Using octonions would be really bad, since octonions have the lethal bug that they are not associative.

So in summary, my reason for being suspicious, or more honestly even dismissive, of Mendel Sachs's work as described here is that he seems to fundamentally misunderstand the relationship between quantum theories and their classical limits. In addition, the only reasonable number system to describe quantum mechanics is the complex numbers, so I agree with Ron Maimon that introducing quaternions would at best be empty formalism.

I disagree that it is obvious that QM is more fundamental than GR.
I think you are tautologically assuming as an axiom that QM cannot be some emergent property of GR. While it is de rigueur to quantize classical theories, it is a mistake to assume that classical theories cannot have QM as an emergent property (on the other hand, modern extensions of Bell's inequalities are increasingly constraining these lines of thought). The parenthetical constraints excepted, I see no a priori reason why QM cannot emerge from GR. In fact the converse maybe has more obvious fundamental problems. –  user1247 May 29 '13 at 20:57

About complex #s vs quaternions, I agree with akhmeteli. I remember when I learned QM from Bohm's book he re-wrote QM without complex #s. Maybe not as compact a formalism, but certainly allowed. You seem to arrive at this understanding in the latter part of your answer, when you agree the quaternion stuff may just be empty formalism. On the other hand, it is not obvious to me that the formalism must be completely empty. Perhaps the quaternion formalism allows a bit more freedom, being homomorphic rather than isomorphic to the complex one, leading to some additional structure. –  user1247 May 29 '13 at 21:07

@user1247 you and Mendel Sachs have it completely wrong. The real-world community of active professional physicists knows that the fundamental laws of nature are quantum and that the classical theories are derived from them as a limit. The question of how QM can be represented mathematically, and what does not work, can be objectively and rigorously evaluated too. Too bad that the voting pattern on this thread converges to represent the opinions and prejudices of people who do not know the stuff well enough, instead of representing the knowledge of the real-world active physicist community ... –  Dilaton May 30 '13 at 16:41

funnily enough, I can't stand many of Lubos' answers. Surely he is competent, I won't dispute that. But he definitely represents one very hard-line perspective about certain things that is not shared by everyone of his caliber. I don't dislike his answers just because he is so arrogant, but more because he doesn't attempt to understand where the questioner is coming from, often seeming almost purposefully obtuse. It would please him more to insult rather than inform. –  user1247 May 30 '13 at 22:06

In any case, despite your appeal to one person's opinion (Lubos), I don't think there is any a priori reason QM is fundamental. It is de rigueur to use that language, and I would use it myself when teaching a class. But that doesn't mean everyone literally thinks it must be fundamental. In fact, that is kind of stupid. Is there some logical proof it must be fundamental? Of course not. And as I pointed out, there are people with Nobels (does Lubos have one?) who work on this stuff (there are also many others in the mainstream in QM foundations). –  user1247 May 30 '13 at 22:09

I don't know much about general relativity, so I have little or nothing to say about M. Sachs' work. However, I'd like to make some remarks on some answers here where Sachs is criticized, and this is how the following is relevant to the question.

For example, I don't quite understand @R S Chakravarti's critique: "the Pauli matrices are not quaternions". It is well known that the Pauli matrices are closely related to quaternions (http://en.wikipedia.org/wiki/Pauli_matrices#Quaternions), so maybe this critique needs some expansion/explanation.
I also respectfully disagree with some of @Dilaton's statements/arguments, e.g., "the only reasonable number system to describe quantum mechanics in are complex numbers". Dilaton refers to L. Motl's arguments; however, the latter can be less than watertight – please see my answer at QM without complex numbers. Maybe eventually we cannot do without complex numbers in quantum theory, but it looks like one needs more sophisticated arguments to prove that.

EDIT (05/31/2013): Dilaton requested that I elaborate on why I question the arguments that seem to prove that one cannot do without complex numbers in quantum theory. Let me describe the constructive results that show that quantum theory can indeed be described using real numbers only, at least in some very general and important cases. I'd like to strongly emphasize that I don't have in mind using pairs of real numbers instead of complex numbers – such use would be trivial.

Schrödinger (Nature (London) 169, 538 (1952)) noted that you can start with a solution of the Klein-Gordon equation for a charged scalar field in an electromagnetic field (the charged scalar field is described by a complex function) and get a physically equivalent solution with a real scalar field using a gauge transform (of course, the four-potential of the electromagnetic field will also be modified compared to the initial four-potential). This is pretty obvious, if you think about it. Schrödinger made the following comment: "That the wave function ... can be made real by a change of gauge is but a truism, though it contradicts the widespread belief about 'charged' fields requiring complex representation."

So it looks like either at least some arguments Dilaton mentioned (referred to) in his answer and comment are not quite watertight, or Schrödinger screwed up somewhere in his one- or two-page-long paper:-) I would appreciate it if someone could enlighten me where exactly he failed:-)

L. Motl offers some arguments related to spin. Furthermore, Schrödinger's approach has no obvious generalization for equations describing a particle with spin, such as the Pauli equation or the Dirac equation, as, in general, one cannot simultaneously make two or more components of a spinor wavefunction real using a gauge transform. Apparently, Schrödinger looked for such a generalization, as he wrote in the same short article: "One is interested in what happens when [the Klein-Gordon equation] is replaced by Dirac's wave equation of 1927, or other first-order equations. This … will be discussed more fully elsewhere." As far as I know, Schrödinger did not publish any sequel to his note in Nature, but, surprisingly, his conclusions can indeed be generalized to the case of the Dirac equation in an electromagnetic field – please see my article http://akhmeteli.org/wp-content/uploads/2011/08/JMAPAQ528082303_1.pdf or http://arxiv.org/abs/1008.4828 (published in the Journal of Mathematical Physics). I show there that, in a general case, 3 out of 4 components of the Dirac spinor can be algebraically eliminated from the Dirac equation, and the remaining component (satisfies a 4th-order PDE and) can be made real by a gauge transform. Therefore, a 4th-order PDE for one real wavefunction is generally equivalent to the Dirac equation and describes the same physics. Therefore, we don't necessarily need complex numbers in quantum theory, at least not in some very important and general cases.

I believe the above constructive examples show that the arguments to the contrary just cannot be watertight.
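To spell out the gauge argument in one line (a minimal sketch with my own conventions, using minimal coupling D_\mu = \partial_\mu + \frac{ie}{\hbar} A_\mu; this is not quoted from Schrödinger's note): write the charged scalar in polar form and gauge away its phase,

\psi = R\, e^{i\varphi}, \quad R = |\psi|, \qquad \psi \rightarrow \psi' = e^{-i\varphi}\psi = R, \qquad A_\mu \rightarrow A'_\mu = A_\mu + \frac{\hbar}{e}\,\partial_\mu\varphi.

The pair (R, A'_\mu) then satisfies the same Klein-Gordon equation and describes the same physics as (\psi, A_\mu), with a purely real field, at least wherever the phase \varphi is smooth and single-valued.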
I don't have time right now to consider each of these arguments separately.

Hi akhmeteli, can you elaborate a bit more exactly, rather than just saying it is generally not "watertight", on which of the arguments I explained you disagree with, and why, from a physics (or mathematical) point of view? To me, the reasoning in the articles I linked to looks perfectly clear and right; I see no error therein. –  Dilaton May 30 '13 at 9:40

@Dilaton: Please see the edit to my answer. –  akhmeteli May 31 '13 at 6:42

There are many formalisms that relate general relativity to quaternions in the literature, and it would be a huge task to disentangle their interrelations and see who cited each other. Quaternions or split quaternions or biquaternions can be related to the Pauli matrices, so it is easy to see how someone might then relate GR to QM. (This does not mean that QM needs to be based on quaternions rather than complex numbers.) All theories that use twistor or spinor formalisms for quantisation of gravity have a similar flavour and could probably be related to the work of MS in some way.

It is unlikely that MS had derived Quantum Field Theory from GR, because GR is a local theory and QFT is non-local. It is possible that he related some formulation of GR to "first quantised" local equations such as the Dirac equation. Notice that in the modern view the Dirac equation is regarded as classical even though it includes spin-half variables and the Planck constant. The distinction between classical and quantum is not as clean as some people like to believe.

I have not studied his work, but I will hazard a guess that his work was not really ignored or debunked. It was just incorporated into other approaches with different interpretations, which may have made it non-obvious that some of his ideas were included. One day, when we know the final theory of physics, there will be lots of science historians who dig through old papers and work out who really had the important ideas first; then perhaps MS will get more credit (if his ideas are part of the final answer and he thought of them first). Until then there is just a big melting pot of ideas that often get reinvented, and the sheer quantity of papers means that if you spend your time reading everything that anyone else has done, you will never make any progress yourself.
Viewpoint: Crystals of Time

Jakub Zakrzewski, Marian Smoluchowski Institute of Physics, Jagiellonian University, 30-059 Krakow, Poland

Physics 5, 116

Researchers propose how to realize time crystals, structures whose lowest-energy states are periodic both in time and space.

Figure 1 (T. Li et al., Phys. Rev. Lett. (2012)): (a) A time crystal has periodic structures both in space and time. Particles arranged in a periodic pattern in space rotate in one direction even at the lowest energy state, determining periodicity in time. (b) An experimental realization of a time crystal proposed by Li et al. uses ultracold ions confined in a ring-shaped trapping potential. The ions form a periodic structure in space and, under a weak magnetic field, they move along the ring, creating a time crystal.

Spontaneous symmetry breaking is ubiquitous in nature. It occurs when the ground state (classically, the lowest energy state) of a system is less symmetrical than the equations governing the system. Examples in which the symmetry is broken in excited states are common—one just needs to think of Kepler's elliptical orbits, which break the spherical symmetry of the gravitational force. But spontaneous symmetry breaking refers instead to a symmetry broken by the lowest energy state of a system. Well-known examples are the Higgs boson (due to the breaking of gauge symmetries), ferromagnets and antiferromagnets, liquid crystals, and superconductors. While most examples come from the quantum world, spontaneous symmetry breaking can also occur in classical systems [1].

Three articles in Physical Review Letters investigate a fascinating manifestation of spontaneous symmetry breaking: the possibility of realizing time crystals, structures whose lowest-energy states are periodic in time, much like ordinary crystals are periodic in space. Alfred Shapere at the University of Kentucky, Lexington, and Frank Wilczek at the Massachusetts Institute of Technology, Cambridge [2], provide the theoretical demonstration that classical time crystals can exist and, in a separate paper, Wilczek [3] extends these ideas to quantum time crystals. Tongcang Li at the University of California, Berkeley, and colleagues [4] propose an experimental realization of quantum time crystals with cold ions trapped in a cylindrical potential.

In nature, the most common manifestation of spontaneous symmetry breaking is the existence of crystals. Here continuous translational symmetry in space is broken and replaced by the discrete symmetry of the periodic crystal. Since we have gotten used to considering space and time on an equal footing, one may ask whether crystalline periodicity can also occur in the dimension of time. Put differently, can time crystals—systems with time-periodic ground states that break translational time symmetry—exist? This is precisely the question asked by Alfred Shapere and Frank Wilczek.

How can one create a time crystal? The key idea of the authors, both for the classical and quantum case, is to search for systems that are spatially ordered and move perpetually in their ground state in an oscillatory or rotational way, as shown in Fig. 1. In the time domain, the system will periodically return to the same initial state. Consider first the classical case.
At first glance, it may seem impossible to find a system in which the lowest-energy state exhibits periodic motion: in classical mechanics the energy minimum is normally found for vanishing derivatives of positions (velocities) and momenta. However, Shapere and Wilczek [2] find a mathematical way out of this impasse. Assuming a nonlinear relation between velocity and momentum, they show that the energy can become a multivalued function of momentum with cusp singularities, with a minimum at nonzero velocities. While this provides a mathematical solution for creating classical time crystals, the authors fall short of identifying candidate systems. It remains to be seen if such an exotic velocity-momentum relation can be engineered in a real system.

For once, the quantum case seems to be easier than its classical counterpart. A number of familiar quantum phenomena almost do the trick, resulting in systems that rotate or oscillate in their lowest energy state. Wilczek suggests the example of a superconducting ring, which can support a permanent current in its ground state under proper conditions. An even closer analogy can be found in a continuous-wave laser. Spontaneous symmetry breaking makes the electric-field amplitude oscillate in time with a well-defined phase [5], almost creating a photonic time crystal. Yet in these systems—so close to being quantum crystals—a key element is missing: the persistent superconducting current and the laser light intensity are constant, not periodically varying, and the translational symmetry in time is not broken.

How can one then add time periodicity to a quantum system? Wilczek argues that this could be done in a system of quantum particles moving along a ring by introducing a mechanism that localizes them. If moving particles can be made to group in ordered "lumps," this would naturally result in temporal periodicity as such lumps travel in a circle. Consider a ring filled with a large number of bosons with attractive interactions between them. If the system is isolated, its ground state is a symmetric state of constant density along the ring. But such a state is fragile: any interaction with the environment or any measurement (e.g., the determination of the position of an individual particle) makes the system collapse into a well-localized state along the ring, causing spontaneous symmetry breaking in space. Such localization can form a so-called soliton [6], a solution of the nonlinear Schrödinger equation that describes such a system. Wilczek's insight is that an applied magnetic field, perpendicular to the ring, will cause the soliton to move. The resulting periodic motion would create a time crystal.

Wilczek does not address the problem of how to engineer such a system. But possible simple solutions come to mind. One could use cold neutral atoms with weak mutual attraction and exploit atom-laser interactions to create forces that mimic a magnetic field. Such a scheme to create an artificial effective magnetic field has already been realized in the laboratory [7]. An even simpler possibility is to stir an atomic ensemble while it is cooled towards Bose-Einstein condensation in an appropriate ring-shaped trap. Indeed, a stirring laser beam was previously used to create vortices in a condensate held in a magnetic trap [8]. Here, the stirring laser would introduce a rotation into the system, driving the soliton's movement.

The article by Li et al. [4] provides the detailed description of an experiment that seems to be feasible.
The scheme is based on beryllium ions trapped in a ring-shaped potential at nanokelvin temperatures. As a consequence of mutual Coulomb repulsion, the ions arrange periodically in space, forming a ring crystal. Similar geometries have already been demonstrated by the group of David Wineland [9]. Li and co-workers show that the addition of a weak magnetic field perpendicular to the ring would lead to the rotation of the spatially periodic ring-crystal structure, thus creating a time crystal. Similarly to Wilczek's model, spontaneous symmetry breaking of the rotational degree of freedom, through circular movement, is translated into breaking of translational time invariance.

Time crystals may sound dangerously close to a perpetual motion machine, but it is worth emphasizing one key difference: while time crystals would indeed move periodically in an eternal loop, the rotation occurs in the ground state, with no work being carried out nor any usable energy being extracted from the system. Finding time crystals would not amount to a violation of well-established principles of thermodynamics.

If they can be created, time crystals may have intriguing applications, from precise timekeeping to the simulation of ground states in quantum computing schemes. But they may be much more than advanced devices. Could the postulated cyclic evolution of the Universe be seen as a manifestation of spontaneous symmetry breaking akin to that of a time crystal? If so, who is the observer inducing—by a measurement—the breaking of the symmetry of time?

References

1. F. Strocchi, Symmetry Breaking, Lecture Notes in Physics (Springer, Heidelberg, 2008)
2. A. Shapere and F. Wilczek, "Classical Time Crystals," Phys. Rev. Lett. 109, 160402 (2012)
3. F. Wilczek, "Quantum Time Crystals," Phys. Rev. Lett. 109, 160401 (2012)
4. T. Li, Z-X. Gong, Z-Q. Yin, H. T. Quan, X. Yin, P. Zhang, L-M. Duan, and X. Zhang, "Space-Time Crystals of Trapped Ions," Phys. Rev. Lett. 109, 163001 (2012)
5. H. Haken, Synergetics: An Introduction (Springer-Verlag, Berlin, 1977)
6. R. Kanamoto, H. Saito, and M. Ueda, "Critical Fluctuations in a Soliton Formation of Attractive Bose-Einstein Condensates," Phys. Rev. A 73, 033611 (2006)
7. Y.-J. Lin, R. L. Compton, K. Jiménez-García, J. V. Porto, and I. B. Spielman, "Synthetic Magnetic Fields for Ultracold Neutral Atoms," Nature 462, 628 (2009)
8. K. W. Madison, F. Chevy, W. Wohlleben, and J. Dalibard, "Vortex Formation in a Stirred Bose-Einstein Condensate," Phys. Rev. Lett. 84, 806 (2000)
9. M. G. Raizen, J. M. Gilligan, J. C. Bergquist, W. M. Itano, and D. J. Wineland, "Ionic Crystals in a Linear Paul Trap," Phys. Rev. A 45, 6493 (1992)

About the Author

Jakub Zakrzewski is a Full Professor and the Head of the Atomic Optics Department at the Marian Smoluchowski Institute of Physics, Jagiellonian University in Krakow, Poland. He also leads QuantLab at the Mark Kac Complex Systems Research Centre. Over the years his research has explored quantum optics, laser theory, quantum chaos in atomic systems, and cold gases in optical lattices, especially in the presence of disorder. He worked at the University of Southern California, Los Angeles, and spent several years at the Laboratoire Kastler Brossel of the École Normale Supérieure and University of Paris 6.
Maximal theorems and Calderón-Zygmund type decompositions for the fractional maximal function

by Kuznetsov, Evgeny, PhD

Abstract (Summary)

A very significant role in the estimation of different operators in analysis is played by the Hardy-Littlewood maximal function. There are many papers dedicated to the study of its properties, its variants, and their applications. One of the important variants of the Hardy-Littlewood maximal function is the so-called fractional maximal function, which is deeply connected to the Riesz potential operator. The main goal of the thesis is to establish analogues of some important properties of the Hardy-Littlewood maximal function for the fractional maximal function.

In 1930 Hardy and Littlewood proved a remarkable result, known as the Hardy-Littlewood maximal theorem. A problem therefore naturally arose: what is the analogue of the Hardy-Littlewood maximal theorem for the fractional maximal function? In the thesis we give an answer to this problem. In particular, we show that the so-called Hausdorff capacity and the Morrey spaces, introduced by C. Morrey in 1938 in connection with some problems in elliptic partial differential equations and the theory of variations, naturally appear here. Moreover, Morrey spaces have recently found important applications in connection with the Navier-Stokes and Schrödinger equations, elliptic problems with discontinuous coefficients, and potential theory. The Hardy-Littlewood maximal theorem is deeply connected with the Stein-Wiener and Riesz-Herz equivalences. Analogues of these equivalences for the fractional maximal function are also given.

In 1971 C. Fefferman and E. Stein, by using the Calderón-Zygmund decomposition, obtained a generalization of the maximal theorem of Hardy-Littlewood for a sequence of functions. This result of Fefferman and Stein has found many important applications in harmonic analysis and its applications, e.g. in signal processing. In the thesis we give an analogue of one part of the Fefferman-Stein maximal theorem for the fractional maximal operator.

In 1952 A. Calderón and A. Zygmund published the paper "On the Existence of Certain Singular Integrals", which has had a significant influence on analysis over the last 50 years. One of the main new tools used by A. Calderón and A. Zygmund was a special family of decompositions of a given function into its "good" and "bad" parts. This decomposition provides a multidimensional substitute for the famous "sunrise" lemma of F. Riesz, and it was used to prove a weak-type estimate for singular integrals. Furthermore, we want to emphasize that Calderón-Zygmund type decompositions have played an important and sometimes crucial role in the proofs of many fundamental results, such as the John-Nirenberg inequality, the theory of Ap-weights, the Fefferman-Stein maximal theorem, etc. In the thesis it is shown that it is possible to construct an analogue of the Calderón-Zygmund decomposition for the Morrey spaces.

Bibliographical Information:
School: Luleå tekniska universitet
School Location: Sweden
Source Type: Doctoral Dissertation
Date of Publication: 01/01/2005
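For orientation, the operators discussed in the abstract are usually defined as follows (a standard normalization is assumed here; the thesis may use a slightly different one). For 0 \le \alpha < n and a locally integrable function f on \mathbb{R}^n, the fractional maximal function is

M_\alpha f(x) = \sup_{r>0} \, r^{\alpha} \, \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)| \, dy,

so that M_0 is the classical Hardy-Littlewood maximal function, while the Riesz potential is

I_\alpha f(x) = \int_{\mathbb{R}^n} \frac{f(y)}{|x-y|^{n-\alpha}} \, dy, \qquad 0 < \alpha < n,

up to a dimensional constant. The pointwise estimate M_\alpha f(x) \le C \, I_\alpha(|f|)(x) is one concrete form of the deep connection between the fractional maximal function and the Riesz potential operator mentioned above.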
Chandrasekhar limit

The Chandrasekhar limit is the maximum mass of a stable white dwarf star. The limit was first indicated in papers published by Wilhelm Anderson and E. C. Stoner, and was named after Subrahmanyan Chandrasekhar, the Indian astrophysicist who independently discovered and improved upon the accuracy of the calculation in 1930, at the age of 19, in India. This limit was initially ignored by the community of scientists because such a limit would logically require the existence of black holes, which were considered a scientific impossibility at the time.

White dwarfs resist gravitational collapse primarily through electron degeneracy pressure. (By comparison, main sequence stars resist collapse through thermal pressure.) The Chandrasekhar limit is the mass above which electron degeneracy pressure in the star's core is insufficient to balance the star's own gravitational self-attraction. Consequently, white dwarfs with masses greater than the limit would be subject to further gravitational collapse, evolving into a different type of stellar remnant, such as a neutron star or black hole. (However, white dwarfs generally avoid this fate by exploding before they undergo collapse.) Those with masses under the limit remain stable as white dwarfs.[1] The currently accepted value of the limit is about 1.39 M_\odot (2.765 × 10^{30} kg).[2][3][4]

Electron degeneracy pressure is a quantum-mechanical effect arising from the Pauli exclusion principle. Since electrons are fermions, no two electrons can be in the same state, so not all electrons can be in the minimum-energy level. Rather, electrons must occupy a band of energy levels. Compression of the electron gas increases the number of electrons in a given volume and raises the maximum energy level in the occupied band. Therefore, the energy of the electrons will increase upon compression, so pressure must be exerted on the electron gas to compress it, producing electron degeneracy pressure. With sufficient compression, electrons are forced into nuclei in the process of electron capture, relieving the pressure.

Figure: Radius–mass relations for a model white dwarf. The green curve uses the general pressure law for an ideal Fermi gas, while the blue curve is for a non-relativistic ideal Fermi gas. The black line marks the ultrarelativistic limit.

In the nonrelativistic case, electron degeneracy pressure gives rise to an equation of state of the form P = K_1 \rho^{5/3}, where P is the pressure, \rho is the mass density, and K_1 is a constant. Solving the hydrostatic equation then leads to a model white dwarf which is a polytrope of index 3/2, and which therefore has radius inversely proportional to the cube root of its mass, and volume inversely proportional to its mass.[5]

As the mass of a model white dwarf increases, the typical energies to which degeneracy pressure forces the electrons are no longer negligible relative to their rest masses. The velocities of the electrons approach the speed of light, and special relativity must be taken into account. In the strongly relativistic limit, the equation of state takes the form P = K_2 \rho^{4/3}. This yields a polytrope of index 3, which has a total mass, M_{\rm limit} say, depending only on K_2.[6]

For a fully relativistic treatment, the equation of state used will interpolate between the equations P = K_1 \rho^{5/3} for small \rho and P = K_2 \rho^{4/3} for large \rho.
When this is done, the model radius still decreases with mass, but becomes zero at M_{\rm limit}. This is the Chandrasekhar limit.[7] The curves of radius against mass for the non-relativistic and relativistic models are shown in the graph. They are colored blue and green, respectively. \mu_e has been set equal to 2. Radius is measured in standard solar radii[8] or kilometers, and mass in standard solar masses.

Calculated values for the limit will vary depending on the nuclear composition of the mass.[9] Chandrasekhar ([10], eq. 36; [7], eq. 58; [11], eq. 43) gives the following expression, based on the equation of state for an ideal Fermi gas:

M_{\rm limit} = \frac{\omega_3^0 \sqrt{3\pi}}{2}\left( \frac{\hbar c}{G}\right)^{3/2}\frac{1}{(\mu_e m_H)^2},

where \hbar is the reduced Planck constant, c is the speed of light, G is the gravitational constant, \mu_e is the average molecular weight per electron, m_H is the mass of the hydrogen atom, and \omega_3^0 \approx 2.018 is a constant connected with the solution to the Lane–Emden equation. As \sqrt{\hbar c/G} is the Planck mass M_{\rm Pl}, the limit is of the order of M_{\rm Pl}^3/m_H^2.

A more accurate value of the limit than that given by this simple model requires adjusting for various factors, including electrostatic interactions between the electrons and nuclei and effects caused by nonzero temperature.[9] Lieb and Yau[12] have given a rigorous derivation of the limit from a relativistic many-particle Schrödinger equation.

In 1926, the British physicist Ralph H. Fowler observed that the relationship among the density, energy, and temperature of white dwarfs could be explained by viewing them as a gas of nonrelativistic, non-interacting electrons and nuclei which obey Fermi–Dirac statistics.[13] This Fermi gas model was then used by the British physicist E. C. Stoner in 1929 to calculate the relationship among the mass, radius, and density of white dwarfs, assuming them to be homogeneous spheres.[14] Wilhelm Anderson applied a relativistic correction to this model, giving rise to a maximum possible mass of approximately 1.37 × 10^{30} kg.[15] In 1930, Stoner derived the internal energy-density equation of state for a Fermi gas, and was then able to treat the mass-radius relationship in a fully relativistic manner, giving a limiting mass of approximately 2.19 × 10^{30} kg (for \mu_e = 2.5).[16] Stoner went on to derive the pressure-density equation of state, which he published in 1932.[17] These equations of state were also previously published by the Soviet physicist Yakov Frenkel in 1928, together with some other remarks on the physics of degenerate matter.[18] Frenkel's work, however, was ignored by the astronomical and astrophysical community.[19]

A series of papers published between 1931 and 1935 had its beginning on a trip from India to England in 1930, where the Indian physicist Subrahmanyan Chandrasekhar worked on the calculation of the statistics of a degenerate Fermi gas.[20] In these papers, Chandrasekhar solved the hydrostatic equation together with the nonrelativistic Fermi gas equation of state,[5] and also treated the case of a relativistic Fermi gas, giving rise to the value of the limit shown above.[6][7][10][21] Chandrasekhar reviews this work in his Nobel Prize lecture.[11] This value was also computed in 1932 by the Soviet physicist Lev Davidovich Landau,[22] who, however, did not apply it to white dwarfs.

Chandrasekhar's work on the limit aroused controversy, owing to the opposition of the British astrophysicist Arthur Stanley Eddington. Eddington was aware that the existence of black holes was theoretically possible, and also realized that the existence of the limit made their formation possible. However, he was unwilling to accept that this could happen.
After a talk by Chandrasekhar on the limit in 1935, he replied:

The star has to go on radiating and radiating and contracting and contracting until, I suppose, it gets down to a few km radius, when gravity becomes strong enough to hold in the radiation, and the star can at last find peace. … I think there should be a law of Nature to prevent a star from behaving in this absurd way! — [23]

Eddington's proposed solution to the perceived problem was to modify relativistic mechanics so as to make the law P = K_1 \rho^{5/3} universally applicable, even for large \rho.[24] Although Bohr, Fowler, Pauli, and other physicists agreed with Chandrasekhar's analysis, at the time, owing to Eddington's status, they were unwilling to publicly support Chandrasekhar ([25], pp. 110–111). Through the rest of his life, Eddington held to his position in his writings,[26][27][28][29][30] including his work on his fundamental theory.[31] The drama associated with this disagreement is one of the main themes of Empire of the Stars, Arthur I. Miller's biography of Chandrasekhar.[25] In Miller's view:

Chandra's discovery might well have transformed and accelerated developments in both physics and astrophysics in the 1930s. Instead, Eddington's heavy-handed intervention lent weighty support to the conservative community of astrophysicists, who steadfastly refused even to consider the idea that stars might collapse to nothing. As a result, Chandra's work was almost forgotten. — p. 150, [25]

The core of a star is kept from collapsing by the heat generated by the fusion of nuclei of lighter elements into heavier ones. At various stages of stellar evolution, the nuclei required for this process will be exhausted, and the core will collapse, causing it to become denser and hotter. A critical situation arises when iron accumulates in the core, since iron nuclei are incapable of generating further energy through fusion. If the core becomes sufficiently dense, electron degeneracy pressure will play a significant part in stabilizing it against gravitational collapse.[32]

If a main-sequence star is not too massive (less than approximately 8 solar masses), it will eventually shed enough mass to form a white dwarf having mass below the Chandrasekhar limit, which will consist of the former core of the star. For more-massive stars, electron degeneracy pressure will not keep the iron core from collapsing to very great density, leading to formation of a neutron star, black hole, or, speculatively, a quark star. (For very massive, low-metallicity stars, it is also possible that instabilities will destroy the star completely.)[33][34][35][36] During the collapse, neutrons are formed by the capture of electrons by protons in the process of electron capture, leading to the emission of neutrinos ([32], pp. 1046–1047). The decrease in gravitational potential energy of the collapsing core releases a large amount of energy, on the order of 10^{46} joules (100 foes). Most of this energy is carried away by the emitted neutrinos.[37] This process is believed to be responsible for supernovae of types Ib, Ic, and II.[32]

Type Ia supernovae derive their energy from runaway fusion of the nuclei in the interior of a white dwarf. This fate may befall carbon-oxygen white dwarfs that accrete matter from a companion giant star, leading to a steadily increasing mass. As the white dwarf's mass approaches the Chandrasekhar limit, its central density increases, and, as a result of compressional heating, its temperature also increases.
This eventually ignites nuclear fusion reactions, leading to an immediate carbon detonation, which disrupts the star and causes the supernova ([38], §5.1.2). A strong indication of the reliability of Chandrasekhar's formula is that the absolute magnitudes of supernovae of Type Ia are all approximately the same; at maximum luminosity, M_V is approximately −19.3, with a standard deviation of no more than 0.3 ([38], eq. 1). A 1-sigma interval therefore represents a factor of less than 2 in luminosity. This seems to indicate that all type Ia supernovae convert approximately the same amount of mass to energy.

Super-Chandrasekhar mass supernovae

Main article: Champagne Supernova

In April 2003, the Supernova Legacy Survey observed a type Ia supernova, designated SNLS-03D3bb, in a galaxy approximately 4 billion light years away. According to a group of astronomers at the University of Toronto and elsewhere, the observations of this supernova are best explained by assuming that it arose from a white dwarf which grew to twice the mass of the Sun before exploding. They believe that the star, dubbed the "Champagne Supernova" by University of Oklahoma astronomer David R. Branch, may have been spinning so fast that centrifugal force allowed it to exceed the limit. Alternatively, the supernova may have resulted from the merger of two white dwarfs, so that the limit was only violated momentarily. Nevertheless, they point out that this observation poses a challenge to the use of type Ia supernovae as standard candles.[39][40][41]

Since the observation of the Champagne Supernova in 2003, more very bright type Ia supernovae have been observed that are thought to have originated from white dwarfs whose masses exceeded the Chandrasekhar limit. These include SN 2006gz, SN 2007if, and SN 2009dc.[42] The super-Chandrasekhar mass white dwarfs that gave rise to these supernovae are believed to have had masses up to 2.4–2.8 solar masses.[42] One way to potentially explain the problem of the Champagne Supernova was considering it the result of an aspherical explosion of a white dwarf. However, spectropolarimetric observations of SN 2009dc showed it had a polarization smaller than 0.3, making the large-asphericity theory unlikely.[42]

Tolman–Oppenheimer–Volkoff limit

After a supernova explosion, a neutron star may be left behind. Like white dwarfs, these objects are extremely compact and are supported by degeneracy pressure, but a neutron star is so massive and compressed that electrons and protons have combined to form neutrons, and the star is thus supported by neutron degeneracy pressure instead of electron degeneracy pressure. The limit of neutron degeneracy pressure, analogous to the Chandrasekhar limit, is known as the Tolman–Oppenheimer–Volkoff limit.

References

1. ^ Sean Carroll, Ph.D., Cal Tech, 2007, The Teaching Company, Dark Matter, Dark Energy: The Dark Side of the Universe, Guidebook Part 2, page 44, accessed Oct. 7, 2013: "...Chandrasekhar limit: The maximum mass of a white dwarf star, about 1.4 times the mass of the Sun. Above this mass, the gravitational pull becomes too great, and the star must collapse to a neutron star or black hole..."

2. ^ Hawking, S. W.; Israel, W., eds. (1989). Three Hundred Years of Gravitation (1st pbk. ed., with corrections). Cambridge: Cambridge University Press. ISBN 0-521-37976-8.

3. ^ p. 55, How A Supernova Explodes, Hans A. Bethe and Gerald Brown, pp.
51–62 in Formation And Evolution of Black Holes in the Galaxy: Selected Papers with Commentary, Hans Albrecht Bethe, Gerald Edward Brown, and Chang-Hwan Lee, River Edge, New Jersey: World Scientific, 2003. ISBN 981-238-250-X.

5. ^ a b The Density of White Dwarf Stars, S. Chandrasekhar, Philosophical Magazine (7th series) 11 (1931), pp. 592–596.

6. ^ a b The Maximum Mass of Ideal White Dwarfs, S. Chandrasekhar, Astrophysical Journal 74 (1931), pp. 81–82.

7. ^ a b c The Highly Collapsed Configurations of a Stellar Mass (second paper), S. Chandrasekhar, Monthly Notices of the Royal Astronomical Society 95 (1935), pp. 207–225.

8. ^ Standards for Astronomical Catalogues, Version 2.0, section 3.2.2, web page, accessed 12-I-2007.

9. ^ a b The Neutron Star and Black Hole Initial Mass Function, F. X. Timmes, S. E. Woosley, and Thomas A. Weaver, Astrophysical Journal 457 (February 1, 1996), pp. 834–843.

10. ^ a b The Highly Collapsed Configurations of a Stellar Mass, S. Chandrasekhar, Monthly Notices of the Royal Astronomical Society 91 (1931), pp. 456–466.

11. ^ a b On Stars, Their Evolution and Their Stability, Nobel Prize lecture, Subrahmanyan Chandrasekhar, December 8, 1983.

12. ^ A rigorous examination of the Chandrasekhar theory of stellar collapse, Elliott H. Lieb and Horng-Tzer Yau, Astrophysical Journal 323 (1987), pp. 140–144.

14. ^ The Limiting Density of White Dwarf Stars, Edmund C. Stoner, Philosophical Magazine (7th series) 7 (1929), pp. 63–70.

15. ^ Über die Grenzdichte der Materie und der Energie, Wilhelm Anderson, Zeitschrift für Physik 56, #11–12 (November 1929), pp. 851–856. DOI 10.1007/BF01340146.

16. ^ The Equilibrium of Dense Stars, Edmund C. Stoner, Philosophical Magazine (7th series) 9 (1930), pp. 944–963.

17. ^ The minimum pressure of a degenerate electron gas, E. C. Stoner, Monthly Notices of the Royal Astronomical Society 92 (May 1932), pp. 651–661.

18. ^ Anwendung der Pauli-Fermischen Elektronengastheorie auf das Problem der Kohäsionskräfte, J. Frenkel, Zeitschrift für Physik 50, #3–4 (March 1928), pp. 234–248. DOI 10.1007/BF01328867.

19. ^ The article by Ya I Frenkel' on 'binding forces' and the theory of white dwarfs, D. G. Yakovlev, Physics Uspekhi 37, #6 (1994), pp. 609–612.

20. ^ Chandrasekhar's biographical memoir at the National Academy of Sciences, web page, accessed 12-I-2007.

21. ^ Stellar Configurations with degenerate Cores, S. Chandrasekhar, The Observatory 57 (1934), pp. 373–377.

22. ^ On the Theory of Stars, in Collected Papers of L. D. Landau, ed. and with an introduction by D. ter Haar, New York: Gordon and Breach, 1965; originally published in Phys. Z. Sowjet. 1 (1932), 285.

23. ^ Meeting of the Royal Astronomical Society, Friday, 1935 January 11, The Observatory 58 (February 1935), pp. 33–41.

24. ^ On "Relativistic Degeneracy", Sir A. S. Eddington, Monthly Notices of the Royal Astronomical Society 95 (1935), pp. 194–206.

25. ^ a b c Empire of the Stars: Obsession, Friendship, and Betrayal in the Quest for Black Holes, Arthur I. Miller, Boston, New York: Houghton Mifflin, 2005, ISBN 0-618-34151-X; reviewed at The Guardian: The battle of black holes.

26. ^ The International Astronomical Union meeting in Paris, 1935, The Observatory 58 (September 1935), pp. 257–265, at p. 259.

27. ^ Note on "Relativistic Degeneracy", Sir A. S. Eddington, Monthly Notices of the Royal Astronomical Society 96 (November 1935), pp. 20–21.

28. ^ The Pressure of a Degenerate Electron Gas and Related Problems, Arthur Eddington, Proceedings of the Royal Society of London.
Series A, Mathematical and Physical Sciences 152 (November 1, 1935), pp. 253–272.

29. ^ Relativity Theory of Protons and Electrons, Sir Arthur Eddington, Cambridge: Cambridge University Press, 1936, chapter 13.

30. ^ The physics of white dwarf matter, Sir A. S. Eddington, Monthly Notices of the Royal Astronomical Society 100 (June 1940), pp. 582–594.

31. ^ Fundamental Theory, Sir A. S. Eddington, Cambridge: Cambridge University Press, 1946, §43–45.

32. ^ a b c The evolution and explosion of massive stars, S. E. Woosley, A. Heger, and T. A. Weaver, Reviews of Modern Physics 74, #4 (October 2002), pp. 1015–1071.

33. ^ White dwarfs in open clusters. VIII. NGC 2516: a test for the mass-radius and initial-final mass relations, D. Koester and D. Reimers, Astronomy and Astrophysics 313 (1996), pp. 810–814.

34. ^ An Empirical Initial-Final Mass Relation from Hot, Massive White Dwarfs in NGC 2168 (M35), Kurtis A. Williams, M. Bolte, and Detlev Koester, Astrophysical Journal 615, #1 (2004), pp. L49–L52; also arXiv astro-ph/0409447.

35. ^ How Massive Single Stars End Their Life, A. Heger, C. L. Fryer, S. E. Woosley, N. Langer, and D. H. Hartmann, Astrophysical Journal 591, #1 (2003), pp. 288–300.

36. ^ Strange quark matter in stars: a general overview, Jürgen Schaffner-Bielich, Journal of Physics G: Nuclear and Particle Physics 31, #6 (2005), pp. S651–S657; also arXiv astro-ph/0412215.

37. ^ The Physics of Neutron Stars, J. M. Lattimer and M. Prakash, Science 304, #5670 (2004), pp. 536–542; also arXiv astro-ph/0405262.

38. ^ a b Type IA Supernova Explosion Models, Wolfgang Hillebrandt and Jens C. Niemeyer, Annual Review of Astronomy and Astrophysics 38 (2000), pp. 191–230.

39. ^ The weirdest Type Ia supernova yet, LBL press release, web page, accessed 13-I-2007.

40. ^ Champagne Supernova Challenges Ideas about How Supernovae Work, web page, accessed 13-I-2007.

41. ^ The type Ia supernova SNLS-03D3bb from a super-Chandrasekhar-mass white dwarf star, D. Andrew Howell et al., Nature 443 (September 21, 2006), pp. 308–311; also arXiv:astro-ph/0609616.

42. ^ a b c Hachisu, Izumi; Kato, M.; et al. (2012). "A single degenerate progenitor model for type Ia supernovae highly exceeding the Chandrasekhar mass limit". The Astrophysical Journal 744 (1): 76–79 (Article ID 69). arXiv:1106.3510. Bibcode:2012ApJ...744...69H. doi:10.1088/0004-637X/744/1/69.
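As a quick numerical sanity check of Chandrasekhar's expression quoted above, here is a minimal sketch (mine, not part of the article; standard SI constants, with \mu_e = 2 and \omega_3^0 \approx 2.018 assumed):

import math

# Physical constants (SI), standard to about four significant figures.
hbar = 1.0546e-34    # reduced Planck constant, J s
c = 2.9979e8         # speed of light, m/s
G = 6.6743e-11       # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.6735e-27     # mass of the hydrogen atom, kg
M_sun = 1.989e30     # solar mass, kg

mu_e = 2.0           # average molecular weight per electron (C/O composition)
omega_3_0 = 2.0182   # constant from the n = 3 Lane-Emden solution

# M_limit = (omega_3_0 * sqrt(3*pi) / 2) * (hbar*c/G)^(3/2) / (mu_e*m_H)^2
M_limit = (omega_3_0 * math.sqrt(3.0 * math.pi) / 2.0) \
          * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2

print(f"M_limit = {M_limit:.3e} kg = {M_limit / M_sun:.2f} solar masses")
# Prints roughly 2.85e30 kg, i.e. about 1.4 solar masses,
# consistent with the value quoted at the top of the article.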
Molecules from scratch without the fiendish physics

"Molecules from scratch without the fiendish physics" by Lisa Grossman. From the post:

But because the equation increases in complexity as more electrons and protons are introduced, exact solutions only exist for the simplest systems: the hydrogen atom, composed of one electron and one proton, and the hydrogen molecule, which has two electrons and two protons.

This complexity rules out the possibility of exactly predicting the properties of large molecules that might be useful for engineering or medicine. "It's out of the question to solve the Schrödinger equation to arbitrary precision for, say, aspirin," says von Lilienfeld.

So he and his colleagues bypassed the fiendish equation entirely and turned instead to a computer-science technique. Machine learning is already widely used to find patterns in large data sets with complicated underlying rules, including stock market analysis, ecology and Amazon's personalised book recommendations. An algorithm is fed examples (other shoppers who bought the book you're looking at, for instance) and the computer uses them to predict an outcome (other books you might like). "In the same way, we learn from molecules and use them as previous examples to predict properties of new molecules," says von Lilienfeld.

His team focused on a basic property: the energy tied up in all the bonds holding a molecule together, the atomisation energy. The team built a database of 7165 molecules with known atomisation energies and structures. The computer used 1000 of these to identify structural features that could predict the atomisation energies. When the researchers tested the resulting algorithm on the remaining 6165 molecules, it produced atomisation energies within 1 per cent of the true value. That is comparable to the accuracy of mathematical approximations of the Schrödinger equation, which work but take longer to calculate as molecules get bigger (Physical Review Letters, DOI: 10.1103/PhysRevLett.108.058301). (emphasis added)

One way to look at this research is to say we have three avenues to discovering the properties of molecules:

1. Formal logic – but it would require far more knowledge than we have at the moment
2. Schrödinger equation – but that may be intractable for some molecules
3. Knowledge-based approach – may be less precise than 1 & 2, but works now

A knowledge-based approach allows us to make progress now. Topic maps can be annotated with other methods, such as math or research results, up to and including formal logic. The biggest difference with topic maps is that the information you wish to record or act upon is not restricted ahead of time.
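To make the quoted machine-learning step concrete, here is a minimal sketch of that kind of model: kernel ridge regression on fixed-length molecular descriptors, in the spirit of the cited PRL (which used sorted Coulomb-matrix eigenvalues). The random data below merely stands in for real descriptors and atomisation energies, so every number here is a placeholder:

import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Placeholder data: 7165 "molecules", each reduced to a fixed-length
# descriptor vector; a synthetic linear-plus-noise target stands in
# for the atomisation energies.
n_molecules, n_features = 7165, 23
X = rng.normal(size=(n_molecules, n_features))
y = X @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_molecules)

# Train on 1000 molecules, as in the study; predict the remaining 6165.
X_train, y_train = X[:1000], y[:1000]
X_test, y_test = X[1000:], y[1000:]

# Kernel ridge regression with a Laplacian kernel, a common choice for
# such descriptors; alpha and gamma would be tuned by cross-validation
# in a real application.
model = KernelRidge(kernel="laplacian", alpha=1e-6, gamma=1e-2)
model.fit(X_train, y_train)

mae = np.mean(np.abs(model.predict(X_test) - y_test))
print(f"mean absolute error on the 6165 held-out molecules: {mae:.3f}")

On real descriptors the reported accuracy was about 1 per cent of the true atomisation energy; on this synthetic stand-in only the workflow, not the number, is meaningful.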
There's something wrong about the diffusion equation—but what exactly is it?

As promised last time, let me try to give you a "layman's" version of the trouble about the diffusion equation.

1. Physical Situations Involving Diffusion

First of all, we need some good physical situations that illustrate the phenomenon of diffusion, in particular the simplest linear 1D diffusion equation:

\alpha \dfrac{\partial^2 u}{\partial x^2} = \dfrac{\partial u}{\partial t}

Here is a list of such models:

• Think of a long metal railing, which has got cold on a winter morning. [I said winter, and not December. No special treatment for Aussies and others from the southern hemisphere.] Heat the mid-point of the railing using a candle or a soldering iron. The heat propagates in the rod, increasing temperatures at various points, which can be measured using thermocouples. Ignoring higher-order (wave/shock) effects, the conduction of heat can be taken to follow the above-mentioned simple diffusion equation.

• Think of a container having two compartments separated by a wall which carries a small hole. The entire container is filled with air (say, 1 atm pressure at 25 degrees Celsius), and then an electromechanical shutter closes down the hole in the internal wall. Then, place an opened bottle of scent in one of the compartments, say that on the left-hand side. Allow for some time to elapse so that the scent spreads practically evenly everywhere in that compartment. (If you imagine having a fan in that compartment, you must also imagine it being switched off and the air-flow becoming still on the macro-scale.) Now, open the internal hole, and sense the strength of the scent at various points in the right-hand side compartment, at regular time-intervals. [I was being extra careful in writing this model, because the diffusion here can be directly modelled using the kinetic theory of gases.]

• Take a kitchen sponge of fine porosity, and dip it into a bucket of water, thus letting it fully soak in the water. Now, keep the sponge on a table. Take a flat piece of transparent glass, and place it vertically next to the sponge, touching it gently. Then, place a drop of ink at a point on the top surface of the sponge, right next to the glass. Observe the flow of ink through the sponge.

Even if this post is meant for "layman" engineers/physicists who have already studied this topic, I deliberately started with concrete physical examples. It helps freshen up the physical thinking, and thereby helps ground the mathematical thinking. (I always believe that, by way of logical hierarchy, the physical thought comes before the mathematical thought does. Before you can measure something, you have to know what it is that you are measuring; the what precedes the how.)

2. Mathematical Techniques Available to Solve the Diffusion Equation

Now, on to the mathematical techniques available to solve the above-mentioned diffusion equation. Here is a fairly comprehensive (even if perhaps not exhaustive) list of the usual techniques:

• Spectral:
  • Analytical: The classical Fourier theory. Expand the initial condition in terms of a Fourier series (or, for an infinitely extended domain, a Fourier integral), and find the time evolution using separation of variables.
  • Numerical: Discretize the domain and the initial condition, and also the time dimension. Use FFT to numerically compute the Fourier evolution. (If you are smart: chuck out the FFT implementation you wrote by yourself, and start using FFTW.)
• Usual Numerical Methods:
  • FEM: Weak formulation.
  • FVM: Flux-conservation formulation.
  • FDM: Based on the Taylor series expansion. For a 1D structured grid, it produces the same system as FEM.
• The "Unusual" Numerical Method—the Local Finite Differences: Discretize the time-axis using the Taylor series expansion (as in FDM). On the space side, it's slightly different from FDM. Check out p. 15 of Ref [1]. Practically speaking, almost no one models the diffusion equation this way. However, we include it at this place to provide a neat progression in the nature of the techniques. If it helps, note that this technique essentially works as a CA (cellular automaton).
• The Stochastic Methods:
  • Brownian movement: By which I mean Einstein's analysis of it; Ref. [3]. BTW, the original paper is surprisingly easy to understand. In fact, even the best textbook expositions of it (e.g. Huang's Statistical Physics book) tend to drop a crucial observation made in the original paper. (In fact, even Einstein himself didn't pay any further attention to it, right in the same paper. It was easier to spot it in the original paper. More on this, below, or later.)
  • The random walk (RW)/Monte Carlo (MC)/numerical Brownian movement: For our limited purposes (focusing on the simple and the basic things), the three amount to one and the same thing.

The Solution Techniques and the Issue of the Instantaneous Action at a Distance

Now go over the list again, this time figuring out on which side each technique falls, what basic premise it (implicitly or explicitly) assumes: does it fall on the side of a compact support for the solution, or not.

Here is the quick run-down: All the spectral methods and all the usual numerical methods involve solution support extended over the entire domain (finite or infinite). The unusual numerical method of local finite differences involves a compact support. The traditional analysis of Brownian movement is confused about the issue. In contrast, what the numerical techniques of random walk/MC implement is a compact support.

Go over the list again, and make sure you are comfortable with the characterizations. You should be. Except for my assertion that the traditional analysis of Brownian movement is confused.

To explain the confusion, we have to go to Ref. [1] again. In Ref. [1], on p. 16, the author states that:

"… A simple argument shows that if h^2/\tau \rightarrow 0 or +\infty, x may approach +\infty in finite time, which is physically untenable."

However, in the same Ref [1], on p. 2 in fact, the author has already stated that:

"It is easily verified that u(t,x) = \dfrac{1}{(2 \pi t)^{d/2}} \exp(-\dfrac{|x|^2}{2 t}) satisfies [the above-mentioned diffusion equation]."

Here, the author does not provide commentary on the nature of the solution, as far as the issue of IAD is concerned. For a commentary on the nature of the solution, we here make reference to [2], which, on p. 46, simply declares (without a prior or later discussion of the logical antecedents or context, let alone a proof for the declaration in question) that the function \dfrac{1}{(4\pi t)^{n/2}} e^{- \dfrac{|x|^2}{4 t}} (where n is the dimensionality of space) is the fundamental solution to the diffusion equation; and then, on p. 56, goes on to invoke the strong maximum principle to assert infinite speed of propagation—which is contradictory to the above-quoted passage in Ref [1], of course, but notice that the solution being quoted is the same.
BTW, the strong maximum principle suspiciously looks as if its native place is harmonic analysis (which is just another [mathematicians'] name for the Fourier theory). And, this turns out to be true. [^]

So, back to square one. Nice circularity: You first begin with a spectral decomposition that posits domain-wide support for each eigenfunction; you then multiply each eigenfunction by its time-decay term and add the products together so as to get the time evolution predicted by the separation of variables in the diffusion process; and then, somewhere down the line, you allow yourself to be wonder-struck; you declare: wow! There is action at a distance in the diffusion equation, after all!

Ok, that's not a confusion, you might say. It's just a feature of the Fourier theory. But where is the confusion concerning the Brownian movement which you promised us, you might want to ask at this point.

The Traditional Analysis of the Brownian Movement as Confused w.r.t. IAD

Well, the confusion concerning the Brownian movement is this: Refer to Einstein's 1905 paper. In section 4 ("On the irregular movement…") he says this much:

"Suppose there are altogether n particles suspended in a liquid. In an interval of time \tau the x-co-ordinates of the single particles will increase by \Delta, where \Delta has a different value (positive or negative) for each particle. For the value of \Delta a certain probability-law will hold; the number dn of the particles which experience in the time interval \tau a displacement which lies between \Delta and \Delta + d\Delta, will be expressed by an equation of the form

dn = n \phi(\Delta) d\Delta,

where

\int_{-\infty}^{+\infty} \phi(\Delta) d\Delta = 1

and \phi only differs from zero for very small values of \Delta and fulfils the condition \phi(\Delta) = \phi(-\Delta).

We will investigate now how the coefficient of diffusion depends on \phi, confining ourselves again to the case when the number \nu of the particles per unit volume is dependent only on x and t.

Putting for the particles per unit volume \nu = f(x, t), we will calculate the distribution of the particles at a time t + \tau from the distribution at the time t. From the definition of the function \phi(\Delta), there is easily obtained the number of the particles which are located at the time t + \tau between two planes perpendicular to the x-axis, with abscissae x and x + dx. We get

f(x, t + \tau) dx = dx \cdot \int_{\Delta = -\infty}^{\Delta = +\infty} f(x + \Delta) \phi(\Delta) d\Delta

[… we get]

\dfrac{\partial f}{\partial t} = D \dfrac{\partial^2 f}{\partial x^2} (I)

This is the well known differential equation for diffusion…" [Bold emphasis mine.]

In the same paper, Einstein then goes on to say the following:

"Another important consideration can be related to this method of development. We have assumed that the single particles are all referred to the same co-ordinate system. But this is unnecessary, since the movements of the single particles are mutually independent. We will now refer the motion of each particle to a co-ordinate system whose origin coincides at the time t = 0 with the position of the centre of gravity of the particles in question; with this difference, that f(x, t)dx now gives the number of the particles whose x co-ordinate has increased between the time t = 0 and the time t = t, by a quantity which lies between x and x + dx. In this case also the function f must satisfy, in its changes, the equation (I).
Further, we must evidently have for x \gtrless 0 and t = 0,

f(x,t) = 0 \quad \text{and} \quad \int_{-\infty}^{+\infty} f(x,t) dx = n.

The problem, which accords with the problem of the diffusion outwards from a point (ignoring possibilities of exchange between the diffusing particles), is now mathematically completely defined [his Ref 9]; the solution is:

f(x,t) = \dfrac{n}{\sqrt{4 \pi D}} \dfrac{e^{-\frac{x^2}{4Dt}}}{\sqrt{t}}

The probable distribution of the resulting displacements in a given time t is therefore the same as that of fortuitous error, which was to be expected.” [Bold emphasis mine.]

Contrast the bold portions in the above two passages from Einstein’s paper. Both passages come from the same section of the paper! The first passage assumes a probability distribution function (PDF) that has compact support, and proceeds, correctly, to derive the diffusion equation. The second passage reiterates that the PDF must obey the same diffusion equation, but proceeds to quote a “known” solution that has x spread all over an infinite domain, thereby simply repeating the error. … To come so close to the truth, and then to lose it all!

Well, you can say: “Wait a minute! He changed the meaning of f somewhere along the way, didn’t he?” You are right. He did. In the first passage, f referred to the PDF of particle density at various locations x; in the second passage, it refers to the PDF of particles undergoing various amounts of displacement from their current positions. The difference hardly matters. In either case, if you do not qualify the x variable in any way, and in fact quote the earlier, infinite-domain result for the diffusion, you implicitly adopt the position that the PDF extends to \infty. You thereby end up getting IAD (instantaneous action at a distance) back into the game.

This back-and-forth jumping of positions concerning compactness of support (or IAD) is exactly what Ref [1] also engages in, as we saw above. The difference is that, once in the stochastic context, Ref [1] is at least explicit in identifying the infinite speed of propagation and denying it physical tenability. Even though, by admitting the classical solution, it must make an inadvertent jump back to the IAD game!

In contrast, the issue is very clear to see in the case of the numerical methods—even if no one discusses IAD in their contexts! The most spectacular failure of the successive authors, IMO, is their failure to distinguish between the local finite differences and the usual FDM. If you grasp this part, everything else becomes much easier to follow. After all, you get the random walk simply by randomizing the same local-propagational process which is finitely discretized in the local finite differences technique. The difference between RW/MC on the one hand and FDM/FEM/FVM on the other is not just the existence or otherwise of randomness; it also is the compactness of the solution support. … I wish I had the time (or at least the inclination) to implement both these techniques properly and illustrate the time evolution via some nice graphics. For the time being, the matter is mostly left to the reader’s imagination and/or implementation; a minimal sketch in code follows below.
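To make the contrast concrete, here is a minimal Python sketch (all grid and step parameters are my own choices, not from any of the references) of the two compact-support techniques just mentioned: the local finite-difference/CA update, and its randomized counterpart, the random walk. In both, the support of the solution grows by exactly one cell per time step, i.e. the disturbance propagates no faster than h/\tau; contrast this with the closed-form kernel quoted earlier, which is nonzero everywhere for any t > 0.

```python
# A minimal sketch (my own parameter choices) of the two compact-support
# techniques contrasted in the post: the local finite-difference / cellular-
# automaton update, and the random walk. Both start from a point mass at the
# middle cell; the support grows by exactly one cell per time step, i.e. the
# disturbance propagates at the finite speed h/tau. No IAD here.
import numpy as np

nx, nsteps = 201, 50
alpha = 0.25                      # D*tau/h^2; must be <= 0.5 for stability

# 1. Local finite differences, run as a CA: each cell exchanges material
#    only with its two immediate neighbours.
u = np.zeros(nx); u[nx // 2] = 1.0
for _ in range(nsteps):
    u = u + alpha * (np.roll(u, 1) - 2*u + np.roll(u, -1))

# 2. Random walk / MC: the same local propagation, randomized.
rng = np.random.default_rng(0)
walkers = np.full(100_000, nx // 2)
for _ in range(nsteps):
    walkers += rng.choice([-1, 0, 1], size=walkers.size,
                          p=[alpha, 1 - 2*alpha, alpha])

print("FD support:", np.flatnonzero(u > 0).min(), np.flatnonzero(u > 0).max())
print("RW support:", walkers.min(), walkers.max())
# Both supports lie inside [nx//2 - nsteps, nx//2 + nsteps]: compact support.
```

An implicit or spectral solve of the same problem couples all the nodes simultaneously and, after a single time step, generically returns (tiny) nonzero values over the entire domain, which is the extended-support side of the divide.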
Here, let me touch on one last point.

Why This Kolaveri Confusion, Di?

Mathematicians are not idiots. [LOL!] If so, what could possibly be the reason why a matter this “simple” has not been “caught” or “got” or highlighted by any single mathematician so far—or a mathematical physicist, for that matter?

Why do people adopt one mind-set, capable of denying IAD, when in the stochastic realm, and immediately later on adopt another mind-set, one that explicitly admits IAD? Why? Any clue?

Do you have any clue regarding the above question? Can you figure out the reason why? Give it your honest try. As to me, I think I know the answer—at least, I have a clue which looks pretty decent to me. … And, as indicated above, the answer does not lie in the change of mind-set when people approach the problems they regard as “deterministic” vs. the problems they regard as “probabilistic” or “stochastic.” It’s not that…. It’s something different.

Do give it a try, but I also think that it will probably be hard for you to get to the same answer as mine. (Even though, I also think that you will accept my answer as a valid one, once you get to know it.) And the reason why it will be hard for you—or at least for most people—is that most people don’t think that physics precedes mathematics, but I do. If only you can change that hierarchy, the path to the answer will become much, much easier. That precisely is the reason why I included the very first section in this post. It doesn’t just sit there without any purpose. It’s there to help give you a context. Mathematics requires physics for its context.

Anyway, I don’t want to overstretch this point. It’s not very important. What is important is knowing for a fact that two classes of theories, speaking about the same mathematical equation, one that has been studied for a couple of centuries, have completely different things to say when it comes to an issue like IAD. Important, from the quantum mechanics viewpoint. After all, check out p. 4 of Ref [2]. It lists the Schrödinger equation right after the diffusion equation. And while at that page, notice also the similarities and differences between the two equations, stripped down (i.e. suitably scaled and specialized) to their bare essences. Resolving the riddles of quantum entanglement may be as close as the Schrödinger equation is to the heat equation, which is to say, as close as resolving the confusions concerning IAD in the context of the diffusion equation. … We must figure out why physicists and mathematicians have noted the two faces of the diffusion equation but have remained confused about them. … Think about it.

Maybe another post on this entire topic some time later, probably sooner rather than later, giving you my answer to the above question. In the meanwhile, remember to let me know if you can give any additional information/answer to my Maths StackExchange question on this topic [^]. BTW, thanks are due to “Pavel M” at that forum for pointing out Evans’ book to me. I didn’t know about it, and it seems a good reference to quote.

[1] Varadhan, S. R. S. (1989) “Lectures on Diffusion Problems and Partial Differential Equations,” notes taken by Pl. Muthuramalingam and Tara R. Nanda, TIFR, Springer-Verlag.
[2] Evans, Lawrence C. (2010) “Partial Differential Equations, 2/e,” Graduate Studies in Mathematics, v. 19, American Mathematical Society.
[3] Einstein, A. (1905) “On the movement of small particles suspended in stationary liquids required by the molecular-kinetic theory of heat,” Annalen der Physik, v. 17, pp. 549–560. [English translation (1956) by A. D. Cowper, in “Investigations on the Theory of Brownian Movement,” Dover.]

[This post sure could do with a couple of edits in the near future, though the essential ideas are all already there.
TBD: I have to think whether to add the “Song I Like” section or not. Sometime later. It already is almost 2700 words, with many LaTeX equations. … Maybe tomorrow or the day after…]

4 thoughts on “There’s something wrong about the diffusion equation—but what exactly is it?”

1. a_scientist, regarding your answer “Wave equation: local disturbance propagates at finite speed; diffusion: local disturbance gets smeared out”: Could one say that the maximum of the diffusion is the average/mean, and that with time the mean decreases as the standard deviation increases? (assuming that the boundary temperature/concentration is constant) It is interesting that, since there is no second derivative with respect to time in the diffusion equation, it must act on all space (including infinity) instantaneously. Could one say that when there is a second derivative with respect to time, energy is stored and released as time progresses?

3. Pingback: More on the features of the Fourier theory | Ajit Jadhav's Weblog

Comments are closed.
I realize that path integral techniques can be applied to quantitative finance, such as option value calculation. But I don't quite understand how this is done. Is it possible to explain this to me in qualitative and quantitative terms?

closed as too broad by ACuriousMind, Kyle Kanos, Brandon Enright, John Rennie, JamalS Feb 9 '15 at 7:35

Although some physicists are interested in quantitative finance, this question is off-topic here. Unfortunately I can't point you to the appropriate forum off the top of my head, but I'm sure quantitative finance forums exist if you poke around. – Mark Eichenlaub Dec 13 '10 at 9:30

@Mark, I do not think so; I think econophysics questions should be allowed here, much like mathematical physics, or physics questions with an engineering bent are allowed here. – Graviton Dec 13 '10 at 9:36

I will have to agree with @Ngu on this one. Path integrals are pretty much a modern physicist's bread and butter. So if someone asked "can you apply path integrals to understanding how to butter bread" I'd say that was a question for physicists :-) And a lot more physicists are entering this field now. One prominent example is Lee Smolin (reference) who is also one of the BIG names in quantum gravity. – user346 Dec 13 '10 at 9:51

I voted to reopen. (I couldn't figure out how to redact my vote to close.) I don't have a strong personal stake in this question - my initial vote to close was just my immediate reaction because I hadn't seen such questions here before and I didn't know about any significant ties to physics. It now seems that a majority of users think the question is appropriate, and I'm happy to go along with the majority. – Mark Eichenlaub Dec 14 '10 at 4:51

@marek I don't know about string theory but the complete works of shakespeare should be mandatory reading for all experimentalists :-) Theorists are just born with it ! – user346 Dec 14 '10 at 20:23

up vote 24 down vote accepted

The fundamental equation which serves as the basis for the path-integral formulation of finance and of many physical problems is the Chapman-Kolmogorov equation:
$$p(X_f|X_i)=\int p(X_f|X_k)\,p(X_k|X_i)\, dX_k$$
This is analogous to the following equation for amplitudes in quantum mechanics:
$$\langle X_f|X_i \rangle=\int \langle X_f|X_k\rangle\langle X_k|X_i\rangle\, dX_k$$
That's right, it's the same form, but the interpretation of the basic entities changes. In the former, they are probability densities and thus real and positive; in the latter, they are probability amplitudes and thus complex.

The class of physical problems that can be tackled with the first type of equation are called Markov processes; their characteristic is that the state of the system depends only on its previous state. Despite its seeming limitedness, this comprises many phenomena, since any process with a long but finite memory can be mapped onto a Markov process provided the state space is enlarged appropriately. On the other hand, the second equation is pretty natural and general in quantum mechanics. It is basically stating that the unity operator can always be decomposed into a, possibly overcomplete, sum of pure states
$$\mathbb{I}=\int |X_k\rangle\langle X_k|\, dX_k \; .$$

Now, constructing a path integral is done by slicing up the path from $X_i$ to $X_f$ into ever smaller components.
Let's suppose that the endpoints are fixed; then we might assume that to go from one endpoint to the other, the system has to go through paths $(X_i,X_1(t_1),X_2(t_2),\ldots,X_n(t_n),X_f)$. This leads to the following integral
$$p(X_f|X_i)=\int\cdots\int \prod_{k=0}^n p(X_{k+1}(t_{k+1})|X_k(t_k)) \prod_{k=1}^n dX_k(t_k)$$
where I put $X_0(t_0)=X_i$ and $X_{n+1}(t_{n+1})=X_f$. The tricky part is now to see if the limit can be defined meaningfully. This can be very problematic, especially in the quantum case. Ironically, the cases that are used for finance and statistical mechanics are often much better behaved. This is again related to one integral being over complex numbers and the other over real numbers, but it's not the only reason.

Up till now, I have not been specific about the kind of system I want to study; this will play an important role as well. So, let's take an option, which is a financial security whose price depends on the price of the underlying stock and on time. We can write $O(X,t)$ for the price of the option, and we'll assume the underlying stock follows a geometric Brownian motion:
$$\frac{dX}{X}=\mu\, dt + \sigma\, dW$$
where $W$ represents a Wiener process with increments $dW$ having mean zero and variance $dt$. Also assume that the pay-off of the option at the expiration time $T$ is
$$O(X_T,T)=F(X_T)$$
with $F$ a given function of the terminal stock price. Then, Fischer Black and Myron Scholes have shown that the option, under the 'no arbitrage' assumption, satisfies the following PDE
$$\frac{\partial O}{\partial t} + \frac{1}{2}\sigma^2X^2\frac{\partial^2 O}{\partial X^2} + r X \frac{\partial O}{\partial X} - rO = 0$$
in which $r$ is the risk-free interest rate. If instead of the geometric Brownian motion variable $X$ I reformulate this in terms of $x=\ln X$, which is an arithmetic Brownian motion variable, I can rewrite the equation as:
$$\frac{\partial O}{\partial t} + \frac{1}{2}\sigma^2\frac{\partial^2 O}{\partial x^2} + (r-\frac{\sigma^2}{2}) \frac{\partial O}{\partial x} - rO = 0$$
This is nothing else but a special case of the PDEs that can be solved by using the Feynman-Kac formula, a family which also includes the Fokker-Planck equation and the Smoluchowski equation, both related to the description of diffusion processes in physics. In the diffusion problem, O is to be interpreted as a distribution of velocities of the particle (Fokker-Planck) or of positions of the particle (Smoluchowski). That's how we relate to what I introduced above. Also note that the Schrödinger equation in quantum mechanics is very similar in form, except that you get complex coefficients.

The Feynman-Kac formula tells us that the solution to the PDE is:
$$O(X,t) = e^{-r(T-t)}\mathbb{E}\left[ F(X_T)|X(t)=X \right]$$
It is this expectation value that will now be represented as a path integral:
$$O(X,t) = e^{-r(T-t)}\int_{-\infty}^{+\infty}\left(\int_{x(t)=x}^{x(T)=x_T} F(e^{x_T})\, e^{A_{BS}(x(t'))}\, \mathcal{D}x(t')\right) dx_T$$
where
$$A_{BS}(x(t'))=-\int_t^{T} \frac{1}{2\sigma^2}\left(\frac{dx(t')}{dt'}-\mu\right)^2 dt'$$
is the action functional.
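Before going on, it may help to see the Feynman-Kac formula above in action numerically. The following is a minimal Python sketch (all parameter values are mine): it estimates the discounted expectation by direct Monte Carlo sampling of the terminal log-price, which is the crudest possible "path integral" (a single time slice, which is exact for GBM), and compares the result with the closed-form Black-Scholes call price.

```python
# Minimal numerical illustration (my own parameters) of the Feynman-Kac
# representation quoted above: O(X, t) = exp(-r(T-t)) * E[F(X_T) | X_t = X].
# The expectation is taken under the risk-neutral measure, where the log-price
# drifts at r - sigma^2/2; F is a European call payoff here.
import numpy as np
from math import log, sqrt, exp, erf

X0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0

# Monte Carlo estimate: sample terminal log-prices directly.
rng = np.random.default_rng(1)
Z = rng.standard_normal(1_000_000)
XT = X0 * np.exp((r - 0.5*sigma**2)*T + sigma*sqrt(T)*Z)
mc_price = exp(-r*T) * np.maximum(XT - K, 0.0).mean()

# Closed-form Black-Scholes call price for comparison.
def ncdf(x):                      # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

d1 = (log(X0/K) + (r + 0.5*sigma**2)*T) / (sigma*sqrt(T))
d2 = d1 - sigma*sqrt(T)
bs_price = X0*ncdf(d1) - K*exp(-r*T)*ncdf(d2)

print(mc_price, bs_price)
```

With $10^6$ samples the two typically agree to within a couple of cents.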
The reason this path integral can be built is the same as explained before: here it is possible to split the conditional expectation into ever smaller intervals:
$$\begin{array}{rcl}\mathbb{E}\left[ F(e^{x_T})|x(t)=x \right] & = & \int_{-\infty}^{+\infty} F(e^{x_T})\, p(x_T|x(t)=x)\, dx_T \\ & = & \int_{-\infty}^{+\infty} F(e^{x_T}) \int_{\tilde{x}(t)=x}^{\tilde{x}(T)=x_T} p(x_T|\tilde{x}(\tilde{t}))\, p(\tilde{x}(\tilde{t})|x(t)=x)\, d\tilde{x}(\tilde{t})\, dx_T \end{array}$$
Each of the conditional probabilities satisfies the PDE for the arithmetic Brownian motion, as noticed above. I'll stop here for now, but I refer to the following article for further details.

This hardly talks about finance at all. – Noldorin Dec 13 '10 at 20:52

It doesn't really talk about physics, either. – Sklivvz Dec 13 '10 at 22:13

Excellent answer @raskolnikov! I see the peanut gallery is pretty crowded today. – user346 Dec 13 '10 at 23:29

Seems like the villagers have taken over. I vote to reopen. – user346 Dec 14 '10 at 0:47

I voted to reopen. – Robert Smith Dec 14 '10 at 2:18
Saturday, January 31, 2009

Davos: Klaus met Gore for 2 hours

In Davos, Czech president Václav Klaus met Al Gore for two hours. (National Review, AFP)

It was a friendlier encounter than the differences would suggest. Klaus summarized it by saying that there is no global warming, that he (Klaus) is the most important denier in the world (alarmists may call me Dr Victor Frankenstein for the win) :-), and that Al Gore is a key figure of a movement trying to suppress freedom. Environmentalists don't listen to the other side while Klaus does. Also, Klaus is more afraid of the regulation that will be justified by the current crisis than of the crisis itself.

President Klaus also met with Shimon Peres (Klaus' favorite Israeli politician) today, discussing the Middle East issues.

The Gore effect is working in Davos, too. The current temperature, -12 °C, matches the recent record cold reading from January 31st, 2003.

A windstorm

By the way, there is one more entertaining story about Klaus. You know that it's politically incorrect to exclusively use female names for hurricanes and windstorms. It's politically incorrect despite the fact that a woman is like a hurricane: when it arrives, it is a pleasant and warm humid wind. When it leaves, it takes cars and houses with it. One week ago, the European meteorologists had a great idea how to name the windstorm that was underway in France and Spain: Klaus. It killed about 20 people and will cost the insurance industry half a billion euro or so.

Secondary forest growth beats human consumption 50:1

Tome has pointed out a remarkably balanced story in The New York Times, New Jungles Prompt a Debate on Rain Forests. Secondary forests, i.e. new jungles, are growing in previously agricultural (or logging or natural disaster) areas as much as 50 times faster than people are able and allowed to cut the primeval rain forests. The area of secondary forests is doubling every 18 years and people are quoted in the article as saying that there are many more forests than they could see 30 years ago.

In the good old times, rain forests were one of the main symbols of environmentalism. They're so pretty and diverse. (You know, I am an old environmentalist who has participated - together with Greenpeace guys - in weekly voluntary events to help the trees in the Bohemian Forest and elsewhere!) That old environmental problem was arguably captivating but it has never gained the political power of the contemporary greenhouse religion, especially because of its local (and distant) character. People may be just revealing that even the old problem was based on a deep misunderstanding of the internal mechanisms of Nature and Her inherent strength. I guess that the higher concentration of CO2, the gas we call life, is contributing to the fast expansion of the new forests, too.

Rudolf Mössbauer: 80th birthday

Rudolf Mössbauer was born in Munich on January 31st, 1929. Congratulations! He has been an eager teacher who thought it was important to explain anything and everything to everyone else, including your cat. Now he is a Prof Emeritus at Caltech. He has taught physics of neutrinos, neutrons, the electroweak theory, and other things. Of course, he is most famous for his 1957 discovery of the Mössbauer effect, or "Recoilless Nuclear Resonance Absorption" if you happen to be himself and you still want to look excessively modest.
:-) See his 1961 lecture about it and the paper in German. He received the 1961 physics Nobel prize for that. He was promoted to a professor in advance so that Caltech wouldn't become a place where Nobel prize winners are treated as postdocs. :-) Well, he actually shared the award with Robert Hofstadter who studied electron scattering off atomic nuclei. Right now, the most culturally important fact about Robert Hofstadter is that Leonard Hofstadter from The Big Bang Theory (CBS) was named after him. ;-)

Friday, January 30, 2009

Emergence of thermodynamics from statistical physics

I have seen - and participated in - a nearly infinite discussion at Backreaction, started by a paper about not-quite-conservative solutions to the black hole paradox, where a couple of armchair physicists (and sometimes professional armchair physicists) were simply not able to understand some basic things about statistical physics, thermodynamics, and their relationships. I am always amazed where the self-confidence of people who are parametrically as dumb as a door knob comes from. These pompous fools never want to realize that they're just wasting other people's time by their stupidity - or maybe they do realize it? Besides Sabine, a physics kibitzer called Peter was genuinely obnoxious. Why are so many loud people who talk nonsense about physics called Peter?

As far as I understand the sociology of these things, the basic philosophical postulates of statistical physics and thermodynamics - and their key relationships - should be taught and usually are taught when you're a sophomore, an undergraduate student. These things have been known for more than a century - in classical physics - and quantum mechanics has only made certain limited corrections to this basic philosophy (related to the probabilistic nature of predictions and the quantization of various quantities). It's just completely baffling for me when someone who has misunderstood everything about these basic issues is flooding some weblogs that are trying to pretend to be close to the actual physics research.

Thursday, January 29, 2009

LHC: black holes living for seconds?

Yesterday, I had to spend hours with a debate about global warming under my article at The Invisible Dog, a famous Czech personal internet daily of Mr Ondřej Neff, a well-known science-fiction writer, called Rationally About Weather And Climate - a modified Czech version of Weather and Climate: Noise and Timescales. Yes, it seems that the skeptics have won once again. ;-)

The IPCC's proxy, Dr Ladislav Metelka, is an OK chap and he's not even terribly radical. But he has shown his remarkable ignorance in many ways. For example, a reader asked him why the IPCC seems to predict that the temperature change per CO2 concentration change speeds up as the concentration goes up, in violation of the logarithmic law. Metelka answered with some incoherent nonsense that the IPCC result includes all feedbacks and can therefore be accelerating. Of course, the real explanation was that the reader had calculated "temperature change OVER concentration" instead of "temperature change OVER concentration change": he forgot to subtract 280 ppm from the concentration, and when he did it right, it worked as expected: the influence is slowing down. (A few-line numerical check appears below.) The reader understood the error (and the correct answer) completely.
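For concreteness, here is the reader's calculation redone in a few lines of Python. The sensitivity parameter lam is an illustrative value of my own, chosen to give 3 °C per doubling; the 280 ppm baseline is the pre-industrial concentration mentioned above.

```python
# Reproducing the reader's error numerically. The no-feedback logarithmic law
# reads dT = lam * ln(C / C0) with C0 = 280 ppm; lam here is an illustrative
# sensitivity (my choice, giving 3 K per doubling of CO2).
from math import log

lam, C0 = 3.0 / log(2.0), 280.0

for C in (380.0, 480.0, 580.0):
    dT = lam * log(C / C0)
    print(C,
          round(dT / C, 5),          # WRONG ratio: dT / C, grows with C
          round(dT / (C - C0), 5))   # RIGHT ratio: dT / (C - C0), decreasing
```

The wrong ratio grows with the concentration, which is the spurious "acceleration"; the correct one decreases, as the logarithm dictates.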
I am sure that he must have learned the feeling of being sure that his knowledge is more robust than the knowledge of the self-declared best Czech mainstream climatologist.

Black holes at the LHC

But there is one more type of alarmism that began to spread in the media, the LHC alarmism. You know, the LHC will create a black hole that will destroy the Earth. A few days ago, a new wave of this stuff began to penetrate through the media. See e.g. MSNBC: Study revisits black-hole controversy; FoxNews: Scientists not so sure 'doomsday machine' won't destroy world; and others (Google News). The story is based on a new preprint by Roberto Casadio, Sergio Fabi, Benjamin Harms,

Tuesday, January 27, 2009

Balling, Michaels: Climate of Extremes

I received a book by Patrick Michaels and Robert Balling Jr, "Climate of Extremes". It is a very nice book that is crowded with graphs and information. At the beginning, Michaels announces that he will have to leave his school in June 2009 because the current conditions don't allow him to keep both his scientific integrity and his funding. You will find some embarrassing quotes by leading IPCC scientists and Al Gore. But then the real book begins. The authors classify themselves as believers in man-made contributions to global warming but disbelievers in the climate apocalypse. Rationally speaking, I agree with them.

Hořava, Lifshitz, Cotton, and UV general relativity

Let me start with some fun: Click the picture of April Motl, my very distant relative who is "getting to the heart of the matter", too. ;-) Amusingly enough, in 1998, I was using the pen name April Lumo for a while.

Petr Hořava wrote an interesting preprint: Quantum gravity at a Lifshitz point (see also: November 2007 talk in Santa Barbara). He wants to find a "smaller" theory of quantum gravity than string theory, so he looks at a hypothetical UV fixed point (a theory without a preferred scale) that could flow to Einstein's equations at long distances. Fixed points are an intellectual value that the CMT and HEP cultures share. See also NYU about Hořava-Lifshitz gravity for more comments about the paper and the sociology surrounding it... This research program has been unsuccessfully tried many times in the past. The new twist is that his proposed fixed point is non-relativistic. Normal scale-invariant relativistic theories have a scaling symmetry that affects space and time equally: the dispersion relations tell us that "E=p" and we say that the exponent is "z=1". Ordinary non-relativistic mechanics scales them differently, with "E=p^2/2m", giving "z=2". His starting point is even more non-relativistic, with "z=3". But he wants to get to "z=1" at long distances.

Weather in the year 3000

Gene Day has sent me a cute article at MSNBC. Do you want to invite your friends for a barbecue party in the year 3000, a few years after the collapse of the Thousand Year Reich? And do you need to have plans for different weather scenarios? Expect 1,000-year climate impacts, experts say. Although science can only remove the noise and predict something specific about the atmosphere for a month in advance, while the behavior in any further future seems to be an intractable problem, the average experts got used to "predictions" of the weather for the year 2100 or 2200. No one is going to check these predictions during the people's careers - which is great - and because other people want to listen to them anyway, these forecasts became widespread.
So if you want to be ahead of your climatological colleagues these days, 2100 or 2200 is not enough. So Susan Solomon is telling us that the catastrophe is going to be lasting. Even if we stop all production of CO2, she says, the Earth will be "dying" at least until the year 3000 because the "murder" we are committing against Gaia is "irreversible". One of the greatest catastrophes is that the sea level will jump by 1 meter by the year 3000 just because of the thermal expansion caused by the CO2-related greenhouse heating. What a cataclysm: it's almost a millimeter per year, a rate roughly 10 times slower than when we were coming out of the last ice age.

Moreover, it is extremely logical (for them) to talk about the year 3000 and use these speculations as a justification of an immediate "action", in the year 2009. There must exist a time machine or a wormhole between the years 2009 and 3000 if they can be linked in this way. Believe it or not, the fate of the people in the year 3000 depends on your decision on January 27th, 2009! :-)

She can't possibly be serious when she says that a 1 meter change of the sea level in 1000 years is bad. During the last 20 millennia, the sea level naturally jumped by 120 meters, which is 6 meters per millennium. If you focus on the interval from 15,000 years ago to 7,000 years ago, the rate doubles: the rise was more than 10 meters per millennium. Relative to these rates, the sea level rise nearly stopped 7,000 years ago or so. Why did it stop so quickly? No, there was no discontinuity of the laws of physics. The reason was that the glaciers had disappeared from the bulk of the Earth's surface, so there was no ice left to be melted - except for Greenland and Antarctica, which might naturally melt (even without any human intervention) in a few thousand years, too. You know, ice can sometimes naturally melt, lady! It is no coincidence that 6,000 years ago or so, the ancient civilizations started to be born and flourish. The reason is that ice is pretty bad for life while the warming was damn good for them. Snow and ice are clean and cute but that's exactly why there's almost no life in them.

Sunday, January 25, 2009

Anti-quantum zeal

A few days ago, we talked about the lethal flaws of the Bohmian mechanics. It was very far from the first time when I noticed that rational arguments don't play an important role in the debates about the interpretation of quantum mechanics, because one side always ends up saying that regardless of any arguments, canonical quantum mechanics makes no sense. They won't hear any answer until you tell them the answer they are waiting for, namely that the microscopic world behaves just like the world they know from their everyday lives.

You know, when I was 15 or so, I was also "disappointed" by the philosophical framework of quantum mechanics. It looked weird and incomplete. What I wanted to see as a theory of everything was just a more complete version of Einstein's equations of general relativity. I spent quite some time on attempts to develop alternatives to quantum mechanics. However, because I cared about the experiments, they had to be explained. So I was stealing methods and tools from quantum mechanics, one by one, seeing that it was inevitable, and eventually I had to steal all of quantum mechanics, including its probabilistic interpretation. There simply can't be an alternative.
Everyone who thinks that there is one is deluded and misunderstands some very important things about modern physics. You know, to describe the measured discrete quantum numbers of atoms (or the interference of particles), you need the wave function and the conditions of its single-valuedness. On the other hand, particles are observed at specific points. And unless you accept the probabilistic interpretation of the wave function, with no additional classical degrees of freedom or hidden variables, you will ruin not only quantum mechanics but also the other pillar of modern physics, relativity, as you can see by a detailed analysis of the EPR experiments. Unlike the previous essay, this one will attempt to be more sociological in character.

Evidence vs prejudices

All these conclusions above may be surprising but they indisputably follow from the scientific evidence. You know, whether or not an electron has a uniquely defined classical position or spin before it is measured is just another scientific question that can only be answered by scientific tools. It is not a philosophical or religious question. The situation is analogous to the question whether the Earth was orbiting the Sun or vice versa.

Saturday, January 24, 2009

Microcanonical leftists & cap-and-trade system

Do you prefer the microcanonical ensembles over the canonical ones? Then you're a leftist, a TRF theory claims. ;-)

In thermodynamics, a microcanonical ensemble chooses microstates according to the value of their energy (or another extensive quantity), which must belong to an interval. Each microstate in the ensemble has the same weight. A canonical (or grand-canonical) ensemble chooses a temperature or a chemical potential (or another intensive quantity) and allows different microstates to contribute differently, with exponentially decreasing weights. The average energy is calculated from the temperature. (A toy computation below makes the correspondence concrete.)

I have always found the canonical ensemble to be the more natural one, for many reasons. The microcanonical weights are discontinuous and depend on an arbitrary choice of an interval (and its length). The canonical weights are smooth and admit a natural calculation in terms of a periodic Euclidean time.
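Here is the toy computation: N two-level systems (a model of my own choosing), with the microcanonical temperature extracted from the entropy S(E) = ln C(N, M) and fed back into the canonical average. A minimal Python sketch:

```python
# Toy illustration (my own toy model) of the two ensembles for N independent
# two-level systems with level energies 0 and 1. Microcanonical: fix the total
# energy E = M and weight the C(N, M) microstates equally; the temperature
# follows from 1/T = dS/dE with S = ln C(N, M). Canonical: fix T and use
# Boltzmann weights. The two descriptions agree for large N.
from math import lgamma, exp

N = 1000

def entropy(M):                       # S(E = M) = ln binomial(N, M)
    return lgamma(N + 1) - lgamma(M + 1) - lgamma(N - M + 1)

for M in (100, 250, 400):
    # microcanonical inverse temperature via a centered difference of S(E)
    beta = (entropy(M + 1) - entropy(M - 1)) / 2.0
    # canonical mean energy at that beta: <E> = N / (exp(beta) + 1)
    E_canonical = N / (exp(beta) + 1.0)
    print(M, round(beta, 4), round(E_canonical, 2))   # E_canonical ~ M
```

The canonical mean energy lands back on the microcanonical E = M, which is the usual equivalence of ensembles for large N; the difference between the two prescriptions is in how the constraint is imposed, not in the large-N physics.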
Now I believe that this preference of mine is a by-product of my being a rightwinger. ;-) Conservative people like to define fixed laws in the society that apply to everyone, and it is up to everyone what he or she does with them. The outcomes are not known in advance and they depend on the detailed dynamics - the Hamiltonian, if you wish. On the other hand, leftists prefer plans (like in a planned economy). So they decide what the energy should be in advance and eliminate every microstate that doesn't agree with their plans. Those (microstates or people) who make it through the microcanonical filter must be treated in an egalitarian way while the rest is sent to a Gulag.

Cap and trade system

We recently encountered this funny analogy during a discussion of the carbon cap-and-trade system. Is that a better idea than ordinary taxes, or fees paid for every ton of CO2? In the cap-and-trade system, one first defines the "emission goals" for everyone (the cap part) and companies are then allowed to buy or sell the indulgences or "credits" (the trade part). This costs us about half a trillion dollars a year right now. Is that a market-economy solution?

Well, if the "caps" were natural, the "trade" part would surely be more market-friendly than strict pre-determined plans required from everyone. Such a trade could increase the efficiency of the system. On the other hand, there's still the microcanonical "cap" part here, which is completely artificial and dirigistic in character. There would be no indulgence market if there were no caps. The strictness of the regulator will always be the main force that determines the price of the indulgences. That's why we have seen the price of carbon credits oscillate wildly, by many orders of magnitude.

Price oscillations surely exist in a genuine market economy, too. However, these particular price oscillations - in the carbon markets - were clearly driven by the government policies. And I find them almost completely unnatural and counterproductive. If the carbon dioxide emissions are viewed as "finite damages", someone should have at least a vague idea what the damages caused by one ton of CO2 are. I believe that they're zero, if not negative, but those people who believe that CO2 hurts should have a positive number in mind and an argument that justifies it. This number should simply be added as an extra tax. You can still emit whatever you want but it will cost you some money.

The difference between the cap-and-trade system and a tax is the same as the difference between the microcanonical and the canonical ensemble in physics. The main difference is that the cap-and-trade system admits a variable price of one ton, but the variations are primarily determined by the regulators anyway. To put it differently, it is not true that the cap-and-trade system is more market-friendly than the extra taxes. That is because the whole nonzero price in the whole market is effectively dictated by the government - in the same way as if the government determined a new tax rate. In a genuine market, the price to emit 1 ton of CO2 would clearly be zero. The indulgence price swings look like genuine price swings in free markets but they are not: they just reflect the hawkish or dovish mood of the regulators and the precise details of their choice of the caps (which moreover increases the risk of insider trading and corruption).

Moreover, if it turned out that the carbon indulgences must be extremely expensive to achieve a sizable reduction of CO2 emissions (which would be likely if we saw no new technological breakthroughs), the damages to the economy could become much higher than even what the alarmists believe to be the damages caused by the CO2 - which would simply be wrong. So if any policy of this form ever has to be adopted, a new tax is the cleanest solution.

The best formula for the CO2 tax rate was promoted by Ross McKitrick: the T3 tax. The tax rate would be determined by the measured warming of the tropical troposphere, where the greenhouse effect should leave the cleanest and strongest fingerprints. Needless to say, during the recent years the T3 tax would have been negative because we have seen cooling, especially in this particular part of the atmosphere, so I guess that according to this policy, CO2 emitters should have been paid extra money!

The T3 tax should satisfy everyone. Those people who don't believe that CO2 production causes significant temperature change have pretty much equal chances to expect a loss as to expect a profit, and those who believe that the warming will escalate because of CO2 production should be looking forward to a high carbon tax rate.
;-)

Hippie non-solutions to the black hole information problem

Sabine Hossenfelder and Lee Smolin wrote a down-to-earth manifesto called Conservative solutions to the black hole information problem (PDF) about the qualitative approaches to describe the survival or death of the information inside black holes. The paper uses the adjective "conservative" 18 times. That's quite a high frequency for hippie and feminist authors who clearly have no idea what the word "conservative" means - either in politics or in science. Why don't they omit these political adjectives that, as they must know, are not apt for their ideas? Would it be too much to ask? They seem to abandon the last traces of rational thinking - an attitude that I don't consider to be a "conservative" virtue.

The fact that it is the horizon below which the information is guaranteed to be lost (semiclassically), simply because causality prevents it from getting outside - and not the singularity (which is irrelevant for the information loss puzzle) - has been repeatedly explained in detail, for example in Black hole singularities in AdS/CFT (and by Moshe Rozali and others), so that I am pretty sure that a significant fraction of the lay readers start to understand this elementary point, too. Smolin and Hossenfelder are clearly not among them.

The authors reasonably sketch four or five possible macroscopic fates of the Schwarzschild black hole, including:

1. the correct Penrose diagram of an evaporating black hole
2. possible evolutions that involve naked singularities
3. a crazy star-like degeneration of the black hole that avoids both horizon and singularity
4. a future with a baby universe born inside (A) or a massive remnant (B)

Clearly, only option (1) is an acceptable macroscopic description of the spacetime with a black hole in it, and every acceptable - or "conservative" - solution must be compatible with the general shape of spacetime sketched in (1), as we will show again momentarily. This choice doesn't quite solve all the puzzles yet, but it is inevitable.

Friday, January 23, 2009

Steven Weinberg on condensed matter physics culture

We have written a lot of stuff on the emergent phenomena and the various cultures in physics, including the difference between the relativistic and particle-physics cultures. But when I looked at Clifford Johnson's blog an hour ago or so, I saw his comments about an interesting article by Steven Weinberg in the CERN Courier: From BCS to the LHC.

Clifford Johnson seems to disagree with Weinberg, but I think that his reasons to disagree are based on a misunderstanding. Well, it shouldn't be shocking that my general philosophical opinions about physics are probably indistinguishable from Weinberg's opinions. Let me try to defend his viewpoint.

There are many ways in which high-energy and condensed-matter theorists use similar methods and tools that are often helpful in the other discipline. The two cultures overlap and flows of ideas are sometimes helpful. But as Weinberg correctly says, there is a huge gap between the goals, aims, values, motivation, and sources of satisfaction of the two cultures. I always knew this to be the case, but a few years ago, many friendly discussions with those roughly five condensed-matter theorists in the Society of Fellows - like Yaroslav Tserkovnyak (now UCLA: Privet!) - convinced me that the differences are much deeper than I had previously thought.
U.S.: global warming is the least concern

As Benjamin (and Marc Morano) has pointed out, global warming is the smallest concern for U.S. citizens among the 20 topics they were offered. It is clear that most people usually return to some common sense after some time, and because of the weak economy (the #1 topic) and a cool winter, it is clear that possible & imaginary threats and expensive proposals to avoid them simply can't be important for the people, regardless of the actual merit of this fearful science (which happens to be non-existent).

Cosmic-climate link supported by muons that see the stratosphere

Also, we follow Anthony Watts, the web's best science blogger of 2008 according to a large poll, and bring you a weekly dose of the peer-reviewed denialist literature. In a press release, a new paper in Geophysical Research Letters is promoted: S. Osprey and hundreds of authors: Sudden stratospheric warmings seen in MINOS deep underground muon data.

This scientific work actually comes from high-energy physics. Deep underground, in an iron mine in Minnesota (the same state where the Minnesotans for global warming live) that is used by Fermilab's MINOS collaboration, one can use a detector to measure the intensity of cosmic rays (well, the flux of muons, the electrons' 206.8 times heavier siblings), and these measurements display an unexpectedly strong correlation with the weather (temperature) in the upper part of the atmosphere called the stratosphere. The link was especially strong and surprising during sudden, multi-day-long stratospheric warming episodes in the Northern Hemispheric winter. In other words, underground muons can now be used to reconstruct the stratospheric temperature! The correlation between the cosmic rays and the climate is pretty much proven by now.

The direction of the causation

Still, don't judge too quickly: you must be careful before you declare this to be a proof of the Svensmark-Shaviv cosmoclimatological theories, because the MINOS correlation is at least partially (and possibly mostly) caused by the influence of the temperature on the production of muons from mesons - the opposite direction of the causal influence than the one climatologists would care about. To be sure, the causal relationship between the underground muon flux and the stratospheric temperatures can go both ways - both directions can contribute to the correlation.

The cosmic mesons may speed up the creation of low-lying clouds which usually cool down the surface, but because they reflect more of the solar radiation back to space, they give more opportunity to the stratosphere to heat up: more cosmic rays mean a warmer stratosphere. The opposite relationship exists, too. A warmer stratosphere is "expanded" and the fraction (and the typical position) of the mesons destroyed by the air is influenced, too. That's why the fraction of mesons that live long enough to decay to muons is also affected. But let me admit that the sign of the relationship in this paragraph isn't quite clear to me at this point.

At any rate, most forcings predict opposite changes of the trend for the stratosphere and for the troposphere. For example, the greenhouse effect cools down the stratosphere much like it heats up the troposphere.

Sociology & other MINOS stuff

If you care about the sociology, the MINOS authors are almost as numerous as the IPCC and their average IQ exceeds the IPCC's IQ roughly by 7 points. ;-) The list of authors includes my Harvard ex-colleague, Prof Gary Feldman, who is clearly even higher on this scale.
:-) This is the second article on this blog that is largely dedicated to MINOS. The first one was not related to the climate: it was about the neutrino oscillations: Bush lost a few neutrinos in Minnesota.

There's no one as Irish as Barack O'Bama. Via Gene Day. ;-)

Bohmists & segregation of primitive and contextual observables

A student named Maaneli decided to defend his favorite theory, the Bohmian model of quantum phenomena. Update: See also Anti-quantum zeal for a sociological discussion of these issues.

This picture, originally pioneered by Louis de Broglie in the late 1920s under the name "pilot wave theory", was promoted and extended by David Bohm in the 1950s. Because Bohm was a holy Marxist while de Broglie was just a sinful aristocrat, the term "Bohmian mechanics" for de Broglie's theory has been used for decades and I will mostly follow this convention, too. ;-) At any rate, there is no real reason to fight for priority because the theory is worthless nonsense. The framework tries to describe the quantum phenomena in a deterministic way.

What is the pilot wave theory?

In this approach, the wave function is an actual wave, a "pilot wave", analogous to the electromagnetic waves. Besides these classical degrees of freedom, there are additional classical degrees of freedom, the positions and velocities of the particles. These positions are influenced by the "pilot wave" in such a way that the pilot wave drives the particles away from the interference minima. More precisely, if the probability distribution "rho(x)" for the (effectively unknown to us, but known to Nature in principle) particle's position "X(t)" at "t=0" agrees with "|psi(x,t)|^2", it will agree with it at later times "t", too. Such a law for "X(t)" can be written down - as a first-order equation - while the classical "psi(t)" obeys the classically interpreted Schrödinger equation. (A toy integration of this guidance law is sketched below.)
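For what it's worth, the equivariance statement above can be checked in a few lines for the one case where everything is known in closed form, namely a free Gaussian packet. A minimal Python sketch, with hbar = m = 1 and all parameter values my own choices:

```python
# A sketch of the equivariance property stated above, for the simplest
# solvable case: a free Gaussian packet with hbar = m = 1 and initial width
# sigma0. The de Broglie-Bohm guidance law dX/dt = Im(psi_x / psi) reduces,
# for this packet, to the closed form
#   v(x, t) = t * x / (4 * sigma0**2 * s2(t)),
# where s2(t) = sigma0**2 + (t / (2*sigma0))**2 is the |psi|^2 variance.
import numpy as np

sigma0, dt, nsteps = 1.0, 0.001, 2000

def s2(t):                        # variance of |psi|^2 at time t
    return sigma0**2 + (t / (2.0 * sigma0))**2

def velocity(x, t):               # Im(psi_x / psi) in closed form
    return t * x / (4.0 * sigma0**2 * s2(t))

rng = np.random.default_rng(2)
X = rng.normal(0.0, sigma0, size=200_000)   # Born-rule sample at t = 0

t = 0.0
for _ in range(nsteps):                     # forward-Euler guidance
    X += velocity(X, t) * dt
    t += dt

# If rho = |psi|^2 at t = 0, the guided ensemble keeps matching |psi|^2:
print(X.std(), np.sqrt(s2(t)))              # both ~ 1.414 at t = 2
```

The guided ensemble's width tracks the |psi|^2 width, which is precisely the equivariance property quoted above; none of this, of course, addresses the relativistic problems discussed in the text.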
Thursday, January 22, 2009

Cell phone battery and the fridge

A month ago, during the 0 °F cold snap, it looked like my cell phone, whose (18 months old) battery used to last for 10 days at the very beginning, got discharged after 2 days or so. Even with a new battery (not from the same company but arguably having the same capacity), the energy disappeared quickly. What would you think was the reason?

I decided that the cold weather could have something to do with it. When it's cold outside, the energy from the Li-ion battery is not released efficiently and some circuits might think that the battery is already getting empty. However, the battery emptied itself in 2-3 days even when the cell phone was kept at home, around 20 °C. This looked hopeless.

Maybe the cell phone is not being charged completely, because of some memory effects or a wrong idea about the voltage needed to have a full battery. But if there is a mistake based on a wrong calibration, the argument above must be revertible: if you recharge your cell phone in the fridge, the circuit must think that it's not yet full, and it will try to charge it more fully than if the recharging occurs outside the fridge. I tried to charge the cell phone in the fridge and indeed, it seems that the lifetime has been extended, at least to 4 days (and counting). Do you think it's possible, reproducible, and that my explanation is correct? Do you need to be a chemical engineer who studies lithium batteries to give me a sensible answer?

;-) If you agree with my method, is the battery fixed for another year or do I have to charge it in the fridge all the time? Will it work? Can I recommend it to others to fight the aging of their batteries?

Dramatic update

Wednesday, January 21, 2009

Klebanov and Maldacena about hyperbolic cows

In the new issue of Physics Today, Juan Maldacena and Igor Klebanov write a semi-popular "feature article" about the 11th anniversary of the AdS/CFT correspondence: Solving quantum field theories via curved spacetimes (PDF).

One of the ways to describe a negatively-curved space (in this case the Euclidean AdS2, or the Poincaré disk) is in terms of hyperbolic cows. Physicists have been using spherical cows as an idealization of the real ones for centuries, but only at the end of 1997 did they finally discover another comparably important idealization, the hyperbolic cow. Well, if these comments sound less technical than expected, you should prepare for the rest of their article that could be more technical than expected.

HadCRUT3: autocorrelation and records

Eduardo Zorita was kind enough to look at my previous calculations of the autocorrelation and the frequency of clustered records that used the GISS data. Because I claim that the probability of their clustered records is above 6% while they claim it to be below 0.1%, and because both of us know that my higher result is caused by a higher autocorrelation of my random data relative to theirs, Dr Zorita told me that he thought that my autocorrelation was much higher than the observed one.

However, that's not the case. The only correct statement would be that my autocorrelation is higher than theirs. But my autocorrelation matches the observations while theirs is much lower than the observed one. One of the technical twists of our discussion has been that I switched to the HadCRUT3 monthly data. We have much more resolution here: 159 * 12 = 1908 months of global (and other) temperature anomalies. In a new Mathematica notebook (PDF preview), I did several things:

Tuesday, January 20, 2009

The Friendship Algorithm

Episode 2x13 of The Big Bang Theory: Watch at YouKu (click the right lower corner of the video rectangle for full screen). Sheldon develops a scientific procedure for making friends. ;-) Well, it's a bizarre episode but it's also fun.

Reid has pointed out that Natalie Angier of the New York Times wrote another ideological article about women in science, pretending that it is "baffling" why women's percentage in maths and physics doesn't seem to increase. She also quotes some other zealous feminists who said that "diversity is a form of excellence". Oh, really? I thought that diversity is, by its very definition, a form of mediocrity and averageness, because "diversity" is meant to reproduce the distributions of the average society. Finally, she expects Obama to promote her feminist values. Well, I might be an incurable optimist but I don't see any evidence that Obama agrees with those obnoxious frigid women more than he agrees with me.

Sitcoms and "stereotypes"

It makes sense to mention this particular feminist article because she also blames The Big Bang Theory for the "stereotype" of having geeky chaps and a blonde attractive young woman. Well, realism is a major reason why I like TBBT so much.

Monday, January 19, 2009
Weather and climate: noise and timescales

What if?

Hep-th papers on Monday

There are ten pretty interesting hep-th papers today. The last one, by Frederik Denef, Mboyo Esole, and Megha Padi, elaborates on a very clever way to look at type IIB orientifold compactifications: translate them into black holes. To do so, they compactify the Universe on a three-torus, T-dualize all of it, and end up with a black hole in a Universe where a left shoe may become a right shoe after a round trip. ;-) This allows them to count the vacua via the black hole entropy - they're able to say e.g. that a sector of compactifications contains 10^{777} vacua - and to study some of their detailed properties, including a new orientifold variation of the OSV identity (which doesn't involve any squares of the partition sums). Very interesting.

All classical N=8 SUGRA amplitudes

The first paper, by J. Drummond, Mark Spradlin, Nastya Volovich, and Congkao Wen, explains a recursive algorithm to calculate all tree-level amplitudes in the N=8 supergravity: quite a powerful technical result (although the experts in this sub-field usually care about loop amplitudes more than they do about complicated tree-level ones). They build their new comprehensive algorithm on a similar recent algorithm for the N=4 Yang-Mills theory: the only novelty is that some invariants have to be squared and new dressing factors have to be inserted.

Universal, inflating N=1 SUGRA

There are actually three papers today that deal with the N=8 SUGRA. The second one I mention, by James Gates and Sergei Ketov, argues that one chiral superfield coupled to the minimal flat-space supergravity is equivalent to a higher-derivative supergravity built from the chiral curvature superfield. That allows them to view the early inflation and the present acceleration by the dark energy as having mathematically isomorphic roots - roots that they also try to trace to a dilaton-axion stringy origin.

3D toy model of N=8 SUGRA as a TOE

The third paper where the N=8 SUGRA is important was written by Jean Alexandre, John Ellis, and Nikolaos Mavromatos. They study the emergence of various composite fields in three-dimensional coset field theories - with holons and spinons being the elementary building blocks - and argue that these mechanisms are also important for qualitatively new physics of the N=8 SUGRA in four dimensions that may be relevant for its application as a "TOE", a statement that is surely both exaggerated and somewhat obsolete.

Predicting a wrong, negative C.C.

George Ellis and Lee Smolin have arguably submitted the first paper co-authored by the second author that I could agree with, even though all the content can be summarized in the following sentence: if there are infinitely many semi-realistic vacua with the cosmological constant clustered around minus epsilon - and no counterparts with a positive epsilon, as suggested by some recent papers, e.g. Shelton-Taylor-Wecht - then it is fair to say that the weak anthropic principle (apparently incorrectly) predicts that the cosmological constant is negative. Well, there are at least three facts that make this trivial conclusion (probably obvious to many experts, because whether or not string theory predicted or predicts a negative C.C. has been discussed for years: of course not) weaker than tea.
The stability and the physical character of the Shelton-Taylor-Wecht vacua have not really been established; it has not really been established that there are no corresponding positive-C.C. vacua; and the weak anthropic assumption is wrong, which means that even an infinite multiplicative underrepresentation of a class of vacua doesn't kill it. ;-)

AdS/QCD: quark-gluon plasma

Johanna Erdmenger, Matthias Kaminski, and Felix Rust study an N=4 gauge theory coupled to N=2 matter, looking for mesons etc., and they claim that their results about the spectrum (and widths) are relevant for the quark-gluon plasma regime of ordinary QCD.

Dimensional reduction of monopoles

Brian Dolan and Richard Szabo consider the dimensional reduction of compactifications with spheres and focus on the effect of the reduction on the magnetic flux through the sphere, especially on the magnetic monopoles. They look at the Kaluza-Klein tower and its Yukawa interactions and make some tools more controllable by switching to a fuzzy sphere instead of the ordinary one.

No fixed points in Yukawa systems

Holger Gies and Michael Scherer study the UV properties of some toy models of the Higgs mechanism with various fermions and Yukawa couplings. They use the term "asymptotic safety" even though IMHO it should be reserved for the (unlikely) existence of UV fixed points in gravity. They show that some models admit no non-trivial UV fixed points.

QFT on quantum graphs

E. Ragoucy computes properties of a "quantum field theory" on graphs - which should probably be called "quantum mechanics" only, using the standard jargon. The author calculates physical properties including scattering amplitudes and conductance.

Orientable objects?

D. Gitman and A. Shelepin discuss "orientable objects" associated with some fields on the Poincaré group. I don't understand their point and the meaning of their "objects" but, frankly speaking, I tend to doubt that dozens of pages that seem to be struggling with some elementary facts about the Poincaré group and spinors contain something really new. If I am wrong, some of their unexpected conclusions sound rather sharp - for example, that you need ten quantum numbers to describe an orientable object. ;-)

Arctic global warming reaches the CSA

A Santa Claus in Indiana. The American South is usually thought to be a hot place. However, Alabama is now colder than Alaska. Big Chill (AP); Record cold temperatures (Google News). In Central Europe, we've had temperatures around -15 °C or 0 °F for weeks and it is fair that those folks in the U.S. South taste it, too. Meanwhile, we expect a dramatically warmer end of January - around the freezing point. Time to find the swimming suits again. ;-)

OK, not quite swimming suits, but the Minnesotan global warming anthem is pretty apt now again. See also Imagine there's no global warming by John Lennon. ;-) These days, when you say Minnesota, I imagine these jolly guys whose smile always survives the freezing weather in similar songs. But a decade ago, I had completely different things in mind. In 1999, my friend, mathematician Petr Čížek, invited me to a journey across the Midwest etc., which I didn't join because of looming qualifiers at Rutgers. In Minnesota, his Russian friend, a lively girl, took over the steering wheel and tried to drive on a road under construction, which is fine because you could go in the left lane as there is no traffic there. Well, except for a truck that instantly killed the two (and the other two students from the car spent months in hospital).
For an electric car, the Cadillac Converj Concept looks pretty hot and aggressive.

Google Chrome 5.0. Click to magnify the screenshot.

Friday, January 16, 2009

Fighter pilot to lead NASA

The tension between Michael Griffin and the new Obama administration has been sufficiently strong to expect Griffin's resignation. It will take place almost exactly during Obama's inauguration. Space.com announced that Obama plans to name Jonathan Scott Gration as the new boss of NASA. His military experience is impressive - he might be classified as a hero and a natural authority - and I am sure that many conservative readers will be happy about such a choice. On the other hand, I am not so sure that the choice is good for NASA as a scientific institution.

Jan Kaplický (1937-2009)

Today, the wife of Czech architect Jan Kaplický gave birth to a child. That would be good news for Kaplický. Unfortunately, he also died on the street today. My condolences to his family. His most famous building that has been realized is the Bull Ring in Birmingham, followed by the Lord's Media Centre in London. He became famous as the author of a controversial project for a new building of the Czech National Library near the Prague Castle. It's been nicknamed the octopus, blob, or jellyfish. It's been argued that extraterrestrial aliens have landed and melted the Prague Castle and the building above was what was left. According to some YouTube dudes, the library was attacking people in some Marshmallow commercials. :-)

Smartkit: Cryptogram

Full screen... Try to solve this cryptogram. Play. Click an empty box and choose your letter - or drag the green letters to the blue ones with a mouse. Hint: The cryptogram hides a quotation by a late author who is considered wise even though he would probably find the black holes politically incorrect. The proposition both praises and criticizes mankind's scientific skills.

Obama okays coal industry

The Wall Street Journal argues that the coal industry, responsible for about 1/2 of the U.S. energy, is going to do just fine under Barack Obama. The anti-fossil-fuel environmentalists who want coal and oil to be replaced by solar energy don't realize one important thing. Coal is solar energy, too. It is solar energy that has been conveniently packaged by the Earth's geological processes, much like meat is nothing else than nice plants that have been packaged by metabolism (Rajendra Pachauri should listen to both parts of the sentence!). Coal comes from the same beautiful Sun that has been buried, much like in the famous song by Rammstein: the Sonne has been buried, bringing a new ice age. But if you watch the video clip to the very end, you will see that the Sun may be revived again. Drill, baby, drill.

Killing the information softly

The information has been complaining, until 1996 or so, in this way: Yes, Strominger plays with his fingers, stringing her life with his words. I wanted to make sure that almost everyone finds something enjoyable in this posting. ;-) Moshe Rozali has written something about the black hole information paradox. He praised Juan Maldacena's 2001 paper about the information inside eternal AdS black holes, the paper that was essential for Stephen Hawking to convince himself and admit that the information was preserved. And he discussed the preservation of the information.
Information is not lost, in principle. In the AdS/CFT context, a black hole may also be described as a generic thermal "gas" of gluons and quarks. A new particle that enters this bath will eventually distribute its energy among all the other particles.

Thursday, January 15, 2009

The Killer Robot Instability

Episode 2x12 of The Big Bang Theory: Watch at ZSHARE.NET (right-click the video for full screen). With 11.8 million viewers, which set a new record, this episode was, for the second time, #1 in the ratings.

Record-breaking years in autocorrelated series

As Rafa has pointed out, E. Zorita, T. Stocker, and H. von Storch have a paper in Geophysical Research Letters, How unusual is the recent series of warm years? (full text, PDF; see also the abstract), in which they claim that even if we consider temperature to be an autocorrelated function of time with sensible parameters, there is only a 0.1% probability that the 13 hottest years in a list of 127 years (since 1880) appear in the most recent 17 years, much like they do in reality according to the HadCRUT3/GISS stations. If we add non-autocorrelated noise, typical for local temperature data, the temperature readings become more random and a similar clustering of records becomes even less likely, because the autocorrelation that keeps the probability of clustered records from becoming insanely low is suppressed. This matches reality, too, because local weather records usually don't have that many record-breakers in the recent two decades.

What percentage of civilized planets shoot An Inconvenient Truth?

But after detailed simulations, I am confident that the main statement of their paper about the probability in the global context - 0.1% (which would strongly indicate that the recent warm years are unlikely to be due to chance) - is completely wrong. The correct figure for the global case is between 5-10% (depending on the damping of the long-term memory; I will argue that the 10% figure is realistic at the end), if you allow record cold years as well as record hot years, which you should, because both possibilities could feed alarmism. If you ask about strict record hot years only, pretending that the alarmists wouldn't exist if we were breaking record cold years :-), you should divide my probability by two.

The last alarmist planet I generated: temperature anomaly in °C in the last 127 years. About 10% of randomly generated realistic temperature data look like this and satisfy the 127/13/17 record-breaking condition - by chance. Click to zoom in.

At any rate, the probability is rather high and it is completely sensible to think that the clustering of the hottest years into the recent decades occurred by chance. In roughly one decade per century, we get the opportunity to see this "miracle" (the 13 hottest years occurring in the last 17 years).

Wednesday, January 14, 2009

Final digit and the possibility of a cheating GISS

David Stockwell has analyzed the frequency of the final digits in the temperature data by NASA's GISS, led by James Hansen, and he claims that the unequal distribution of the individual digits strongly suggests that the data have been modified by a human hand. With Mathematica 7, such hypotheses take a few minutes to be tested.
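The same test takes only a few minutes in any language. Here is a minimal sketch in Python rather than Mathematica (the `anomalies` list below is a placeholder - you would paste in the 1,548 monthly GISS readings yourself): it tallies the final digits and compares the spread to the binomial expectation.

```python
import math
from collections import Counter

# Placeholder: paste the 1,548 monthly GISS anomalies (in 0.01 °C units) here.
anomalies = [12, -5, 3, 47, -18]  # hypothetical values, for illustration only

# Tally the final (least significant) digit of each reading.
digits = [abs(a) % 10 for a in anomalies]
counts = Counter(digits)

n = len(anomalies)
expected = n / 10                    # each digit 0..9 should appear n/10 times
sigma = math.sqrt(n * 0.1 * 0.9)     # binomial std dev of a single digit's count

for d in range(10):
    dev = (counts.get(d, 0) - expected) / sigma
    print(f"digit {d}: {counts.get(d, 0):4d}  ({dev:+.1f} sigma)")

# A chi-squared statistic much above ~17 (the 95% point for 9 degrees of
# freedom) would hint that the digits are not uniformly distributed.
chi2 = sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))
print("chi^2 =", round(chi2, 1))
```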
And remarkably enough, I must confirm Stockwell's bold assertion, although - obviously - this kind of statistical evidence is never quite perfect and the surprising results may always be due to "bad luck" or the other explanations mentioned at the end of this article.

Update: Steve McIntyre disagrees with David and myself and thinks that there's nothing remarkable about the statistics. I confirm that if the absolute values are included, if their central value is carefully normalized, and if the anomalies are distributed over just a couple of multiples of 0.1 °C, there's roughly a 3% variation in the frequency of different digits, which is enough to explain the non-uniformities below. However, one simply obtains a monotonically decreasing concentration of different digits and I feel that they have a different fingerprint than the NASA data below. But this might be too fine an analysis for such a relatively small statistical ensemble.

This page shows the global temperature anomalies as collected by GISS. It indicates that the year 2008 (J-D) was the coldest year in the 21st century so far, even according to James Hansen et al., a fact you won't hear from them. But we will look at some numerology instead.

Looking at those 1,548 figures

Among the 129*12 = 1,548 monthly readings, you would expect each final digit (0..9) to appear 154.8 times or so. That's the average statistics and you don't expect that each digit will appear exactly 154.8 times. Instead, the actual frequencies will be slightly different from 154.8. How big is the usual fluctuation from the central value? Each digit's count is binomially distributed, so the standard deviation is √(1548 · 0.1 · 0.9) ≈ 11.8, i.e. about 8% of the expected count - exactly what the sketch above computes.

Cosmology of F-theory GUTs

Dmitry Podolsky has brought my attention to a semi-popular explanation of cosmology in the F-theoretical grand unified models, Cosmology of F-theory GUTs, by Jonathan Heckman, who is one of the young big shots working on this bottom-up phenomenology with Cumrun Vafa. The model is very predictive and quite a lot of these predictions seem to make sense.

Tuesday, January 13, 2009

Best European Blog: a victory

Well, a winner (click the logo)... Thank you very much for your votes in the poll that was choosing the best European blog. Out of 6,200 votes, we received almost 34% of the votes, defeating nine competitors. Additional thanks to Eduardo, who nominated TRF in this category. Our continental neighbors in the Asian category saw some adjustments of the votes which have actually changed the winner, but Europe is more honest and our lead was more substantial. So there were no changes made to our score and No Pasaran picked up the silver medal for its 21% of the votes. This is primarily a success of the TRF community and readers like you. Thank you again. I don't like awards but it is somewhat pleasant not to feel hunted all the time. :-)

Monday, January 12, 2009

Stalagmites support cosmoclimatology

In this weekly dose of the peer-reviewed skeptical literature about the climate, we look at some new evidence for cosmoclimatology. In a news story called The earth's magnetic field impacts climate: Danish study, AFP informs about a new article in the U.S. journal "Geology" by Mads Faurschou Knudsen and Peter Riisager (Denmark): Is there a link between Earth's magnetic field and low-latitude precipitation? (full text paper, PDF) The page with the abstract...
In the last 5,000 years of data, they found a strong correlation between
• the Earth's magnetic dipole moment, as extracted from lava flows and burned archeological materials, on one side,
• and the amount of precipitation in the tropics, as extracted from Oxygen-18 inside the stalactites and stalagmites in Oman and China, on the other side.
The only plausible explanation of this correlation is Svensmark's mechanism of cosmoclimatology: the oscillating geomagnetic field regulates the amount of galactic cosmic rays that reach the lower layers of the atmosphere, which subsequently influences the amount of cloudiness and precipitation (and temperature).

Stereograms and dinograms

Dmitry Podolsky chose an autostereogram, one of the types of stereograms, as a symbol of holography in quantum gravity. It is a pretty good choice, as I will argue later. He has also reminded me of some fun we had back in 1995 in Prague. Two very smart Slovak classmates of mine (we became the last freshmen who joined the college as federal, Czechoslovak students) saw an interesting exhibition somewhere in Prague with some crazy pictures that looked completely two-dimensional and chaotic, but if one looked at them properly, they became completely three-dimensional. You may also look for images of stereograms or articles explaining holovision or volumetric displays.

Reconstructing the secret know-how

They described their "stunning" perceptions so well that even though they didn't quite know how the pictures worked and I hadn't seen them, it was possible for me to reconstruct the algorithm and write a working model in Turbo Pascal, together with a pedagogical explanation, in the Pictures of Yellow Roses (a now-defunct math-phys student journal written in Czech); see the automatic translation to English. The article included (and still includes) a working program in Turbo Pascal and I will describe most of the article below (a sketch of the core algorithm in a modern language also appears below). Although I can run Turbo Pascal programs within DOSBox 0.72 on Windows Vista, I decided to refresh my programming languages a bit and translate the Pascal program into Mathematica 7. It was a seemingly straightforward exercise, but when I was converting some functions and procedures to more natural expressions in Mathematica, I confused "Min" and "Max" several times, which made the program generate rubbish.

Entropa: celebrating the European entropy

Because Ukraine is going to sign the gas treaty with Russia again, without the insulting added declaration, the Czech EU presidency doesn't seem to have enough useful work to do, so they decided to switch to some creative arts for a while. :-) The EU taxpayers are going to pay EUR 50,000 to Mr David Černý (beware of an excessively funny website!), a highly independent artist - if you want me to describe the nutcase politely - whose most famous piece so far has been the Soviet tank No 23 (the first one that liberated Prague in 1945, which has served as a memorial for decades), which he painted pink. As far as I can say, his second most famous statue is the swimming statue of Saddam Hussein called "Shark". At any rate, he is going to get the rental costs for an EU puzzle called "Entropa" which shows the diversity of Europe by decomposing it into separate pieces of artwork whose shape usually resembles the country and whose content either confirms or negates some prejudices about the corresponding nation. See the main presentation, which includes the pictures and their funny descriptions. You may also read some comments about this event.
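And here is the sketch promised in the stereogram post above - the classic naive random-dot autostereogram construction, not necessarily identical to the old Pascal program: every pixel is forced to repeat the pixel a depth-dependent separation to its left, so the eyes can lock onto a hidden 3D surface. The depth map here is a hypothetical hemisphere; numpy is assumed.

```python
import numpy as np

def autostereogram(depth, eye_sep=80, depth_scale=20, rng=None):
    """Naive random-dot stereogram: each pixel repeats the pixel roughly
    eye_sep columns to its left, with the repeat distance shortened where
    the hidden surface is closer to the viewer."""
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = depth.shape
    img = rng.integers(0, 2, size=(h, w), dtype=np.uint8) * 255
    for y in range(h):
        for x in range(eye_sep, w):
            sep = eye_sep - int(depth_scale * depth[y, x])  # closer -> smaller sep
            img[y, x] = img[y, x - sep]
    return img

# Hypothetical depth map: a hemisphere bulging out of a flat background.
yy, xx = np.mgrid[-1:1:200j, -1:1:300j]
r2 = xx**2 + yy**2
depth = np.where(r2 < 0.5, np.sqrt(0.5 - r2), 0.0)

stereogram = autostereogram(depth)
# e.g. save it with matplotlib: plt.imsave("sirds.png", stereogram, cmap="gray")
```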
Sunday, January 11, 2009

Pravda: Earth on the brink of an ice age

The article in Pravda argues that those nice 12,000 warm years are approaching their end and that the following 100,000 years will impress us with a new ice age. ;-) Well, there is surely a potential for a 5-8 °C cooling in a few thousand years (or tens of thousands of years) - because the human civilizations that we know from the history textbooks already started in an unusually warm period (see the scary left side of the last, black graph on the image below) - but should we expect the cooling soon?

Update: David Archibald has reminded me of this picture (with an imminent icy prediction) from the 2000 article in Science by Berger and Loutre. Archibald included it in his cute book, Solar Cycle 24, as Figure 3. Well, an imminent cooling is surely possible, but the article doesn't look like the most convincing piece of science to me - both because of technical reasons and missing references, as well as entertaining, otherwise unimportant mistakes (for example, they claim that the Serbian astronomer Milutin Milanković was Czech). But I would like to ask you what you know and think about the reconstruction of the climate record from the Milankovitch cycles. How good a fit can we actually obtain by combining the known astronomical cycles with well-chosen coefficients? There are indications that the purely astronomical theory fails to describe very low-frequency signals - those with approximately 100,000-year or 400,000-year periodicity - whose observed amplitude seems to be much larger than the theoretically predicted one: natural climate change at very long timescales seems to be much more intense than our theories say.

Reincarnation of an infalling observer

In this essay, I would like to talk about physics and perceptions inside black holes. The picture of reincarnation above was sketched by Prof Krishna :-). Note that the death and the birth are two faces of the same object, namely the infinity, through which they are connected. While our treatment will try to be more serious than what you have seen so far, similar spiritual considerations will actually be unexpectedly important, especially when we get closer to the singularity. Why? Well, we should start with the question:

Saturday, January 10, 2009

Russia-Ukraine gas disputes

The delivery of the Russian gas to Europe has been stopped because Russia has accused Ukraine of stealing the gas on the Ukrainian territory, a claim that Yushchenko vehemently denies. The first thing to say is that none of us can be certain whether the accusation is true or not. The dispute is probably going to end soon and the gas delivery will be restarted - because Ukraine has agreed with the Russian proposal to put independent monitors on its territory, largely thanks to the EU Council boss, Mr Mirek Topolánek: see the official info from the Czech EU presidency. But let us look at the dispute anyway. Many people tend to decide according to their prejudices. And the prejudices in the West usually say that the Ukrainians are the good guys while the Russians are the bad guys.

One of the proposals for a Gazprom building in St Petersburg. Click for more.

Well, give me a break with this stupidity. Ukraine and Russia are two parts of the same cultural territory. In fact, Ukraine is the real historical "cradle" of Russia; it is more Russian than Russia itself.
People in Central Europe who actually have some experience with both nations know that both of them are poor, essentially Russian-speaking nations from the East. Members of both nations tend to be employed in low-paid occupations and they are occasionally connected with the Russian-speaking mafias.

Friday, January 09, 2009

Vanishing entropy of extremal black holes?

Sean Carroll, Matthew Johnson, and Lisa Randall have submitted a provocative paper that tries to defend an old unpopular idea that the entropy of extremal black holes is exactly zero. Their new argument is that the nonzero entropy calculated in hundreds of careful stringy papers, agreeing with the nonzero Hawking-Bekenstein entropy, actually refers to a different spacetime than the "pure" extremal black hole, namely a spacetime that has an extra "AdS2 x S2" patch in it. Here is the main picture:

The object in the middle is the Penrose diagram of a non-extremal charged black hole. They slowly adjust the mass/charge relationship to approach the extremal limit. The extremal black hole is the object on the right. Instead of saying that the pink regions are the only ones that survive in the limit, they say that in the limit, the non-extremal black hole becomes a union of the extremal one (pink) and some additional "AdS2 x S2" space (a brown wiggly strip in the middle of the diagram), and it is the latter space (also redrawn as a straight strip on the left) that is supposed to carry the nonzero entropy.

Thursday, January 08, 2009

Eurosocialists insulted by common sense

By the way, some good news in the journalistic world: The Wall Street Journal has become the first important newspaper that praises Czechia, for A Prague Spring for Political Honesty.

The European socialists have read the refreshing if not brilliant essay by the Czech president in the Financial Times, Do not tie the markets: free them. Václav Klaus explains that all moments could be called "exceptional" but this adjective is usually used in order to manipulate people. And he argues that Europe should weaken if not repeal various environmental, social, health, and other regulations and "standards". How did the socialists react? Well, you can guess! ;-) They went ballistic: Eurosocialists angrily rejected Klaus' calls. The article above is pretty entertaining, so let me respond to individual paragraphs:

They urged the Czech Premier Mirek Topolánek to issue an immediate statement declaring that the president speaks for himself and not the government and that his views do not reflect the priorities of the EU Czech presidency.

Very nice, but before 1989, I saw a lot of virtually identical stuff. For example, in 1977, everyone was supposed to sign the Anti-Charter 77. The government should also denounce the witches, right? Why exactly do they think that Klaus' opinions do not reflect the priorities of the Czech EU presidency?

UAH MSU: month-on-month cooling

UAH MSU has released the December 2008 satellite data. The anomaly shows 0.074 °C of cooling since November, which is pretty substantial (100 °C per century, haha). Unfortunately, the newest numbers are not yet on the website I linked, so we have to rely on the private channels of Anthony Watts, which are surely reliable because they are coming from the very center of UAH MSU. ;-) Of course, the data show 2008 as the coldest year of the century that began 8 years ago.
So far, because we may see colder years in the future. However, I just wanted to draw a few UAH graphs anyway, so they don't contain December 2008 yet. First, here are the graphs that you rarely see: the temperature of the world's oceans and land. Click any graph to zoom in.

You can see that the land was warming at a roughly 50% faster rate than the ocean - 0.16 vs 0.11 °C per decade - and you may hypothesize that some effects of the civilization have influenced this observation. It would probably be incorrect to talk about the "urban heat island" effects, because the satellites are unlikely to be affected by the popular barbecue parties at the weather stations.

Here are the two hemispheres. Shift/click the picture to open a bigger one in a new window. (That's an even more important keyboard shortcut than the tabulator/enter that changes the color of a ball.) You see that the Southern Hemisphere is warming more than 3 times more slowly than the Northern Hemisphere, which is another reason to think that the observed changes are not really global in character. If you care, the Southern polar regions are cooling by 0.08 °C per decade. I am not going to post these graphs because that would be a more serious blasphemy than the pictures of Mohammed! :-)

Genes and memes, ideas and empty words

Because a large part of the Spanish online community seems to be infected with a meme of a ball that changes its color upon clicking (almost 1 visit to TRF per second comes from the page that propagates this meme - or this "nonsense of the day", if you wish to follow their terminology), let me write something about the memes.

A few weeks ago, I had an e-mail exchange about memes with a reader of TRF whom I have also met in real life - unlike many of you. Greetings, Tom. He argued that the concept of a "meme" is an amazing discovery because it allows us to understand the fascinating phenomenon of a "Mexican wave" that moves around the Earth every 24 hours and that affects a field that is defined as the density of the vibrations of five-inch sticks referred to as toothbrushes. How is it possible that these toothbrushes move in unison? It is surely a divine phenomenon proving that memes are jumping between the brains of different people. And the extraterrestrial aliens would surely be talking about "memes" all the time when their attention would focus on the Mexican wave of toothbrushes on the Earth, Tom argued. ;-)

As you may expect, I was skeptical about these big assertions about the importance of "memes" because the aliens would probably be thrilled by very different things than "memes" or "toothbrushing waves" and they could even use the toothbrushes themselves in ways that we couldn't have predicted. So let me defend my viewpoint.

Memes: a few positive words

I am personally using the word "meme", at least sometimes. What does it mean? It is a small idea, an elementary building block of an ideology, a partial method to look at a particular or general problem, or a myth, a joke, or a viral video or another computer file that people send to each other to have some fun; what is important for every meme is that it can spread just like an infection. It is very stupid to click the ball in the previous posting. But people are doing it nevertheless. And they lead other people to do the same thing. There exists a clear analogy between this behavior and the concept of the genes. Much like genes, memes are "selfish", if you allow me to use Dawkins' colorful adjective.
They have their own identity - or at least it's the point of "memetics" to imagine that they do - and they want to become more powerful and to control a larger portion of the world. So they are using and abusing the environment in order to spread. Each of them may choose a different strategy.

Wednesday, January 07, 2009

Nonsense of the day: click the ball to change its color

Tontería del día: Pincha en la bola para cambiarla de color (Spanish for the title above). Full screen here... A special bienvenido for Spanish visitors!

Tuesday, January 06, 2009

NCDC: the U.S. cooled down by 0.49 °F per decade

The National Climatic Data Center (NCDC) became the first major source of temperature data to publish its December 2008 figures. As a reader of Anthony Watts' blog nicknamed "crosspatch" revealed, you can now draw graphs that include the whole year 2008. Here is one (click to zoom in):

The graph shows the average U.S. temperatures in the most recent 10 years - between 1999 and 2008 - in Fahrenheit degrees. Here is how I created it:
• go to an NCDC page
• in the form, choose "Mean Temperature, Annual, United States, From 1999, To 2008, Base Period 1901-2000"
Keep the output type for the sake of simplicity and click "Submit". If you want my colors, take a screenshot of the right size, negate its colors, and fill the outer region with the TRF background color #113322 (17, 51, 34 decimally); see the Python sketch further below. ;-)

Record cold temperatures in 2009

Update: For the summary of the average temperatures in 2009 and its ranking, as written at the end of December 2009, click the link in this sentence.

Current U.S. temperatures in °F. See Anthony Watts' blog for more comments.

Record cold temperatures have arrived in the United Kingdom and Canada (24 consecutive days below -24 °C in a city). Cold Siberian air has also hit Central Europe, France, and Italy. London is colder than Antarctica. Literally. The cold snap is costly. The temperature in Pilsen, and in Czechia in general, keeps on oscillating around -10 °C, too. The snow around is clean and pretty. The coldest official Czech weather station, Stráž pod Ralskem, has seen -25.1 °C. Journalists are also freezing in Colorado and Wyoming, among other places, while North Dakota continues to see record snow. Poor people in chilly India solve the situation by burning books; at least 55 people have died. I hope that they have enough copies of An Inconvenient Truth, like in Belgium (I invented the joke before them!). Sorry, the picture above shows commies in warm weather, not poor people in chilly weather.
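As promised in the NCDC post above, the screenshot recoloring as a minimal Python sketch (assuming the Pillow library is available; the filenames are hypothetical):

```python
from PIL import Image, ImageOps

# Hypothetical filename: a screenshot of the NCDC graph, cropped to size.
img = Image.open("ncdc_graph.png").convert("RGB")

# Negate the colors so the white background becomes black.
img = ImageOps.invert(img)

# Fill (near-)black outer pixels with the TRF background color #113322.
trf_green = (17, 51, 34)
pixels = img.load()
w, h = img.size
for y in range(h):
    for x in range(w):
        r, g, b = pixels[x, y]
        if r < 16 and g < 16 and b < 16:   # crude "background" test
            pixels[x, y] = trf_green

img.save("ncdc_graph_trf.png")
```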
How Physicists Fool the Gullible World

Pentcho Valev, 2017-05-15 18:43:00 UTC

"Basically you can think of the division between the relativity and quantum systems as "smooth" versus "chunky" or continuously interconnected versus discretely segmented." https://www.meetup.com/Quantum-Physics-Discussion-Group/events/239534553/

Red herring - there is no such dilemma. The actual division is between Einstein's idiotic relative time (a consequence of Einstein's false constant-speed-of-light postulate) and Newton's absolute time:

"The effort to unify quantum mechanics and general relativity means reconciling totally different notions of time. In quantum mechanics, time is universal and absolute; its steady ticks dictate the evolving entanglements between particles. But in general relativity (Albert Einstein's theory of gravity), time is relative and dynamical, a dimension that's inextricably interwoven with directions X, Y and Z into a four-dimensional "space-time" fabric."

"In quantum theory, a "master clock" ticks away somewhere in the universe, measuring out all processes. But in Einstein's relativity, time is distorted by motion and gravity, so clocks don't necessarily agree on how it is passing - meaning any master clock must, somewhat implausibly, be outside the universe."

"In Einstein's general theory of relativity, time depends locally on gravity; in standard quantum theory, time is global - all clocks "tick" uniformly."

"On one hand, time in quantum mechanics is a Newtonian time, i.e., an absolute time. In fact, the two main methods of quantization, namely, the canonical quantization method due to Dirac and Feynman's path integral method, are based on classical constraints which become operators annihilating the physical states, and on the sum over all possible classical trajectories, respectively. Therefore, both quantization methods rely on the Newtonian global and absolute time. [...] The transition to (special) relativistic quantum field theories can be realized by replacing the unique absolute Newtonian time by a set of timelike parameters associated to the naturally distinguished family of relativistic inertial frames."

"In quantum mechanics, time is absolute. The parameter occurring in the Schrödinger equation has been directly inherited from Newtonian mechanics and is not turned into an operator. In quantum field theory, time by itself is no longer absolute, but the four-dimensional spacetime is; it constitutes the fixed background structure on which the dynamical fields act. GR is of a very different nature. According to the Einstein equations (2), spacetime is dynamical, interacting in a complicated manner with the energy-momentum of matter and with itself. The concepts of time (spacetime) in quantum theory and GR are thus drastically different and cannot both be fundamentally true."

Pentcho Valev

Pentcho Valev, 2017-05-16 06:09:32 UTC

"Special relativity is based on the observation that the speed of light is always the same, independently of who measures it, or how fast the source of the light is moving with respect to the observer. Einstein demonstrated that as an immediate consequence, space and time can no longer be independent, but should rather be considered a new joint entity called "spacetime.""
http://community.bowdoin.edu/news/2015/04/professor-baumgarte-describes-100-years-of-gravity/

Physicists reject the "immediate consequence", spacetime, but worship the underlying premise, Einstein's false constant-speed-of-light postulate, knowing (or not knowing) that logic forbids the combination "true postulate, wrong consequence". Also, physicists reject spacetime but worship the ripples in spacetime (gravitational waves) gloriously faked by LIGO conspirators:

"Splitting Time from Space - New Quantum Theory Topples Einstein's Spacetime. Buzz about a quantum gravity theory that sends space and time back to their Newtonian roots."

"Rethinking Einstein: The end of space-time. It was a speech that changed the way we think of space and time. The year was 1908, and the German mathematician Hermann Minkowski had been trying to make sense of Albert Einstein's hot new idea - what we now know as special relativity - describing how things shrink as they move faster and time becomes distorted. "Henceforth space by itself and time by itself are doomed to fade into mere shadows," Minkowski proclaimed, "and only a union of the two will preserve an independent reality." And so space-time - the malleable fabric whose geometry can be changed by the gravity of stars, planets and matter - was born. It is a concept that has served us well, but if physicist Petr Horava is right, it may be no more than a mirage."

"[George] Ellis is up against one of the most successful theories in physics: special relativity. It revealed that there's no such thing as objective simultaneity. [...] Rescuing an objective "now" is a daunting task."

Pentcho Valev
Tuesday, 25 September 2007

Physicalism and Consciousness

As Colin McGinn has stated, "Consciousness defies explanation in [compositional, spatial] terms. Consciousness does not seem to be made up out of smaller spatial processes.... Our faculties bias us towards understanding matter in motion, but it is precisely this kind of understanding that is inapplicable to the mind-body problem."

Nonsense. What is computer software? Can you explain it? How can you copy it without creating new matter or energy? It's information, that's why. Our thoughts are information, the product of physical processes and caused by them. Nothing inherently mysterious, though it might appear so to the human mind that is actually experiencing it.

The mind-body duality dilemma that people struggle with is analogous to an optical illusion - e.g. the hollow mask that appears solid, or the wire cube that flips orientation - as with these, it's difficult to hold both states in mind simultaneously. We can flip between states, but we can't 'see' or imagine both simultaneously. In a similar way we can (almost) imagine computer software as information, but have greater difficulty imagining this condition when applying it to our own thoughts. It becomes even more confusing, and more like the attempt to simultaneously 'see' both states of an optical illusion, when we try to imagine what's happening when we think, in the first person, about what we are thinking right now; and some explanations of consciousness and dualism confuse the issue by trying to do this.

Did Mary (see site) learn something new about pain? Yes. She physically experienced (both in terms of physical neurological responses and informational interpretation) the real pain for which she had previously had only a physical neurological model. Her model has simply been updated with real first-hand experiential data, when previously the only experiential data she had was the neurological mapping of things she had already experienced. In practice, of course, this 'Schrödinger's cat' type of thought experiment is limited. The definition of the experiment is incorrect. Pain is simply a more intense stimulus of corresponding stimuli - presumably Mary hadn't been denied the sense of touch, otherwise she would have had difficulty relating to much of the theoretical information she had read in the first place. What sort of human would have emerged from the room if that had been the case? It's a hypothetical case where the accuracy of the perceived consequences is dubious, to the extent that the conclusion does not necessarily follow. Mary can't even pick up the bowling ball if she's been deprived of the appropriate senses!

This is metaphysical mumbo-jumbo. "Compositional, spatial analysis of the intrinsic nature of an event" - does this actually mean anything? These arguments are often dressed up in phrases that some researcher has latched onto or invented to describe a concept that is difficult to understand - fair enough. But then the problem is that these phrases are used in ways that make it difficult to grasp what is being said.

How does a human feel pain? A cat? A worm? A bacterium? A cell? A complex molecule? A grain of sand? Physically, the simpler entities don't feel anything; they simply react - either extremely passively, according to relatively simple laws of physics, for a grain of sand, or in more complex physical/chemical ways for a molecule, or in increasingly more complex chemical/electrical/biological/neurological ways for higher organisms.
Being organisms with a complex nervous system that includes the brain, we have adapted ourselves to the interpretation of our environment. One of our interpretations is to feel/think/experience our environment in terms of our own experiences. The more animate and the more similar to us other entities are, the more easily we make this mapping - we anthropomorphise or personify. We do this with ourselves and our 'thoughts' to the greatest degree. Some of us even create, or imagine, or model non-existent entities using the same principle - demons, fairies, ghosts, gods, etc. Sometimes our brains get it wrong - they extrapolate (a very valuable tool used in the prediction process) - they extrapolate too much, they become gullible, seeing optical illusions, even delusions.

Do some of the lower organisms not feel pain? If they do, do they refer to themselves in the first person? Again, when is this magical dualism switched on - just in humans, apes, ...? Be careful, or else you'll be dragging up biblical nonsense again.

"As the theist René Descartes wrote...(quotes Descartes)..." Descartes: "I cannot distinguish in myself any parts" - could that be because there is nothing to distinguish? Is Descartes referring to the distinction between mind and body, or the distinction between parts of his thoughts? Is he struggling to identify his thoughts as distinct physical entities? Maybe he's struggling because they don't exist as such. When my computer is running some software I can see the results on screen, I can imagine the electrons moving at amazing speeds around the silicon-based microscopic circuitry, and I can imagine the source code I have written if it's my program that's running - but can I imagine the actual 'software' itself as a physical entity? No more than I can be self-aware and imagine my own thoughts as something distinct from my physicality.

I can certainly imagine what the dualists are describing. I can imagine some ghostly substance that might be my soul, spirit, thoughts - but that's all it is, an imagined concept. I have no reason to think it exists. When movies portray a dead soul rising out of a body - is that what we really think is happening in some invisible dimension? Of course not (or maybe you do). But there is no evidence to support that imagining, that concept. I can imagine flying pigs, with little wings - do they exist? Because I can imagine something doesn't mean it exists. I can imagine God, angels - all with typically anthropomorphised representations. If God really exists with some of the real properties he's supposed to have, such as omniscience, can I imagine that? Only in a limited way, as I imagine the mathematical concept of infinity - something bigger than anything, but which, if I add more to it, remains the same thing. Does that sound a little like the ontological argument for God? Figments of our limited imaginations!

In postulating the concept of dualism we are using a limited-capacity tool (the mind) to grasp something of itself that is merely apparent. We accept illusions, hoaxes, and some delusions for what they are - the mind not presenting a sufficiently good approximation of the external physical reality - but then, for no better reason than the mystery of not understanding something, we invent dualism, supernatural external agents, theism. Figments of our limited imaginations. Why is it so difficult to see the alternative - the physical causal relationship between neurological activity and the resulting mental models?
Don't be fooled by the apparent complexity. How can this proposed simple process take part in this argument, including those parts of the process that produce the written (typed) work above (whether you think it's good or not, it's still apparently complex)? But, just as the many, many simple little steps of evolution have produced us, so the many, many simple little processes in this organism have produced this. If I had omnisciently and omnipotently flashed out all this text instantly, in zero time, then we might be closer to the realisation of what God is. But I didn't. Every impulse to my fingers to type, every neurological action that contributes, is very, very simple - they are simply working very fast and in great numbers. The sophistication comes from the co-ordination. But co-ordinated lesser organisms that are independent to some extent also produce similarly amazing results. Bees building honeycombs, ants foraging for food - these are all sophisticated co-ordinated processes where the individual elements are all amazingly simple when compared with the result.

In maths, imagine a simple sum: 1 + 1 = 2. Now imagine some complex formula - say some series using powers and factorials - still with me? Now try some complex differential equations - still here? Now the Schrödinger equation... - have you seen it and do you understand it? By now some, if not most of us (including me) have lost track of these equations - they are more complex than I am familiar with. I can imagine some vague representation on a physicist's blackboard, employing symbols I'm not familiar with - it's all Greek to me. Now, let's imagine infinity - got that? I bet more people with upper high school and graduate level maths find it easier to grasp the notion of infinity than they do some complex expression representing something in physics. It's quite straightforward to imagine clearly some simpler things, and relatively easy to grasp something of the notion of a concept that is very extensive - in size, number, power, informational capacity - but harder to imagine things that are just more complex than we are used to. It's easier to imagine God, as represented by some very vague notions of extreme extensions to simple human properties, than it is to imagine in detail more complex processes or organisms than those with which we are currently familiar.

Dualism is similar to some extent. We find it difficult to imagine where the boundary lies - or how the continuum flows - from the physical bodies that we have come to be familiar with to the thoughts that we are also familiar with. Because we can't imagine this, we invent a separation - dualism. It's a failure of our current capacity to understand. So, are physicalists so advanced that they can conceive of it, while the poor dumb dualists can't? No, of course not. What is most likely at work here is an ingrained view that's difficult to shake off. I would guess, though I have nothing to support this, that all physicalists have had dualist interpretations at one time - simply because they are easier to imagine. This is an imagination gap. If the gap is narrow we can build a bridge easily. If the gap is wide we prefer to fly across, skipping whatever is missing - going from what we are familiar with to some extreme concept based on the familiar properties. It's difficult to imagine what we don't know. This imagination gap should be familiar to most students, particularly the more advanced your studies.* You can read the fear of the apparent consequences in the writings of theists.
We are dealing with a 'duality of the gaps' that is similar to the 'God of the gaps'. "We are not arguing that there is some gap in an otherwise seamless naturalist view of reality." Oh yes you are. "This is an argument from the fundamental character of reality and what kinds of things exist (purposes, feelings...)" Yes, purpose and feelings exist, but not as some distinct dualist entity. They are properties of the organism that is experiencing them. Particularly feelings and emotions - simple hormonal, biological, chemical, electrical reactions. 'Purpose' is apparent, not real in the sense of independent free will.

*I remember very clearly the earliest experience of this, on a very limited scale. In primary school I could do 'short division' but I couldn't fathom 'long division' - it was very frustrating, and even frightening - I feared I was really dumb! Then a neighbour's son, a year older than me, spent some time going through examples. I remember very clearly when the penny dropped. A spiritual revelation? Later, at university, I struggled with some concepts of advanced chemistry - it was an electronics course and I naively hadn't expected to be learning chemistry, and I'd skipped chemistry at high school, so I was ill-equipped for some of this stuff. I remember the anguish in class, seeing all the other students nodding knowingly while I was thinking "what the hell is he talking about". Recognising the response, I went off to the library and made sure I caught up. Never be afraid of what you don't know! If you need to know it, put in sufficient effort so that your brain and its neurological patterns become familiar with it - eventually you'll see the light - alleluia!
e, π and the Exponential Function

Throughout mathematics and its applications, we often encounter the numbers e and π. But what do they actually mean, what makes them so prevalent, and how are they related?

Both numbers are deeply intertwined with the exponential function, denoted exp, which can be described simply as “the function which is its own derivative”. (Or, in slightly less simple but more accurate terms – exp is the only function f:\mathbb{R}\to\mathbb{R} which is differentiable everywhere and satisfies f'(x)=f(x) for every x\in\mathbb{R}, and f(0)=1. You can also use \mathbb{C} instead of \mathbb{R}.)

Another way to say this is that exp is a solution to the simple differential equation y'=y. As such, it is a building block for solutions to differential equations of all kinds. Differential equations describe how the change in some quantity relates to the quantity itself. They describe how the universe works at all levels – from the most microscopic and fundamental, such as

• Electromagnetism (Maxwell's equations),
• Gravity (Einstein's field equations),
• Quantum mechanics (the Schrödinger equation),

to the macroscopic –

• The motion of springs, pendulums, projectiles and planets,
• Waves – be it sea waves, sound waves or radio waves,
• Electronic circuits,
• Radioactive decay,
• Structural integrity of buildings,
• Rockets and space launches,
• The growth of populations, be it humans, animals, bacteria in a petri dish, viruses in a human host, or people sick with COVID-19,
• Financial dynamics, like money in a bank account, stock prices, the revenues of a company, or the exchange rate of currencies such as Bitcoin,
• Adoption of new technologies,
• Social phenomena, like memes and viral videos,
• And much more – including purely abstract mathematical concepts which have no direct ties to phenomena in the physical universe.

So it is no surprise that the function which is the building block for solving differential equations comes up very often. In fact, some dub it “the most important function in mathematics”.

Because the function is so important, we want to know more about it. One question of interest is – what is the value of \exp(1)? This is useful, because one of the properties of exp (which we can prove using the definition we started with) is that \exp(x+y)=\exp(x)\exp(y). Using this, we can show that \exp(n)=\exp(1)^n for every integer n (where, for positive n, taking the power is just repeated multiplication). In other words, knowing the value of the function at 1 allows us to find its value for every integer. So we give the value of \exp(1) a name. The name we choose is e. That's what e is – the value of the exponential function at 1.

The importance of e can be understood by understanding the importance of the exponential function, which itself can be understood by understanding the importance of differential equations. That understanding can come from some experience with their applications; the examples I gave above might help.

In fact, if we extend a bit the definition of taking a power, we will find that for every real number x, we have \exp(x) = e^x, not just for integer x. This is why the exponential function is often written e^x instead of \exp(x).

The exponential function is also where π comes from. If we look at it as a complex function, we find that it is periodic – there is a specific number p\in\mathbb{C} (take the smallest such number) such that for every z\in\mathbb{C}, we have \exp(z+p)=\exp(z).
This number happens to be purely imaginary, so if we divide it by 2i, we get a real number. This real number is what we call π. This way of looking at π – as the period of the most important function in mathematics (divided by 2i) – is much more fundamental, and better explains why π comes up so often, than definitions based on the girth of arbitrary geometric shapes we might scribble.

It's also noteworthy that the exponential function is reminiscent of the blind men and the elephant. It behaves differently and seems to be a different thing if we look at it from different perspectives. If we look at the positive real axis, it is rapidly growing. On the negative real axis, it is rapidly shrinking. On the imaginary axis it is neither growing nor shrinking – it is periodic, repeating the same values in a cycle. Which nature of the exponential function comes to light depends on the specific differential equation we use it to solve. That's why some of the applications I mentioned exhibit growth or decay, and some exhibit rotation and cycles. In fact, the well-known periodic functions sin and cos can be seen as projections of what the exponential function does along the imaginary axis.

We've defined e as the value of the function at 1 – a real number, and we've defined π using the period of the function along the imaginary numbers. It should come as no surprise, then, that e often comes up in applications dealing with growth and decay, and π often comes up in applications dealing with cycles and circularity. They are two sides of the same coin.
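A quick numerical illustration of both definitions (a minimal Python sketch; it uses only the power series of exp, which follows from f'=f and f(0)=1): the value at 1 gives e, and scanning along the imaginary axis for the point where the function first returns to 1 gives the period, whose half is π.

```python
def my_exp(z, terms=40):
    """Power series for exp, derived from f'(z) = f(z), f(0) = 1."""
    total, term = 0.0 + 0j, 1.0 + 0j
    for n in range(1, terms + 1):
        total += term
        term *= z / n
    return total

# e is, by definition, the value of the exponential function at 1.
e = my_exp(1).real
print("e  =", e)                      # 2.718281828...

# The period along the imaginary axis: find the t > 0 minimizing
# |exp(i*t) - 1| by a brute-force scan over t in (1, 8).
ts = [k * 0.0001 for k in range(10000, 80000)]
t_star = min(ts, key=lambda t: abs(my_exp(1j * t) - 1))
print("pi =", t_star / 2)             # period / 2 = 3.1415...
```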
Section 13.6: Angular Solutions of the Schrödinger Equation

Most potential energy functions in three dimensions are not rectangular in form. In fact, they are most often expressed in spherical coordinates (due to a spherical symmetry) and occasionally in cylindrical coordinates (due to a cylindrical symmetry). We begin by considering the generalization of the time-independent Schrödinger equation to three-dimensional spherical coordinates, which is1

-\frac{\hbar^2}{2\mu}\left[\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right)+\frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial}{\partial\theta}\right)+\frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\varphi^2}\right]\psi(\mathbf{r})+V(\mathbf{r})\,\psi(\mathbf{r})=E\,\psi(\mathbf{r}) .   (13.19)

The probability per unit volume, the probability density, is \psi^*(\mathbf{r})\psi(\mathbf{r}), and therefore we require \int\psi^*(\mathbf{r})\psi(\mathbf{r})\,d^3r=1 (where d^3r = dV = r^2\sin\theta\,dr\,d\theta\,d\varphi) to maintain a probabilistic interpretation of the energy eigenfunction in three dimensions.

As in the two-dimensional case, we use separation of variables, but now using \psi(\mathbf{r}) = R(r)\,Y(\theta,\varphi), i.e., we separate the radial part from the angular part. This substitution yields

\left[\frac{1}{R}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right)+\frac{1}{Y\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial Y}{\partial\theta}\right)+\frac{1}{Y\sin^2\theta}\frac{\partial^2 Y}{\partial\varphi^2}\right]-\frac{2\mu r^2}{\hbar^2}\left[V(r)-E\right]=0 ,   (13.20)

as long as V(\mathbf{r}) = V(r) only. Note that each term involves either r or θ and φ. We can separate these equations using the technique of separation of variables to give

\frac{1}{R}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right)-\frac{2\mu r^2}{\hbar^2}\left[V(r)-E\right]=l(l+1) ,   (13.21)

\frac{1}{Y\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial Y}{\partial\theta}\right)+\frac{1}{Y\sin^2\theta}\frac{\partial^2 Y}{\partial\varphi^2}=-l(l+1) ,   (13.22)

for the radial and angular parts, respectively. The constant l(l+1) is the separation constant that allows us to split one differential equation into two. We can do so because the only way for the preceding equation to be true for all r, θ, and φ is for the angular part and the radial part to each equal a constant, ±l(l+1). Despite the seemingly odd form of the separation constant, it is completely general and can be made to equal any complex number.

For the angular piece, we can again separate variables using the substitution Y(\theta,\varphi) = \Theta(\theta)\,\Phi(\varphi). This gives

\frac{\sin\theta}{\Theta}\frac{d}{d\theta}\left(\sin\theta\,\frac{d\Theta}{d\theta}\right)+l(l+1)\sin^2\theta=m^2 ,   (13.23)

\frac{1}{\Phi}\frac{d^2\Phi}{d\varphi^2}=-m^2 ,   (13.24)

where we have written the separation constant as ±m², again without any loss of generality.

The Φ(φ) part of the angular equation is a differential equation, d^2\Phi/d\varphi^2 = -m^2\Phi, that we have solved before. We get as its unnormalized solution

\Phi_m(\varphi)=\exp(im\varphi) ,   (13.25)

where m is the separation constant, which can be both positive and negative. Since the angle \varphi\in[0,2\pi], we have \Phi_m(\varphi)=\Phi_m(\varphi+2\pi). As in the ring problem in Section 13.5, the requirement that Φm(φ) be single-valued means that m = 0, ±1, ±2, ±3, .... We show these solutions in Animation 1.

The Θ(θ) part of the angular equation is harder to solve. It has the unnormalized solutions

\Theta_l^m(\theta)=A\,P_l^m(\cos\theta) ,

where the P_l^m are the associated Legendre polynomials,

P_l^m(x)=(1-x^2)^{|m|/2}\left(\frac{d}{dx}\right)^{|m|}P_l(x) ,

calculated from the Legendre polynomials

P_l(x)=\frac{1}{2^l\,l!}\left(\frac{d}{dx}\right)^l(x^2-1)^l .      (Rodrigues' formula)

The first few Legendre polynomials are

P_0(x)=1 ,   P_1(x)=x ,   and   P_2(x)=\tfrac{1}{2}(3x^2-1) ,

or, in terms of cos θ,

P_0=1 ,   P_1=\cos\theta ,   and   P_2=\tfrac{1}{2}(3\cos^2\theta-1) .

We can also write the P_l^m, using the above formulas, as

P_0^0=1 ,   P_1^1=\sin\theta ,   P_1^0=\cos\theta ,

P_2^0=\tfrac{1}{2}(3\cos^2\theta-1) ,   P_2^1=3\sin\theta\cos\theta ,   P_2^2=3\sin^2\theta .

We notice that l must be a non-negative integer for Rodrigues' formula to be valid. In addition, |m| ≤ l, since P_l^m = 0 for |m| > l.
(For |m| > l, the order of the derivative is larger than the degree of the polynomial, and hence the result is zero.) We also note that there must be 2l+1 values of m for a given value of l. Polar plots (in the zx plane) of the associated Legendre polynomials are shown in Animation 2. A positive angle θ is defined to be the angle down from the z axis toward the positive x axis. The length of a vector from the origin to the wave function, P_l^m, is the magnitude of the wave function at that angle. You may vary l and m to see how P_l^m varies.

We normalize \Theta_l^m(\theta)\Phi_m(\varphi) by normalizing the angular part separately from the radial part (which we have yet to consider):

\int_0^{2\pi}\!\!\int_0^{\pi}Y_l^{m*}(\theta,\varphi)\,Y_l^m(\theta,\varphi)\,\sin\theta\,d\theta\,d\varphi=1 ,

where Y_l^m(\theta,\varphi)=\Theta_l^m(\theta)\Phi_m(\varphi). When the Y_l^m(\theta,\varphi) are normalized, they are called the spherical harmonics. The first few are

Y_0^0(\theta,\varphi)=\left(\frac{1}{4\pi}\right)^{1/2} ,

Y_1^{\pm1}(\theta,\varphi)=\mp\left(\frac{3}{8\pi}\right)^{1/2}\sin\theta\,e^{\pm i\varphi} ,   Y_1^0(\theta,\varphi)=\left(\frac{3}{4\pi}\right)^{1/2}\cos\theta ,

and in general, for m > 0,

Y_l^m(\theta,\varphi)=(-1)^m\left[\frac{(2l+1)(l-m)!}{4\pi\,(l+m)!}\right]^{1/2}e^{im\varphi}\,P_l^m(\cos\theta) ,

and Y_l^{-m}(\theta,\varphi)=(-1)^m\,Y_l^{m*}(\theta,\varphi) for m < 0. When we represent the spherical harmonics this way, they are automatically orthogonal:

\int Y_l^{m*}(\theta,\varphi)\,Y_{l'}^{m'}(\theta,\varphi)\,\sin\theta\,d\theta\,d\varphi=\delta_{mm'}\,\delta_{ll'} .

1To avoid future confusion, we hereafter use μ for mass, and reserve m for the azimuthal (or magnetic) quantum number.

2Classically, angular momentum is L = r × p. We can write L using quantum-mechanical operators in rectangular coordinates as L_x = yp_z - zp_y, L_y = zp_x - xp_z, and L_z = xp_y - yp_x. If we write L^2 and L_z in spherical coordinates, we find

L^2=-\hbar^2\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial}{\partial\theta}\right)+\frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\varphi^2}\right] ,   L_z=-i\hbar\,\frac{\partial}{\partial\varphi} .

We note that L^2\,Y_l^m=l(l+1)\hbar^2\,Y_l^m and L_z\,Y_l^m=m\hbar\,Y_l^m; the spherical harmonics, the Y_l^m, are eigenstates of L^2 and L_z.
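As a quick numerical cross-check of these formulas, SciPy implements the spherical harmonics directly (a sketch, assuming SciPy is installed; note that scipy.special.sph_harm takes its arguments in the order (m, l, azimuthal angle, polar angle)):

```python
import numpy as np
from scipy.special import sph_harm

# Compare scipy's Y_1^0 to the closed form (3/(4*pi))**0.5 * cos(theta).
theta = 0.7          # polar angle
phi = 1.3            # azimuthal angle
lhs = sph_harm(0, 1, phi, theta)            # args: m, l, azimuth, polar
rhs = np.sqrt(3 / (4 * np.pi)) * np.cos(theta)
print(lhs, rhs)      # the two values should agree

# Check the normalization of Y_2^1 by brute-force integration over the sphere.
th = np.linspace(0, np.pi, 400)
ph = np.linspace(0, 2 * np.pi, 800)
TH, PH = np.meshgrid(th, ph, indexing="ij")
Y = sph_harm(1, 2, PH, TH)
integrand = np.abs(Y) ** 2 * np.sin(TH)
norm = np.trapz(np.trapz(integrand, ph, axis=1), th)
print(norm)          # should be close to 1
```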
Energy input amplifies nonlinear dynamics of deep water wave groups

Thomas A. A. Adcock, Paul Taylor

Research output: Contribution to journal › Article › peer-review. 5 Citations (Scopus).

A possible physical mechanism for the formation of freak waves on the open ocean is the localized interaction between wind and waves. Such interactions are highly complex and are currently poorly understood at the scale of an individual wave. Rather than attempt to model the detailed transfer of energy from wind to waves, we simply consider the modifications to wave group dynamics of adding energy to the system. We carried out numerical experiments on isolated wave groups using an excited version of the nonlinear Schrödinger equation. Energy input enhances any soliton-like structures relative to regular waves for unidirectional propagation. For directionally spread wave groups, energy input enhances the nonlinear changes to the shapes of focused wave groups: groups contract in the mean wave direction and expand in the lateral direction to a significantly greater degree than observed for non-excited wave groups.

Original language: English
Pages (from-to): 8-12
Number of pages: 5
Journal: International Journal of Offshore and Polar Engineering
Issue number: 1
Publication status: Published - Mar 2011
Externally published: Yes
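The abstract does not spell out the paper's excitation term, but the workhorse behind such experiments - the cubic nonlinear Schrödinger equation - can be integrated with a standard split-step Fourier scheme. A minimal one-dimensional sketch (the linear gain `gamma` is a hypothetical stand-in for the paper's energy-input term; the paper itself also treats directionally spread groups):

```python
import numpy as np

# Focusing cubic NLS with a small linear gain gamma:
#     i u_t + u_xx + 2|u|^2 u = i*gamma*u
N, L = 512, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # angular wavenumbers
dt, steps, gamma = 1e-3, 5000, 0.01

u = 1.0 / np.cosh(x)          # a sech wave group (the NLS soliton shape)

for _ in range(steps):
    # half-step of the nonlinear phase rotation
    u *= np.exp(1j * dt * np.abs(u) ** 2)
    # full step of dispersion + gain, done exactly in Fourier space
    u = np.fft.ifft(np.exp((-1j * k**2 + gamma) * dt) * np.fft.fft(u))
    # second half-step of the nonlinearity
    u *= np.exp(1j * dt * np.abs(u) ** 2)

print("energy grew by factor",
      np.sum(np.abs(u) ** 2) / np.sum(1 / np.cosh(x) ** 2))
```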
The Physics of Information Creation

Information physics shows that the core creative process in the universe involves quantum mechanics and thermodynamics. To understand information creation, information physics provides new insights into the puzzling "problem of measurement" and the mysterious "collapse of the wave function" in quantum mechanics. It results in a new information interpretation of quantum mechanics that disentangles the Einstein-Podolsky-Rosen paradox and explains the origins of information structures in the universe.
Information physics also probes deeply into the second law of thermodynamics to establish the irreversible increase of entropy on a quantum mechanical basis, something that could not be shown by classical statistical mechanics or even quantum statistical physics.

Although "information physics" is a new "interpretation" of quantum mechanics, it is not an attempt to alter standard quantum mechanics, for example by extending it to theories such as "hidden variables" to restore determinism, or by adding terms to the Schrödinger equation to force a collapse. Information physics investigates the quantum mechanical and thermodynamic implications of cosmic information structures, especially those that were created before the existence of human observers. It shows that no "conscious observers" are required, as with the Copenhagen Interpretation or the work of John von Neumann or Eugene Wigner. The creation of new information involves two processes:

• Step 1: A quantum process - the "collapse of a wave function."
• Step 2: A thermodynamic process - the radiation of entropy away from the new information structure, which stabilizes it.

Quantum Mechanics

In classical mechanics, the material universe is thought to be made up of tiny particles whose motions are completely determined by forces that act between the particles, forces such as gravitation, electrical attractions and repulsions, etc. The equations that describe those motions, Newton's laws of motion, were for many centuries thought to be perfect and sufficient to predict the future of any mechanical system. They provided support for many philosophical ideas about determinism.

In classical electrodynamics, electromagnetic radiation (light, radio) was known to have wave properties such as interference. When the crest of one wave meets the trough of another, the two waves cancel one another.

In quantum mechanics, radiation is found to have some particle-like behavior. Energy comes in discrete, physically localized packages. Max Planck in 1900 made the famous assumption that the energy was proportional to the frequency of radiation ν.

E = hν

For Planck, this assumption was just a heuristic mathematical device that allowed him to apply Ludwig Boltzmann's work on the statistical mechanics and kinetic theory of gases. Boltzmann had shown in the 1870's that the increase in entropy (the second law) could be explained if gases were made up of enormous numbers of particles. Planck applied Boltzmann's statistics of many particles to radiation and derived the distribution of radiation at different frequencies (or wavelengths), just as James Clerk Maxwell and Boltzmann had derived the distribution of velocities (or energies) of the gas particles.

Note the mathematical similarity of Planck's radiation distribution law (photons) and the Maxwell-Boltzmann velocity distribution (molecules). Both curves have a power-law increase on one side to a maximum and an exponential decrease on the other side of the maximum. The molecular velocity curves cross one another because the total number of molecules is the same. With increasing temperature T, the number of photons increases at all wavelengths.

But Planck did not actually believe that radiation came in discrete particles, at least until a dozen years later. In the meantime, Albert Einstein's 1905 paper on the photoelectric effect hypothesized that light comes in discrete particles, subsequently called "photons," analogous to electrons. Planck was not happy about the idea of light particles, because his use of Boltzmann's statistics implied that chance was real. Boltzmann himself had qualms about the reality of chance.
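Planck's distribution law itself is easy to evaluate numerically. As a small sketch (our own illustration, not part of the original article), the following Python snippet checks the claim above that raising the temperature increases the radiation at every wavelength:

```python
# A small sketch (not from the article) of Planck's law for the spectral
# energy density of radiation, u(lambda, T) = (8*pi*h*c / lambda^5) / (e^x - 1)
# with x = h*c / (lambda * k * T). Wavelengths below are chosen arbitrarily.
import math

h = 6.62607015e-34      # Planck's constant, J s
c = 2.99792458e8        # speed of light, m/s
k = 1.380649e-23        # Boltzmann's constant, J/K

def planck_u(wavelength, T):
    """Planck spectral energy density (J per m^3 per m of wavelength)."""
    x = h * c / (wavelength * k * T)
    return (8 * math.pi * h * c / wavelength**5) / math.expm1(x)

for wl in (0.5e-6, 1e-6, 10e-6):        # 0.5, 1 and 10 micrometres
    print(f"lambda = {wl * 1e6:4.1f} um: "
          f"u(3000 K) = {planck_u(wl, 3000):.3e}, "
          f"u(6000 K) = {planck_u(wl, 6000):.3e}")

# At every wavelength the 6000 K value exceeds the 3000 K value,
# as the text notes for the photon distribution.
```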
Although Einstein also did not like the idea of chancy statistics, he did believe that energy came in packages of discrete "quanta." It was Einstein, not Planck, who quantized mechanics and electrodynamics. Nevertheless, it was for the introduction of the quantum of action h that Planck was awarded the Nobel prize in 1918.

Louis de Broglie argued that if photons, with their known wavelike properties, could be described as particles, electrons as particles might show wavelike properties, with a wavelength λ inversely proportional to their momentum p = mev:

λ = h/p

Experiments confirmed de Broglie's assumption and led Erwin Schrödinger to derive a "wave equation" to describe the motion of de Broglie's waves. Schrödinger's equation replaces the classical Newtonian equations of motion. Note that Schrödinger's equation describes the motion of only the wave aspect, not the particle aspect, and as such it implies interference. Note also that it is as fully deterministic an equation of motion as Newton's equations.

Schrödinger attempted to interpret his "wave function" for the electron as a probability density for electrical charge, but charge density would be positive everywhere and unable to interfere with itself. Max Born shocked the world of physics by suggesting that the absolute value of the wave function ψ squared (|ψ|²) could be interpreted as the probability of finding the electron in various position and momentum states - if a measurement is made. This allows the probability amplitude ψ to interfere with itself, producing highly non-intuitive phenomena such as the two-slit experiment. Despite the probability amplitude going through two slits and interfering with itself, experimenters never find parts of electrons. They are always found whole.

In 1932 John von Neumann explained that two fundamentally different processes are going on in quantum mechanics.

1. A non-causal process, in which the measured electron winds up randomly in one of the possible physical states (eigenstates) of the measuring apparatus plus electron. The probability for each eigenstate is given by the square of the coefficients cn of the expansion of the original system state (wave function ψ) in an infinite set of wave functions φ that represent the eigenfunctions of the measuring apparatus plus electron:

cn = < φn | ψ >

This is as close as we get to a description of the motion of the particle aspect of a quantum system. According to von Neumann, the particle simply shows up somewhere as a result of a measurement. Information physics says it shows up whenever a new stable information structure is created.

2. A causal process, in which the electron wave function ψ evolves deterministically according to Schrödinger's equation of motion for the wavelike aspect. This evolution describes the motion of the probability amplitude wave ψ between measurements:

(ih/2π) ∂ψ/∂t = Hψ

Von Neumann claimed there is another major difference between these two processes. Process 1 is thermodynamically irreversible. Process 2 is reversible. This confirms the fundamental connection between quantum mechanics and thermodynamics that is explainable by information physics. Information physics establishes that process 1 may create information. Process 2 is information preserving.

Collapse of the Wave Function

Physicists calculate the deterministic evolution of the Schrödinger wave function in time as systems interact or collide. At some point, they make the ad hoc assumption that the wave function "collapses."
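To make von Neumann's process 1 concrete, here is a toy Python sketch (our own illustration, not from the article; the three expansion coefficients are made up). The probabilities are the squared coefficients |cn|², and a "measurement" selects one eigenstate at random with those probabilities:

```python
# Toy illustration of process 1: random selection among eigenstates with
# probabilities given by the squared expansion coefficients c_n = <phi_n|psi>.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coefficients, chosen so that sum(|c_n|^2) = 1 exactly.
c = np.array([0.6, 0.64, 0.48], dtype=complex)
c /= np.linalg.norm(c)               # enforce normalization <psi|psi> = 1

probs = np.abs(c) ** 2               # Born rule: P(n) = |c_n|^2
print("outcome probabilities:", probs.round(4))

# "Measurement": the system winds up randomly in one eigenstate.
outcome = rng.choice(len(c), p=probs)
print("this run collapsed to eigenstate", outcome)

# Repeating the experiment many times reproduces the |c_n|^2 statistics,
# which is all that quantum mechanics predicts about process 1.
counts = np.bincount(rng.choice(len(c), p=probs, size=100_000), minlength=3)
print("empirical frequencies:", (counts / counts.sum()).round(4))
```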
The collapse produces a set of probabilities of finding the resulting combined system in its various eigenstates. Although the collapse appears to be a random and ad hoc addition to the deterministic formalism of the Schrödinger equation, it is very important to note that the experimental accuracy of quantum mechanical predictions is unparalleled in physics, providing the ultimate justification for this theoretical kluge.

Moreover, without wave functions collapsing, no new information can come into the universe. Nothing unpredictable would ever emerge. Determinism is "information-preserving." All the information we have today would have to have already existed in the original fireball at the universe's origin.

The "Problem" of Measurement

Quantum measurement (the irreducibly random process of wave function collapse) is not a part of the mathematical formalism of wave function time evolution (the Schrödinger equation of motion is a perfectly deterministic process). The hypothesized collapse is an ad hoc heuristic description and method of calculation that predicts the probabilities of what will happen when an observer makes a measurement.

In many standard discussions of quantum mechanics, and most popular treatments, it is said that we need the consciousness of a physicist to collapse the wave function. Eugene Wigner and John Wheeler sometimes describe the observer as making up the "mind of the universe." John Bell sardonically asked whether the observer needs a Ph.D. Von Neumann contributed a lot to this confusion by claiming that the location of a "cut" (Schnitt) between the microscopic system and the macroscopic measurement system could be anywhere - including inside an observer's brain. Information physics will locate the cut (outside the brain).

Measurement requires the interaction of something macroscopic, assumed to be large and adequately determined. In physics experiments, this is the observing apparatus. But in general, measurement does not require a conscious observer. It does require information creation, or there will be nothing to observe. In our discussion of Schrödinger's Cat, the cat can be its own observer.

The Boundary between the Classical and Quantum Worlds

Some scientists (Werner Heisenberg, John von Neumann, Eugene Wigner and John Bell, for example) have argued that in the absence of a conscious observer, or some "cut" between the microscopic and macroscopic world, the evolution of the quantum system ψ and the macroscopic measuring apparatus A would be described deterministically by Schrödinger's equation of motion for the wave function | ψ + A >, with the Hamiltonian energy operator H:

(ih/2π) ∂/∂t | ψ + A > = H | ψ + A >

Our quantum mechanical analysis of the measurement apparatus in the above case allows us to locate the "cut" or "Schnitt" between the microscopic and macroscopic world at those components of the "adequately classical and deterministic" apparatus that put the apparatus in an irreversible stable state providing new information to the observer. John Bell drew a diagram to show the various possible locations for what he called the "shifty split." Information physics shows us that the correct location for the boundary is the first of Bell's possibilities.

The second law of thermodynamics says that the entropy (or disorder) of a closed physical system increases until it reaches a maximum, the state of thermodynamic equilibrium. It requires that the entropy of the universe is now and has always been increasing. (The first law is that energy is conserved.)
This established fact of increasing entropy has led many scientists and philosophers to assume that the universe we have is running down. They think that means the universe began in a very high state of information, since the second law requires that any organization or order is susceptible to decay. The information that remains today, in their view, has always been here. This fits nicely with the idea of a deterministic universe. There is nothing new under the sun. Physical determinism is "information-preserving."

Creation of information structures means that in parts of the universe the local entropy is actually going down. Reduction of entropy locally is always accompanied by radiation of entropy away from the local structures to distant parts of the universe, into the night sky for example. Since the total entropy in the universe always increases, the amount of entropy radiated away always exceeds (often by many times) the local reduction in entropy, which mathematically equals the increase in information.

"Ergodic" Processes

We will describe processes that create information structures, reducing the entropy locally, as "ergodic." This is a new use for a term from statistical mechanics that describes a hypothetical property of classical mechanical gases. See the Ergodic Hypothesis. Ergodic processes (in our new sense of the word) are those that appear to resist the second law of thermodynamics because of a local increase in information or "negative entropy" (Erwin Schrödinger's term). But any local decrease in entropy is more than compensated for by increases elsewhere, satisfying the second law. Normal entropy-increasing processes we will call "entropic". Without violating the inviolable second law of thermodynamics overall, ergodic processes reduce the entropy locally, producing those pockets of cosmos and negative entropy (order and information-rich structures) that are the principal objects in the universe and in life on earth.

Entropy and Classical Mechanics

Ludwig Boltzmann attempted in the 1870's to prove Rudolf Clausius' second law of thermodynamics, namely that the entropy of a closed system always increases to a maximum and then remains in thermal equilibrium. Clausius predicted that the universe would end with a "heat death" because of the second law. Boltzmann formulated a mathematical quantity H for a system of n ideal gas particles, showing that it had the property dH/dt ≤ 0, that is, H always decreased with time. He identified his H as the opposite of Rudolf Clausius' entropy S.

In 1850 Clausius had formulated the second law of thermodynamics. In 1857 he showed that for a typical gas like air at standard temperatures and pressures, the gas particles spend most of their time traveling in straight lines between collisions with the wall of a containing vessel or with other gas particles. He defined the "mean free path" of a particle between collisions. Clausius and essentially all physicists since have assumed that gas particles can be treated as structureless "billiard balls" undergoing "elastic" collisions. Elastic means no motion energy is lost to internal friction.

A few years before Clausius first defined the entropy mathematically and named it in 1865, James Clerk Maxwell had determined the distribution of velocities of gas particles (Clausius for simplicity had assumed that all particles moved at the average speed, with (1/2)mv² = (3/2)kT). Maxwell's derivation was very simple. He assumed the velocities in the x, y, and z directions were independent.
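Maxwell's independence assumption is easy to demonstrate numerically. In this sketch (our own illustration, with assumed nitrogen-like values at room temperature, not numbers from the text), sampling the x, y, and z velocity components as independent Gaussians reproduces the Maxwell-Boltzmann mean speed:

```python
# A hedged sketch of Maxwell's argument: if the x, y, z velocity components
# are independent Gaussians with variance kT/m each, the resulting speeds
# follow the Maxwell-Boltzmann distribution.
import numpy as np

k_B = 1.380649e-23        # Boltzmann's constant, J/K
T = 300.0                 # temperature, K (assumed)
m = 4.65e-26              # mass of an N2-like molecule, kg (approximate)

rng = np.random.default_rng(1)
sigma = np.sqrt(k_B * T / m)                     # per-component spread
v = rng.normal(0.0, sigma, size=(1_000_000, 3))  # independent components
speeds = np.linalg.norm(v, axis=1)

# Mean speed from the samples vs. the analytic Maxwell-Boltzmann value
# <v> = sqrt(8 kT / (pi m)); the two should agree closely (~475 m/s).
print("sampled mean speed :", speeds.mean())
print("analytic mean speed:", np.sqrt(8 * k_B * T / (np.pi * m)))
```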
Boltzmann improved on Maxwell's statistical derivation by equating the number of particles entering a given range of velocities and positions to the number leaving the same volume in 6n-dimensional phase space. This is a necessary condition for the gas to be in equilibrium. Boltzmann then used Newtonian physics to get the same result as Maxwell, which is thus called the Maxwell-Boltzmann distribution.

Boltzmann's first derivation of his H-theorem (1872) was based on the same classical mechanical analysis he had used to derive Maxwell's distribution function. It was an analytical mathematical consequence of Newton's laws of motion applied to the particles of a gas. But it ran into an immediate objection: the hypothetical and counterfactual idea of time reversibility. If time were reversed, the entropy would simply decrease. Since the fundamental Newtonian equations of motion are time reversible, this appears to be a paradox. How could the irreversible increase of the macroscopic entropy result from microscopic physical laws that are time reversible? Lord Kelvin (William Thomson) was the first to point out the time asymmetry in macroscopic processes, but the criticism of Boltzmann's H-theorem is associated with his lifelong friend Joseph Loschmidt. Boltzmann immediately agreed with Loschmidt that the possibility of decreasing entropy could not be ruled out if the classical motion paths were reversed.

Boltzmann then reformulated his H-theorem (1877). He analyzed a gas into "microstates" of the individual gas particle positions and velocities. For any "macrostate" consistent with certain macroscopic variables like volume, pressure, and temperature, there could be many microstates corresponding to different locations and speeds for the individual particles. Any individual microstate of the system was intrinsically as probable as any other specific microstate, he said. But the number of microstates consistent with the disorderly or uniform distribution in the equilibrium case of maximum entropy simply overwhelms the number of microstates consistent with an orderly initial distribution.

About twenty years later, Boltzmann's revised argument that entropy statistically increased ran into another criticism, this time not so counterfactual. This is the recurrence objection. Given enough time, any system could return to its starting state, which implies that the entropy must at some point decrease. These reversibility and recurrence objections are still prominent in the physics literature.

The recurrence idea has a long intellectual history. Ancient Babylonian astronomers thought the known planets would, given enough time, return to any given position and thus begin again what they called a "great cycle," estimated by some at 36,000 years. Their belief in an astrological determinism suggested that all events in the world would also recur. Friedrich Nietzsche made this idea famous in the nineteenth century, at the same time as Boltzmann's hypothesis was being debated, as the "eternal return" in his Also Sprach Zarathustra.

The recurrence objection was first noted in the early 1890's by the French mathematician and physicist Henri Poincaré. In his work on the three-body problem, he noted that the configuration of three bodies returns arbitrarily close to its initial conditions after calculable times. Even for a handful of planets, the recurrence time is longer than the age of the universe, if the positions are specified precisely enough.
Poincaré then proposed that the presumed "heat death" of the universe predicted by the second law of thermodynamics could be avoided by "a little patience." Another mathematician, Ernst Zermelo, a young colleague of Max Planck in Berlin, is more famous for this recurrence paradox.

Boltzmann accepted the recurrence criticism. He calculated the extremely small probability that entropy would decrease noticeably, even for a gas with a very small number of particles (1000). He showed the time associated with such an event was 10^(10^10) years. But the objections in principle to his work continued, especially from those who thought the atomic hypothesis was wrong.

It is very important to understand that both Maxwell's original derivation of the velocity distribution and Boltzmann's H-theorem showing an entropy increase are only statistical or probabilistic arguments. Boltzmann's work was done twenty years before atoms were established as real and fifty years before the theory of quantum mechanics established that at the microscopic level all interactions of matter and energy are fundamentally and irreducibly statistical and probabilistic.

Entropy and Quantum Mechanics

A quantum mechanical analysis of the microscopic collisions of gas particles (these are usually molecules - or atoms in a noble gas) can provide revised analyses for the two problems of reversibility and recurrence. Note this requires more than quantum statistical mechanics. It needs the quantum kinetic theory of collisions in gases. There are great differences between Ideal, Classical, and Quantum Gases.

Boltzmann assumed that collisions would result in random distributions of velocities and positions, so that all the possible configurations would be realized in proportion to their number. He called this "molecular chaos." But if the path of a system of n particles in 6n-dimensional phase space should be closed and repeat itself after a short and finite time, during which the system occupies only a small fraction of the possible states, Boltzmann's assumptions would be wrong. What is needed is for collisions to completely randomize the directions of the particles, and this is just what the quantum theory of collisions can provide. Randomization of directions is the norm in some quantum phenomena, for example the absorption and re-emission of photons by atoms, as well as Raman scattering of photons.

In the deterministic evolution of the Schrödinger equation, just as in the classical path evolution of the Hamiltonian equations of motion, the time can be reversed and all the coherent information in the wave function will describe a particle that goes back exactly the way it came before the collision. But if, when two particles collide, the internal structure of one or both of the particles is changed, and particularly if the two particles form a temporary larger molecule (even a quasi-molecule in an unbound state), then the separating atoms or molecules lose the coherent wave functions that would be needed to allow time reversal back along the original path.

During the collision, one particle can transfer energy from one of its internal quantum states to the other particle. At room temperature, this will typically be a transition between rotational states that are populated. Another possibility is an exchange of energy with the background thermal radiation, which at room temperatures peaks at the frequencies of molecular rotational energy level differences.
Such a quantum event can be analyzed by assuming that a short-lived quasi-molecule is formed (the energy levels for such an unbound system form a continuum, so that almost any photon can cause a change of rotational state of the quasi-molecule). A short time later, the quasi-molecule dissociates into the two original particles, but in different energy states. We can describe the overall process as a quasi-measurement, because there is temporary information present about the new structure. This information is lost as the particles separate in random directions (consistent with conservation of energy, momentum, and angular momentum). The decoherence associated with this quasi-measurement means that if the post-collision wave functions were to be time reversed, the reverse collision would be very unlikely to send the particles back along their incoming trajectories.

Boltzmann's assumption of random occupancy of possible configurations is no longer necessary. Randomness in the form of "molecular chaos" is assured by quantum mechanics. The result is a statistical picture that shows that entropy would normally increase even if time could be reversed.

This does not rule out the kind of departures from equilibrium that occur in small groups of particles, as in Brownian motion, which Boltzmann anticipated long before Einstein's explanation. These fluctuations can be described as forming short-lived information structures, brief and localized regions of negative entropy, that get destroyed in subsequent interactions. Nor does it change the remote possibility of a recurrence of any particular initial microstate of the system. But it does prove that Poincaré was wrong about such a recurrence being periodic. Periodicity depends on the dynamical paths of particles being classical, deterministic, and thus time reversible. Since quantum mechanical paths are fundamentally indeterministic, recurrences are simply statistically improbable departures from equilibrium, like the fluctuations that cause Brownian motion.

Entropy is Lost Information

Entropy increase can be easily understood as the loss of information as a system moves from an initially ordered state to a final disordered state. Although the physical dimensions of thermodynamic entropy (joules/K) are not the same as (dimensionless) mathematical information, apart from units they share the same famous formula:

S = - ∑ pi ln pi

To see this very simply, let's consider the well-known example of a bottle of perfume in the corner of a room. We can represent the room as a grid of 64 squares. Suppose the air is filled with molecules moving randomly at room temperature (blue circles). In the lower left corner are the perfume molecules, which will be released when we open the bottle (when we start the demonstration).

What is the quantity of information we have about the perfume molecules? We know their location in the lower left square, a bit less than 1/64th of the container. The quantity of information is determined by the minimum number of yes/no questions it takes to locate them. The best questions are those that split the locations evenly (a binary tree). For example:

• Are they in the upper half of the container? No.
• Are they in the left half of the container? Yes.
• Are they in the upper half of the lower left quadrant? No.
• Are they in the left half of the lower left quadrant? Yes.
• Are they in the upper half of the lower left octant? No.
• Are they in the left half of the lower left octant? Yes.
Answers to these six optimized questions give us six bits of information for each molecule, locating it to 1/64th of the container. This is the amount of information that will be lost for each molecule if it is allowed to escape and diffuse fully into the room. The thermodynamic entropy increase is Boltzmann's constant k multiplied by ln 2 for each bit of information lost.

If the room had no air, the perfume would rapidly reach an equilibrium state, since the molecular velocity at room temperature is about 400 meters/second. Collisions with air molecules prevent the perfume from dissipating quickly. This lets us see the approach to equilibrium. When the perfume has diffused to one-sixteenth of the room, the entropy will have risen 2 bits for each molecule; to one-quarter of the room, four bits; etc.
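A minimal Python sketch (our own, following the numbers in the example above) tallies the location information and the corresponding thermodynamic entropy increase as the perfume spreads:

```python
# Bit-counting for the perfume example: locating a molecule in 1 of 64
# squares takes log2(64) = 6 yes/no questions, and each bit lost as the
# molecule spreads corresponds to k_B * ln(2) of thermodynamic entropy.
import math

k_B = 1.380649e-23                       # Boltzmann's constant, J/K

def bits_to_locate(accessible, total=64):
    """Bits of information in knowing the molecule is confined to
    `accessible` of `total` equal squares."""
    return math.log2(total / accessible)

for accessible in (1, 4, 16, 64):        # 1/64th, 1/16th, 1/4, whole room
    bits = bits_to_locate(accessible)
    dS = k_B * math.log(2) * (6 - bits)  # entropy gained since release
    print(f"{accessible:>2}/64 of room: {bits:.0f} bits of location info, "
          f"entropy increase {dS:.2e} J/K per molecule")
```

Running it reproduces the figures in the text: 2 bits of entropy per molecule at one-sixteenth of the room, 4 bits at one-quarter, and 6 bits when fully diffused.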
Quantum equation

What is Schrodinger's equation used for?
The Schrodinger equation plays the role of Newton's laws and conservation of energy in classical mechanics - i.e., it predicts the future behavior of a dynamic system. It is a wave equation in terms of the wavefunction which predicts analytically and precisely the probability of events or outcomes.

What is Z in quantum physics?
The hydrogen atom is the simplest atom to which to apply the Bohr model and the Schrodinger equation. Z is called the atomic number. The higher Z is, the more complex the atom becomes to study: the electrons interact both with each other and with the protons of the nucleus.

What is a quantum force?
Three different quantum field theories deal with three of the four fundamental forces by which matter interacts: electromagnetism, which explains how atoms hold together; the strong nuclear force, which explains the stability of the nucleus at the heart of the atom; and the weak nuclear force, which explains why some atoms undergo radioactive decay.

What is Schrodinger's law?
In Schrodinger's imaginary experiment, you place a cat in a box with a tiny bit of radioactive substance. Now, the decay of the radioactive substance is governed by the laws of quantum mechanics. This means that the atom starts in a combined state of "going to decay" and "not going to decay".

What is Schrodinger's model?
A powerful model of the atom was developed by Erwin Schrödinger in 1926. The Schrödinger model assumes that the electron is a wave and tries to describe the regions in space, or orbitals, where electrons are most likely to be found.

What is H in the Schrodinger equation?
The Schrödinger equation is written Hψ = Eψ, where H is an operator and E is the energy of the system. In the Schrödinger case, we would see a fog of negative charge. The fog is denser near the nucleus and thins out with distance from the nucleus.

What is quantum physics for beginners?
Quantum mechanics is a physical science dealing with the behaviour of matter and energy on the scale of atoms and subatomic particles and waves. Through a century of experimentation and applied science, quantum mechanical theory has proven to be very successful and practical.

Is the quantum realm real?
While quantum physics is amazing in many ways, it is very definitely not magic - the real "quantum realm" is in fact quite tightly constrained by rules that we understand very well. When we do calculations in quantum mechanics, the end result is always a probability distribution.

Is gravity a quantum force?
Quantum mechanics suggests everything is made of quanta, or packets of energy, that can behave like both a particle and a wave - for instance, quanta of light are called photons. Detecting gravitons, the hypothetical quanta of gravity, would prove gravity is quantum.

Is the cat alive or dead?
In simple terms, Schrödinger stated that if you place a cat and something that could kill the cat (a radioactive atom) in a box and sealed it, you would not know if the cat was dead or alive until you opened the box, so that until the box was opened, the cat was (in a sense) both "dead and alive".

Why is it called quantum?
The word quantum derives from the Latin, meaning "how great" or "how much". The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and subatomic systems which is today called quantum mechanics.
A Quick Introduction to Quantum Mechanics

You probably already have some notion of what quantum mechanics is: it describes things at a very small scale and it says that nature obeys weird and counterintuitive rules. Here we're going to give a brief rundown of the basics of quantum mechanics, introducing it in a way which is non-mathematical but mirrors how physicists like to think about it. This means we will introduce some important concepts, such as that of a quantum state, before we describe the weird quantum phenomena it is responsible for. If you've never encountered the topic before, it may well be worth reading through this page twice before moving on.

First let's discuss the scope of quantum physics. As you may know, it doesn't directly describe the familiar, everyday physics of the world around us. Physicists have a special term for this type of physics, which describes why balls fall to the ground due to gravity, how fluids swirl around containers or down rivers, and how light propagates as an electromagnetic wave: classical physics. A detailed course on classical physics is beyond the scope of this site—we're only concerned with how it differs from quantum physics. Our first key point is:

• Quantum physics (or quantum mechanics) is used by physicists to describe things that happen on the scale of atoms or smaller.

Of course balls, rivers and other things we usually describe with classical physics are made up of atoms, but there are so many atoms interacting in such a complex manner that it wouldn't be practical to model their behaviour with quantum mechanics (see the article on emergence for more details). Apart from the different scales on which we apply quantum and classical physics, another important difference is what it means to measure a classical versus a quantum system. You probably make classical measurements all the time: when you stand on the bathroom scales you measure your mass, and when you use a tape measure you're measuring the distance between two points. This brings us to our next key point, something you probably never consider when making such measurements:

• When we make a classical measurement, we can safely assume that our measurement doesn't have any effect on the system we're measuring. For example, measuring the width of a room obviously doesn't change anything about the room.

This might seem blindingly obvious, but it's important to establish it now, as it turns out making measurements in quantum mechanics is very different. Let's now establish one other thing about a classical system. System can mean basically anything we're attempting to describe with classical mechanics—a nice simple example is a ball rolling along a flat surface.

• At a particular instant in time, a classical system is completely described by the measurements we can take of the system at that instant in time.

For the example of a ball rolling on a flat surface, we need to measure its position on the surface, how fast it's moving up/down and left/right, its mass, and its size. Then we know everything about our system at a particular time. This set of measurements is called the state of the system. If we have a set of mathematical equations describing how the ball moves, we can feed in these quantities we've measured and work out what state it will be in at any future moment in time.

Quantum States

We say that the information about a quantum system is contained in a quantum state, just as we do above for a classical system and a classical state.
When we take measurements of a classical system, we're accessing information contained in the classical state which describes it. This is also true of making measurements on a quantum system. But here is the key difference—a distinction so important that if you understand this you're well on your way to having a better understanding of quantum mechanics than most people do.

• When we measure a quantum system, we can never access all the information contained in the quantum state.

This is a really weird idea—something which is always true of a classical system is never true for a quantum one. It also means that a quantum state is a physical thing, something more than just a set of measurements. It also leads us to ask: what result do we get when we measure a quantum system?

• When we measure a quantum system, we can have different possible outcomes. The quantum state contains information about all these possible outcomes, and when we make a measurement we will randomly access the information about just one of these outcomes. Each outcome is associated with a different set of measurable quantities—these are called the quantum numbers associated with a particular outcome.

• After we make a measurement of a quantum system and get a particular outcome, we can make another measurement. But this time, instead of getting a random outcome, we will always get the same outcome as before. By making a measurement, we have changed the quantum state of the system. We can associate this new quantum state with the outcome of the first measurement we made: such a state is called a pure quantum state.

• Our original quantum state was a mixture of different pure quantum states, each associated with a random probability of being the outcome of a measurement we took. But when we measure a pure quantum state, we always get the same outcome: and this outcome is the quantum number associated with this particular pure quantum state.

The way quantum states behave when measured can help us to understand another really important thing about quantum systems: what the term quantum actually means.

• When we have a quantum system in a pure quantum state, and change something about it (for example we could put more energy into our system, say by pointing a laser at it), we change its quantum state. But a different quantum state is associated with a new quantum number. So if we now make a new measurement of a certain quantity, we will find it has either 'jumped' to a new value, or stayed the same. This happens no matter how little we change the system.

For example, we could keep turning up the power of our laser from zero, and keep measuring the energy of the quantum system. To start with we will keep measuring the same value as we originally got, but when the laser gets powerful enough, we will suddenly find the energy of the quantum system has jumped up by a certain amount. In this case we would say we have excited our quantum system, and it's now represented by a new quantum state. We sometimes call these quantum states with different energies energy levels.

• Quantities we measure in a classical system are said to be continuous, because as we slowly add energy into a classical system, the quantities we measure slowly and continuously increase to match it. On the other hand, we say that quantities we can measure in a quantum system are quantised—a term that means we find things broken up into indivisible chunks rather than varying smoothly. In fact this is where the word quantum comes from.
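The measurement rules above are easy to caricature in code. The following toy Python sketch (our own illustration, not the site's; the class and the probabilities are invented) returns a random outcome on the first measurement and the same quantum number on every repeat:

```python
# Toy model of the measurement rules: the first measurement picks an
# outcome at random from the state's possible outcomes; afterwards the
# system is in a pure state and every repeat returns the same number.
import random

class QuantumState:
    def __init__(self, outcomes):
        # outcomes: mapping of quantum number -> probability (sums to 1)
        self.outcomes = outcomes
        self.pure = None                  # set once a measurement is made

    def measure(self):
        if self.pure is None:             # first measurement: random
            values = list(self.outcomes)
            weights = list(self.outcomes.values())
            self.pure = random.choices(values, weights)[0]
        return self.pure                  # repeats: same outcome

# A hypothetical two-outcome state (equal probabilities).
state = QuantumState({-0.5: 0.5, +0.5: 0.5})
print([state.measure() for _ in range(5)])   # same value five times
```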
Now we've outlined the basic principles of quantum mechanics, we will show how it is described mathematically. For a more complete discussion of the mathematics, see reference [1] at the bottom of the page.

Mathematical Description of Quantum Mechanics

Before discussing the mathematics of quantum mechanics, let us first think about how the classical world is described mathematically. Usually, the approach is to write down an equation which, when solved, yields a mathematical expression or a set of expressions which contain all the information about a physical system. Say we have a pendulum and we want to know its velocity, position and acceleration at any point in time. With some assumptions about the forces involved, we can write down an equation for the system, known as the equation of motion, which relates the forces to the acceleration of the pendulum—note that this equation does not tell us what the acceleration actually is, it just tells us how the acceleration is related to the forces that act on the pendulum. Solving the equation of motion yields a mathematical expression which tells us the position of the pendulum at any given time.

Now that we've discussed the mathematical description of a pendulum, let us take the leap to a quantum system. Here it is not so simple as finding an equation relating acceleration to force. However, we do need to start with an equation that can be solved to obtain information about the quantum system. Fortunately this equation does exist and is known as the Schrödinger equation.

• The Schrödinger equation relates the Hamiltonian, which is a mathematical expression of all the interactions in the system, to the total energy of the quantum state. What the interactions are depends on the constituents of the quantum system. For example, if you had a quantum system made up of two electrons, then there would have to be a mathematical expression in the Hamiltonian of the fact that the electrons repel each other (because they have the same charge).

• Solving the Schrödinger equation yields the wavefunction of the quantum system, which is central to the mathematical description of quantum mechanics. It is a mathematical representation of a quantum state that contains all the information we could possibly know about said quantum state. We apply mathematical operators to the wavefunction to extract the information contained within it. It's a bit like the mathematical expression we get when we solve the equation of motion of a pendulum.

This is the Schrödinger equation:

Ĥψ = Eψ

The Ĥ represents the Hamiltonian, and the 'hat' on it lets us know that it is operating on the wavefunction, ψ. E is the energy associated with the quantum state which the wavefunction represents.

We have barely scratched the surface of the theory of quantum mechanics. There is much more to explore, such as the connection of the wavefunction to actual waves, the famous Heisenberg uncertainty principle, quantum entanglement and what we mean by operators.

1. Leonard Susskind, The Theoretical Minimum Lectures

Describing Many Particles

Physics of many particles really is different from the physics of one. New and strange behaviour occurs when many particles come together… This really isn't too different from people: one person can behave in a certain way when you're alone with them, but completely differently in a group of people. For example you may have a friend who behaves in a different way with a group of friends from work or school than they do when they're alone with you.
The interactions in the group cause new behaviour to emerge… As far as we know, all the matter in the universe that we have any detailed knowledge of, including our minds and bodies, is controlled by the same set of fundamental laws. However, in the words of physicist Philip Warren Anderson, "The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe." It might seem like this is exactly what physicists are trying to do, but starting from the most fundamental laws is not always useful. The physics of many (many means approximately 10^23) atoms is different to the physics of a single atom, which is what Anderson argues in his paper 'More is Different' [1]. It's not just that there are more atoms, but there are phenomena we observe when many atoms come together, for example to form a crystal, that we could not possibly observe with a single atom.

Think about the air around you, which is made up of molecules, which are made of atoms, and we can go all the way down to electrons and quarks, which as far as we know are fundamental particles. The physics of quarks and electrons is very well understood, so we could try to write down all the relevant equations for all the air molecules in a room. One might think that, in principle, we could calculate the motion of every single air molecule. There are roughly 10^23 air molecules in a room, and this is not even the biggest number that will be dealt with in the computation of such a problem. Because we would have to consider the interaction of every molecule with every other molecule, whatever computer—and it would have to be a computer at this point—we use would have to deal with a number larger than 10^(10^24), which vastly exceeds the number of particles in the observable universe [2]. This is obviously a physical limitation.

The behaviour of a large number of fundamental particles with complex interactions between them cannot simply be understood by building up from the fundamental laws that govern each particle. New properties often emerge, and these emergent phenomena can be found everywhere in condensed matter physics. For example, thermal properties of solids are understood in terms of 'fictitious' particles called phonons. Phonons are particles that emerge from the vibrations of the crystal lattice (which makes up a solid). We really can 'see' these phonons when we probe materials, meaning we can bounce neutrons off them and measure their energy-momentum relationship, just like any other particle. Phonons and other emergent particles (often known as quasiparticles), however, will never be found at the Large Hadron Collider, which looks for fundamental particles. They only arise as a consequence of the complex interaction of many particles.

John Conway's Game of Life is a really fun, visual way to see complex behaviour emerging from simple rules. Although the fundamental laws of physics aren't simple in the same way that the rules of the Game of Life are, we can still draw parallels between what happens in the game and what happens in our universe. The game is simple: a square grid of cells is defined and each cell can either be alive or dead (usually coloured black or white respectively). The game begins with an initial configuration of cells, and a new configuration is obtained at each time-step based only on the previous configuration.
The rules are:

• Any live cell with fewer than two live neighbours dies
• Any live cell with two or three live neighbours lives on to the next time-step
• Any live cell with more than three live neighbours dies
• Any dead cell with exactly three live neighbours becomes a live cell

There is nothing more than these simple rules, yet we see rich behaviour emerging, such as the 'queen bee shuttle.' Of course the Game of Life is different in that we can actually compute all the possibilities from the fundamental rules, but it allows us to see emergent behaviour that doesn't need to be described in terms of the fundamental rules.

[An interactive Game of Life board appeared here: cells on a periodic square grid (cells along the edges treat cells on the direct opposite edge as neighbours) can be toggled alive or dead and then evolved step by step under the rules above.]

As well as new properties emerging, systems of many particles often do not possess the same symmetries as their constituents—they are different in that they break the symmetry which their constituents hold. We say that a system has a symmetry if the system is unchanged before and after a transformation associated with that symmetry. For example, if a system has rotational symmetry, rotating the system by some angle does not change it: if no one told us whether a rotation had been applied or not, we would not be able to tell. There are more abstract symmetries, such as time reversal symmetry or parity, but the general idea of leaving the system unchanged is still there. Many materials have been found not to possess all of these symmetries which the fundamental particles that make up the material do possess. See our article on topological classification to find out how physicists make use of this idea.

1. Philip Warren Anderson, More is Different, Science
2. Stephen Blundell, Emergence, causation and storytelling: condensed matter physics and the limitations of the human mind, arXiv

Spin and the Pauli Exclusion Principle

Basically, spin is a property that allows charged particles, such as electrons, to interact with magnetic fields. When we measure the spin of electrons, there are two possibilities, which we call up and down. In a magnetic field, we can think of the electrons as being a little bit like tiny bar magnets, because they can either be aligned or anti-aligned with a magnetic field. Electrons obey the Pauli exclusion principle, which means that two electrons absolutely cannot be in the same state when they are in the same quantum system—this is largely responsible for the chemical properties of all the elements in the periodic table.

This article will be a discussion of some of the key properties of elementary particles important to condensed matter physics. First of all, what do we mean by elementary particles? These are particles that, as far as we know, have no sub-structure, so the electron is an elementary particle but an atom is not (because it is made up of protons, electrons and neutrons). In fact, protons and neutrons are not elementary particles themselves, because they are made up of elementary particles known as quarks [1]. All elementary particles can be classified based on properties called mass, charge and spin.
The property of mass should already be familiar from day-to-day life—objects with more mass feel heavier when you pick them up. The charge of a particle controls how strongly it interacts with electrical fields. Here we will discuss spin.

The spin of a particle is defined as the intrinsic angular momentum of the particle. If this doesn't mean anything to you, that's okay. What we will be most interested in is how the electron's spin affects its behaviour. For charged particles, like the electron, spin allows them to interact with magnetic fields through the magnetic moment associated with spin. A magnetic moment is a quantity that represents the magnetic strength and orientation of an object that produces a magnetic field. A familiar example of this would be a bar magnet—we can directly visualise its magnetic field with iron filings. Both elementary and composite particles can have spin. The particle does not actually spin, and in fact the notion of a solid particle is a bit fuzzy, but its behaviour in a magnetic field is similar to that of a spinning charged classical object, which is why we use this term.

When we measure the spin of a particle, there are two things we can measure: the magnitude of the spin and the projection of its spin in a chosen direction. To talk about the projection of spin, the following classical picture might be useful (but ultimately spin is like nothing in the classical world). The figure below shows a classical object rotating—a spinning top. Here, we can measure the projection of its axis of rotation on to a chosen direction, which is labelled z. The angle between the z direction and the axis of rotation indicates how much we would have to tilt the spinning top for its axis of rotation to be the same as the z direction. This is roughly what the term projection means, but there are some more details involved in the proper definition.

To measure the projection of a particle's spin, we choose a direction in space, which will usually be the direction of an applied magnetic field, and we obtain a number through experiment which is interpreted as the projection of the particle's spin in that direction—this is a little bit like the angle in the classical picture. The number that we obtain is quantised, which means that when we measure it we will always find it to be a certain discrete number, a very different situation to what happens in the classical case, where the angle can vary continuously. For a spin-half particle, for example, we will only ever measure values +½ or -½ for the projection of its spin. The +½ value corresponds to the spin 'pointing' in our chosen direction and the -½ value corresponds to the spin 'pointing' exactly opposite to our chosen direction. We call these two situations spin-up and spin-down respectively.

The magnitude of spin is a bit like the magnitude of the angular momentum of a spinning object. Even though the rigorous definition of spin is that it is the intrinsic angular momentum of a particle, there is no physical rotation of the particle. The numbers we get when we measure the magnitude and projection of spin are known as quantum numbers. Elementary particles can be split into two classes based on the quantum number giving the magnitude of the spin, which can be only an integer (0, 1, 2, 3, …) or a half-integer (1/2, 3/2, 5/2, 7/2, …). Particles with integer spin are known as bosons and particles with half-integer spin are known as fermions. Fermions obey something called the Pauli exclusion principle.
Spin analogy: The spinning top is a classical object and can be used to illustrate the idea of the projection of spin. The number we get when we measure the projection of spin is a bit like the angle between the z direction and the axis of rotation, which tells us how much we have to tilt the spinning top so that its axis matches the z direction.

Identical particles and the Pauli exclusion principle

The Pauli exclusion principle states that two or more identical fermions cannot occupy the same quantum state within a quantum system at the same time. Here we mean identical in the sense that there is no possible experiment that can distinguish between so-called identical particles. This sounds obvious, but think about collisions between balls on a pool table, and let's say all these balls have identical mass and shape, are made of the same material, and are even the same colour. You would probably call these balls identical, but they are not in the quantum mechanical sense, because there is an experiment that can distinguish between them. We can film the collisions on the pool table, and in principle we could label and keep track of every single ball, using video editing software maybe, which provides a way for us to distinguish between them. There is no such experiment we can perform for a quantum mechanical system. Quantum mechanical pool balls would be completely indistinguishable.

Most of the matter around us is governed by the Pauli exclusion principle because electrons are spin-half particles, and we usually describe the behaviour and properties of a material in terms of the behaviour of the electrons within it. An electron in an atom has a number of quantum states it can occupy. When there are multiple electrons in an atom, they must obey the Pauli exclusion principle, which means that each available quantum state can only have one electron at a time occupying it. It's a bit like how there is only ever one person at a time in each seat on a bus (usually). Unlike the bus, however, the electrons will obey a strict order when filling the quantum states in an atom. This leads to an elaborate structure of orbitals, shells and subshells, which determines the chemical properties of a given atom and the way that it interacts with other atoms and compounds. In fact, many of the phenomena that we observe in materials are consequences of the quantum mechanical behaviour of electrons.

1. CERN website, The Standard Model

Metals, Insulators, and Semiconductors

All materials can be broadly classified as metals, insulators or semiconductors. Metals are good at conducting electricity, insulators are bad at conducting electricity, and semiconductors can sometimes conduct electricity well and sometimes not, depending on what we need them to do. The understanding of semiconductors is basically responsible for the digital age, and you wouldn't be on this website without them. The fact that metals, insulators and semiconductors have different properties comes down to the way electrons organise themselves when many atoms come together to form a material.

Materials are broadly split into three groups according to their electrical conductivity: metals, insulators, and semiconductors. Electrical conductivity is a measure of how easy it is for a material to conduct electricity: metals, such as copper, have a high conductivity, and insulators, for example rubber, have a low conductivity. The electrical conductivity of semiconductors lies somewhere in-between.
This classification is based on how electrons within atoms organise themselves when many atoms come together to form a solid. There are a number of quantum states available to all the electrons within an atom. Each quantum state can only be occupied by one electron at a time (due to the Pauli exclusion principle). For an isolated atom, each quantum state is specified by four numbers. The first is the energy an electron has while it occupies that quantum state—the other three are not so important for now. One of the strange predictions of quantum mechanics, which we know to be true, is that this energy number can only take certain values. It's a bit like having a car that can only travel at speeds that are multiples of 10, say, with no other speeds accessible. It would be perfectly possible for this car to travel at 30 mph, but 34 mph or 45 mph, for example, would be absolutely impossible. This really is what happens in the quantum world—there are certain energies which are forbidden for all the electrons within an atom.

Without considering the detail of the other three numbers that specify a quantum state, the electrons in an isolated atom are organised in discrete energy levels. There are a number of electrons at each value of energy (the other three numbers make sure that electrons with the same energy are indeed in different quantum states), with no electrons having energies that lie between the allowed values. When a large number of atoms come together to form a material, they interact in a way that results in our material only having two energy levels that its electrons can occupy—sort of. We actually call each of these 'energy levels' a band, because within each band there is actually a range of energies available to the electrons, not just a single energy, but the energies that lie between the bands are forbidden. The lower energy band is known as the valence band and the higher energy band is known as the conduction band.

Energy levels vs bands: The picture on the left is a visual representation of the energies an electron can have within an atom. Each line corresponds to a single value of energy and the gaps are the forbidden energies. The picture on the right is what we get when we do the same thing for the energies an electron can have in a material made up of many atoms. Again, the gap represents the forbidden energies and the blue and yellow strips represent the bands. These are thicker because within each band the electrons can have a range of energies.

In order for a material to conduct electricity, the electrons within that material have to be free to move around. It turns out that this requires empty quantum states that the electrons can easily access (there may be empty quantum states that the electrons can't access easily). In a conducting material there is actually no gap of forbidden energies, so there is effectively only one band, which isn't completely filled up with electrons—which means that the electrons are free to move around. Electrons being able to move around is exactly what is meant by saying a material conducts electricity. In an insulating material, the bottom band (the valence band) has all its quantum states filled up with electrons, and the gap of forbidden energies is too big for the electrons to easily access the empty quantum states in the upper band (the conduction band). Semiconductors have the same structure as insulators, but have a much smaller gap.
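Just how much the size of that gap matters can be seen from a crude estimate: the chance of an electron being thermally kicked across a gap of size E_g scales roughly like the Boltzmann-type factor exp(−E_g/2k_BT). A minimal sketch, using approximate textbook gap values (real carrier densities involve considerably more detail):

```python
import math

k_B = 8.617e-5  # Boltzmann constant in eV/K
T = 300.0       # room temperature in kelvin

# Approximate textbook band gaps, in eV.
gaps = {"silicon (semiconductor)": 1.1, "diamond (insulator)": 5.5}

for material, E_g in gaps.items():
    # Rough relative likelihood of thermal excitation across the gap.
    factor = math.exp(-E_g / (2 * k_B * T))
    print(f"{material:25s} E_g = {E_g} eV -> factor ~ {factor:.0e}")
```

The factor for silicon comes out around 10^-9, while for diamond it is around 10^-46, which is why a modest gap still allows useful conduction at room temperature while a large one effectively forbids it.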
The empty quantum states in the conduction band can be made available to the electrons in the filled valence band by adding impurities to the semiconductor or by increasing the temperature. The understanding of semiconductors through quantum condensed matter physics has led to perhaps the greatest technological advance of the modern era: the development of semiconductor devices. One device in particular, the transistor, forms the basis of every modern electronic circuit, and every phone, tablet and computer literally contains billions of transistors. The invention of the transistor is credited to a team from Bell Labs in 1947: John Bardeen, Walter Brattain, and William Shockley [1]. The three were awarded the Nobel Prize in Physics in 1956 for their research on semiconductors and their discovery of the transistor. We may not be too far from another technological advance, as recent work connecting quantum condensed matter theory with topology may help overcome one of the biggest challenges in the development of quantum computers.

1. Steven H. Simon, The Oxford Solid State Basics

Superconductors and Superfluids

Superconductors really are super: when they are cooled down enough, they lose all their electrical resistance and can conduct electricity with zero resistance. The critical temperature at which this occurs changes depending on the material, but the highest critical temperature we know of so far is still well below freezing. A theory developed by J. Bardeen, L. Cooper, and J. R. Schrieffer describes superconductivity in terms of the electrons in a material. The super-simple version is this: electrons pair up, which allows them to move through the material with no resistance. Superfluids, similarly, flow with no viscosity at extremely low temperatures. Superconductors are some of the best understood quantum materials, and are starting to find applications in technology.

At room temperature, all known materials have some electrical resistance. This ranges from metals with a small resistance, which we call conductors (copper, gold, aluminium and iron are some of the best, which is why they're used in electronics), through to materials with a very high resistance, which we call insulators (plastics, for example, are used to insulate electrical wires for our safety). Materials which are superconductors also have some electrical resistance at room temperature. But when superconductors are cooled down to very low temperatures, something remarkable happens—they completely lose all electrical resistance (i.e. their resistance drops to zero). This phenomenon is an example of a phase transition. Elsewhere on the website we discuss how physicists describe the normal (non-superconducting) and superconducting phases of these materials. But there exists another theory, developed by Bardeen, Cooper, and Schrieffer (BCS), that describes superconductivity in terms of what happens to the particles that carry electricity through the material (electrons).

What's important to understand is that in conductors, electrons carry electricity by moving around, and as they do so they interact with the lattice (a repeating, 3D grid) made up of the atoms of the material. This means that the energies of the electrons are affected by the nature of the lattice (this is the basis of how we understand metals and insulators); but it also means that the electrons can affect how the atoms of the lattice vibrate. In fact, at normal temperatures it is these lattice vibrations which scatter electrons and give the material its resistance.
But at low temperatures the lattice vibrations have a very different effect. The vibrations and the electrons interact in such a way that the electrons pair up and move through the material without any resistance. These pairs are called Cooper pairs, and they are responsible for superconductivity. If the temperature of a superconductor is raised, the electrons have too much energy to stay paired up and break apart instead, returning the material to its normal state with electrical resistance. The BCS theory, proposed in 1957, won its creators the 1972 Nobel Prize.

• You'll notice we haven't really explained how Cooper pair formation gives the superconductor zero resistance. The reason is that the quantum mechanics required to properly explain it is far beyond the scope of this website—and, in fact, most undergraduate physics courses too. Sometimes so-called classical analogies are given to try to explain this phenomenon, but we think it's better to use it as an example of just how weird quantum mechanics can be.

One familiar application of superconductors today is in MRI scanners. The principle of their operation is that in a very large magnetic field, hydrogen atoms in water (the H in H2O) will resonate in a way that produces detectable radiation. This allows the human body to be imaged. The MRI 'tube' contains a large electromagnet, a coil of wire which produces a magnetic field inside when an electrical current is passed through it. The higher the current, the larger the field. If the coil was made from a normal metal—even a good conductor like copper—only so much current could be passed through it before it would become too hot (this is called resistive heating, which is how kettles work, and is why your laptop warms up as you use it) and the coil would melt. Superconductors, having no resistance, avoid this problem, meaning very high currents can be passed through the coil. This generates magnetic fields 50,000 times stronger than the Earth's natural magnetic field in the case of MRI scanners. And the only price to pay is that the superconductor has to be kept very cold, at 4 K (−269 °C).

An MRI scanner in a hospital.

Superfluids are materials closely related to superconductors. Superconductivity is the flow of charged particles (electrons) through a material without resistance; superfluidity is the flow of particles without viscosity. The technical meaning of viscosity is the same as its day-to-day usage, except it extends to gases as well as liquids—treacle is very viscous, whereas air isn't (when physicists talk about fluids we mean both gases and liquids). A superfluid has no viscosity at all. As in superconductors, this only occurs in a very low temperature phase. Here we will discuss the two best known examples of superfluids: helium-4 and helium-3 (helium is the only element which doesn't freeze into a solid at ordinary pressure, no matter how cold it gets).

Although helium-4 and helium-3 are both superfluids, the physics behind their behaviour is different. The reason the two superfluids have such different physics is the difference between the two most fundamental categories of particles, bosons and fermions. Bosons have integer spin, whereas fermions have half-integer spin. Protons and neutrons, which make up the nuclei of atoms (see diagram), have spin ½. The nucleus of helium-4 has two of each, adding up to integer spin overall—it's a boson. The nucleus of helium-3, on the other hand, has two protons but only one neutron, so overall it has half-integer spin, making it a fermion instead.
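A counting rule is implicit here (standard spin addition, quoted without proof): a composite particle containing an even number of fermions has integer total spin and is a boson, while an odd number gives half-integer total spin and a fermion. Counting whole atoms rather than bare nuclei gives the same verdict:

helium-4 atom: 2 protons + 2 neutrons + 2 electrons = 6 fermions → boson
helium-3 atom: 2 protons + 1 neutron + 2 electrons = 5 fermions → fermion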
Both atoms are helium because they have the same number of protons, two each, and thus are chemically the same (they are called different isotopes of helium). If you've already read our article about spin and the Pauli exclusion principle, you'll know that in a system consisting of fermions, we can only ever place one fermion at a time into each quantum state of the system. In practice, this means that at low temperatures we will find one fermion in the lowest energy state of the system, one in the next lowest energy state, and so on. But bosons do not have to obey this principle, so at low temperatures every boson in a system can be packed into the lowest energy state. This means that as a gas of bosons is cooled down, at some low temperature we will find a significant fraction of the particles occupying the zero energy (i.e. not moving) state. This is yet another example of a phase transition. The phase in which this happens is called a Bose-Einstein condensate, named after Albert Einstein and Satyendra Nath Bose, who first theorised its existence in 1925.

Helium-4 atoms are bosons, so clearly liquid helium-4 should become a Bose-Einstein condensate at low enough temperatures. But Bose and Einstein didn't account for interactions between the atoms, and once we account for these we find that helium-4 will also superflow (flow without viscosity, i.e. become a superfluid). This is what happens to helium-4 when it's cooled to around 2 K (−271 °C), as discovered in 1938.

The existence of the helium-3 superfluid phase was theorised in the 1960s, before it was discovered in 1972. The superfluid behaviour of helium-3 can't be explained in the same way as that of helium-4, because helium-3 is a fermion, not a boson. If you've read the section above, however, then you already know why helium-3 becomes a superfluid—electrons, which are also fermions, can form Cooper pairs at low temperature and move without electrical resistance. Helium-3 atoms can also form Cooper pairs, which allows them to flow without viscosity. This only happens below an extremely low temperature of 2.7 mK, or 0.0027 K.

Helium isotopes: Illustration of atoms of each of the two helium isotopes. Protons are red, neutrons are blue, and electrons are yellow. The nucleus of each atom contains the same number of protons, which means two electrons bind to either nucleus. This means they are chemically the same. However, as each nucleus has a different number of neutrons, each isotope has a different mass and a different spin.
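For the idealised case of non-interacting bosons, the condensation temperature can even be written in closed form (a textbook result; n is the number density, m the particle mass, and \zeta the Riemann zeta function):

T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}

Plugging in the density of liquid helium-4 gives a T_c of roughly 3 K, remarkably close to the observed 2.17 K transition, even though the formula ignores the strong interactions between the atoms.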
Riding Waves in Neuromorphic Computing

Marios Mattheakis • John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA

Physics 13, 132

An artificial neural network incorporating nonlinear waves could help reduce energy consumption within a bioinspired (neuromorphic) computing device.

Figure 1: A potential design for an artificial neural network utilizing nonlinear waves. In the encoding layer (left), data inputs are written into the amplitude and phase of a set of waves. The waves follow nonlinear evolution in the wave reservoir layer (middle). The output is received at the readout layer (right) by combining the results from wave measurements. The network utilizes a reservoir computing architecture, which means the weights in the input layer (green) are fixed, whereas those in the output layer (orange) are tuned during optimization.

Machine learning has emerged as an exceptional computational tool with applications in science, engineering, and beyond. Artificial neural networks in particular are adept at learning from data to perform regression, classification, prediction, and generation. However, optimizing a neural network is an energy-consuming process that requires a lot of computational resources. One of the ways to improve the efficiency of neural networks is to mimic biological nervous systems that utilize spiking potentials and waves—as opposed to digital bits—to process information. This so-called neuromorphic computing is currently being developed for intelligent and energy-efficient devices, such as autonomous robots and self-driving cars. In a new development, Giulia Marcucci and colleagues from the Sapienza University of Rome have analyzed the potential of using nonlinear waves—such as rogue waves and solitons—for neuromorphic computing [1]. These waves interact with each other in complex ways, making them inherently suited for designing neural network architectures. The researchers consider ways to encode information on the waves, providing recipes that could guide the development of machine-learning devices that can take advantage of wave dynamics.

A necessary neural network ingredient is complexity, which allows the system to approximate a given function or learn from a dataset. An example of this complexity is how a deep feed-forward network processes information. It consists of an input layer, hidden layers, and an output layer. From the input to output layers, information propagates through a complex system of hidden “neurons,” which are fully or partially connected to each other. The connections between neurons are characterized by parameters, or “weights.” The neurons perform linear and nonlinear transformations, yielding a nonlinear mapping between the input and output. Optimizing the network consists of feeding it a training dataset and tuning the weights to arrive at a desired relationship between inputs and outputs. The nonlinearity in the information processing is a necessary condition for the network to become a universal approximator, which means it can represent any function when given appropriate weights.
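As a toy illustration of that division of labor (a numpy sketch for this article, not the authors' model), here is a network whose hidden layer has fixed random weights and whose only trained parameters are a linear readout, the same split used in the reservoir-computing schemes discussed below:

```python
import numpy as np

rng = np.random.default_rng(0)

# Task: fit y = sin(2*pi*x) on [0, 1] from 100 samples.
x = np.linspace(0.0, 1.0, 100)[:, None]
y = np.sin(2 * np.pi * x)

# Hidden layer: a fixed random projection followed by a nonlinearity.
W_in = rng.normal(size=(1, 200))
b = rng.normal(size=200)
H = np.tanh(x @ W_in + b)  # nonlinear hidden activations, never trained

# Readout layer: the only trained weights, found by linear least squares.
W_out, *_ = np.linalg.lstsq(H, y, rcond=None)

print(f"max fit error: {np.max(np.abs(H @ W_out - y)):.4f}")
```

Replace the tanh with a linear function and the fit collapses to a straight line, a small-scale echo of the point above that nonlinearity is what buys universal approximation.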
An avenue for improving the efficiency of neural networks is to adopt neuromorphic techniques, which can provide extremely fast real-time computing at a low energy cost. Recent work has demonstrated neuromorphic computing in deep (multilayered) networks with representative platforms that include coherent nanophotonic circuits [2], spiking neurons [3], and waves [4]. But an open problem in neuromorphic computing of deep networks is the optimization of the parameters. Deep networks require many connections to be sufficiently complex; thus the training usually requires a lot of computational power. To reduce these computing demands, researchers have suggested alternative architectures, such as reservoir computing (RC) [5]. The input and hidden nodes in an RC network are randomly initialized and fixed, while only the weights of the output layer need to be trained. This approach makes the optimization more efficient than in regular architectures, yet the network retains the complexity required for learning. Proposed designs for RC neuromorphic devices involve optoelectronic technology [6], photonic cavities [7], and photonic integrated circuits [8].

Marcucci and colleagues have now added to this list of potential RC neuromorphic devices with their proposed architecture based on wave dynamics [1]. They show the possibility of building a device that is able to learn by harvesting nonlinear waves. Nonlinear waves, such as solitons, breathers, and rogue waves, show divergent behavior and provide sufficient complexity to develop a learning method. The proposed architecture, called a single wave-layer feed-forward network (SWFN), goes beyond standard neuromorphic RC because the reservoir comprises nonlinear waves rather than randomly connected hidden nodes. In other words, the coupled artificial hidden neurons have been replaced by waves that interact naturally through interference. The SWFN architecture consists of three layers (Fig. 1): the encoding layer, where an input vector is written into the initial amplitude or phase of a set of representative waves; the wave reservoir layer, where the initial state evolves following a nonlinear wave equation; and the readout layer, where the output is recovered from the final state of the waves. As this network is an RC one, only the weights in the readout layer need to be trained.

Although wave dynamics have been used in neuromorphic computing [4, 7], a general theory that links nonlinear waves with machine learning was missing. Marcucci and co-workers introduced a general and rigorous formulation, bridging the gap between the two concepts. For their model system, the researchers encoded the input vector in the initial state of a set of plane waves and represented the wave evolution in the reservoir layer with the nonlinear Schrödinger equation—but any nonlinear wave differential equation would have worked. In fact, any system that is characterized by nonlinear wave dynamics can be used to build a neuromorphic nonlinear wave device. A simple example would be a wave tank with several wave generators on one end and wave detectors on the other. From their general analysis, the researchers showed that two conditions must be fulfilled for a transition to the learning regime. First, the wave evolution must be nonlinear, as linear evolution would prevent the SWFN from being a universal approximator. The second condition connects the number of output channels with the size of the training data.
Specifically, the number of output nodes has to be the same as the number of training data points per input node for the SWFN to approximate a function or to learn a finite dataset. Marcucci and colleagues present three different encoding methods through three representative examples. First, the SWFN is used to approximate a one-dimensional function that maps a binary string to the initial phase of a set of waves. In the second example, the neuromorphic device is asked to learn an eight-dimensional dataset that is encoded in the initial amplitudes of the waves. In the last example, the researchers show that the proposed neuromorphic architecture can be used as Boolean logic gates that operate on two binary inputs. In each case, the SWFN performs as well as conventional neural networks, verifying that the SWFN is indeed a universal approximator, able to approximate arbitrary functions and learn high-dimensional datasets.

Neural network technology is a rapidly growing scientific field, and neuromorphic computing can offer an energy-efficient way to meet the technology's computing demands. Marcucci and colleagues have provided a recipe for a neuromorphic neural network using nonlinear wave dynamics. This groundwork opens the door to a wide range of nonlinear-wave phenomena in electronics, polaritonics, photonics, plasmonics, spintronics, hydrodynamics, Bose-Einstein condensates, and more. Among these wave-based technologies, photonics seems very promising, as photonic materials absorb little energy and can be fashioned into circuit elements at micro- or nanoscales. The computing in a photonic neural network is as fast as the speed of light, and different signals can be encoded in different frequencies, allowing multiple computations to be performed simultaneously. With such potential, it's easy to imagine neuromorphic devices riding these waves to technological and engineering achievements in the near future.

1. G. Marcucci et al., “Theory of neuromorphic computing by waves: Machine learning by rogue waves, dispersive shocks, and solitons,” Phys. Rev. Lett. 125, 093901 (2020).
2. Y. Shen et al., “Deep learning with coherent nanophotonic circuits,” Nat. Photon. 11, 441 (2017).
3. S. K. Esser et al., “Convolutional networks for fast, energy-efficient neuromorphic computing,” Proc. Natl. Acad. Sci. U.S.A. 113, 11441 (2016).
4. T. W. Hughes et al., “Wave physics as an analog recurrent neural network,” Sci. Adv. 5, 6946 (2019).
5. H. Jaeger and H. Haas, “Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication,” Science 304, 78 (2004).
6. L. Larger et al., “High-speed photonic reservoir computing using a time-delay-based architecture: Million words per second classification,” Phys. Rev. X 7, 011015 (2017).
7. S. Sunada and A. Uchida, “Photonic reservoir computing based on nonlinear wave dynamics at microscale,” Sci. Rep. 9, 19078 (2019).
8. A. Katumba et al., “Low-loss photonic reservoir computing with multimode photonic integrated circuits,” Sci. Rep. 8, 2653 (2018).

About the Author

Marios Mattheakis received his Ph.D. in computational physics from the University of Crete, Greece, in 2014. He then completed a postdoctoral appointment at Harvard University. Currently, he is a research associate in the Institute for Applied Computational Science at the Harvard John A. Paulson School of Engineering and Applied Sciences.
His primary area of research is the design of artificial neural networks and neuromorphic architectures with implementations in physics, engineering, and beyond. Moreover, he is interested in electronic and optical properties of two-dimensional materials, wave dynamics in random systems, and metamaterials. In addition to research, he teaches advanced classes in data science.
Matter and Energy: A False Dichotomy

Matt Strassler [April 12, 2012]

It is common that, when reading about the universe or about particle physics, one will come across a phrase that somehow refers to “matter and energy”, as though they are opposites, or partners, or two sides of a coin, or the two classes out of which everything is made. This comes up in many contexts. Sometimes one sees poetic language describing the Big Bang as the creation of all the “matter and energy” in the universe. One reads of “matter and anti-matter annihilating into `pure' energy.” And of course two of the great mysteries of astronomy are “dark matter” and “dark energy”.

As a scientist and science writer, this phraseology makes me cringe a bit, not because it is deeply wrong, but because such loose talk is misleading to non-scientists. It doesn't matter much for physicists; these poetic phrases are just referring to something sharply defined in the math or in experiments, and the ambiguous wording is shorthand for longer, unambiguous phrases. But it's dreadfully confusing for the non-expert, because in each of these contexts a different definition for `matter' is being used, and a different meaning of `energy' — in some cases an archaic or even incorrect one — is employed. And each of these ways of speaking implies that either things are matter or they are energy — which is false. In reality, matter and energy don't even belong to the same categories; it is like referring to apples and orangutans, or to heaven and earthworms, or to birds and beach balls. On this website I try to be more precise, in order to help the reader avoid the confusions that arise from this way of speaking. Admittedly I'm only partly successful, as I'll mention below.

Summing Up

This article is long, but I hope it is illuminating and informative for those of you who want details. Let me give you a summary of the lessons it contains:

• Matter and Energy really aren't in the same class and shouldn't be paired in one's mind.
• Matter, in fact, is an ambiguous term; there are several different definitions used in both the scientific literature and in public discourse. Each definition selects a certain subset of the particles of nature, for different reasons. Consumer beware! Matter is always some kind of stuff, but which stuff depends on context.
• Energy is not ambiguous (not within physics, anyway). But energy is not itself stuff; it is something that all stuff has.
• The term Dark Energy confuses the issue, since it isn't (just) energy after all. It also really isn't stuff; certain kinds of stuff can be responsible for its presence, though we don't know the details.
• Photons should not be called `energy', or `pure energy', or anything similar. All particles are ripples in fields and have energy; photons are not special in this regard. Photons are stuff; energy is not.
• The stuff of the universe is all made from fields (the basic ingredients of the universe) and their particles. At least this is the post-1973 viewpoint.

What's the Matter (and the Energy)?

First, let's define (or fail to define) our terms. The word Matter: “Matter” as a term is terribly ambiguous; there isn't a universal definition that is context-independent.
There are at least three possible definitions that are used in various places:

• “Matter” can refer to atoms, the basic building blocks of what we think of as “material”: tables, air, rocks, skin, orange juice — and by extension, to the particles out of which atoms are made, including electrons and the protons and neutrons that make up the nucleus of an atom.
• OR it can refer to what are sometimes called the elementary “matter particles” of nature: electrons, muons, taus, the three types of neutrinos, the six types of quarks — all of the types of particles which are not the force particles (the photon, gluons, graviton and the W and Z particles). Read here about the known apparently-elementary particles of nature. [The Higgs particle, by the way, doesn't neatly fit into the classification of particles as matter particles and force particles, which was somewhat artificial to start with; I have a whole section about this classification below.]
• OR it can refer to classes of particles that are found out there, in the wider universe, and that on average move much more slowly than the speed of light.

With any of these definitions, electrons are matter (although with the third definition they were not matter very early in the universe's history, when it was much hotter than it is today). With the second definition, muons are matter too, and so are neutrinos, even though they aren't constituents of ordinary material. With the third definition, some neutrinos may or may not be matter, and dark matter is definitely matter, even if it turns out to be made from a new type of force particle. I'm really sorry this is so confusing, but you've no choice but to be aware of these different usages if you want to know what “matter” means in different people's books and articles.

Now, what about the word Energy? Fortunately, energy (as physicists use it) is a well-defined concept that everyone in physics agrees on. Unfortunately, the word in English has so many meanings that it is very easy to become confused about what physicists mean by it. I've briefly described the various forms of energy that arise in physics in more detail in an article on mass and energy. But for the moment, suffice it to say that energy is not itself an object. An atom is an object; energy is not. Energy is something which objects can have, and groups of objects can have — a property of objects that characterizes their behavior and their relationships to one another. [Though it should be noted that different observers will assign different amounts of energy to a given object — a tricky point that is illustrated carefully in the above-mentioned article on mass and energy.] And for this article, all we really need to know is that particles moving on their own through space can have two types of energy: mass-energy (i.e., the E = mc^2 type of energy, which does not depend on whether and how a particle moves) and motion-energy (energy that is zero if a particle is stationary and becomes larger as a particle moves faster); the two are combined in a standard formula quoted just below.

Annihilation of Particles and Antiparticles Isn't Matter Turning Into Energy

Let's first examine the notion that “matter and anti-matter annihilate to pure energy.” This, simply put, isn't true, for several reasons. In the green paragraphs above, I gave you three different common definitions of “matter.” In the context of annihilation of particles and anti-particles, speakers may either be referring to the first definition or the second.
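(As an aside, for concreteness: the two types of energy for a freely moving particle combine in the standard special-relativity formula

E^2 = (mc^2)^2 + (pc)^2,

where p is the particle's momentum. For a particle at rest, p = 0 and the formula reduces to E = mc^2; for a massless particle such as a photon, m = 0 and the energy E = pc is entirely motion-energy. Keeping this in mind makes the bookkeeping below easier to follow.)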
Here I want to discuss the annihilation of electrons and anti-electrons (or “positrons”), or the annihilation of muons and anti-muons. I've described this in detail in an article on Particle/Anti-Particle Annihilation. You'll need it to understand what I say next, so I'm going to assume that you have read it. Once you've done that, you're ready to try to understand where the (false) notion that matter and antimatter annihilate into pure energy comes from.

What is meant by “pure energy”? This is almost always used in reference to photons, commonly in the context of an electron and a positron (or some other massive particle and anti-particle) annihilating to make two photons (recall the antiparticle of a photon is also a photon). But it's a terrible thing to do. Energy is something that photons have; it is not what photons are. [I have height and weight; that does not mean I am height and weight.] The term “pure energy” is a mix of poetry, shorthand and garbage. Since photons have no mass, they have no mass-energy, and that means their energy is “purely motion-energy”. But that does not mean the same thing, either in physics or intuitively to the non-expert, as saying photons are “pure energy”. Photons are particles just as electrons are particles; they both are ripples in a corresponding field, and they both have energy. The electron and positron that annihilated had energy too — the same amount of energy as the photons to which they annihilate, in fact, since energy is conserved (i.e. the total amount does not change during the annihilation process). (See Figure 3 of the particle/anti-particle annihilation article.)

Moreover (see Figures 1 and 2 of the particle/anti-particle annihilation article), the process muon + anti-muon → two photons is on exactly the same footing and occurs with almost exactly the same probability as the process muon + anti-muon → electron + positron — which is matter and anti-matter annihilating into another type of matter and anti-matter. So no matter how you want to express this, it is certainly not true that matter and anti-matter always annihilate into anything you might even loosely call `energy'; there are other possibilities. For these reasons I don't use the “matter and energy” language on this website when speaking about annihilation. I just call this type of process what it is:

• particle 1 + anti-particle 1 → particle 2 + anti-particle 2

With this plain-spoken terminology it is clear why a muon and anti-muon annihilating to two photons, or to an electron and a positron, or to a neutrino and an anti-neutrino, are all on the same footing. They are all the same class of process. And we need not make distinctions that don't really exist and that obscure the universality of particle/anti-particle annihilation.

Not Everything is Matter or Energy, By a Long Shot

Why do people sometimes talk about “matter and energy” as though everything is either matter or energy? I don't know the context in which this expression was invented. Maybe one of my readers knows? Language reflects history, and often reacts slowly to new information. Part of the problem is that enormous changes in physicists' conception of the world and its ingredients occurred between 1900 and 1980. This has mostly stopped for now; it's been remarkably stable throughout my career. [String theorists might argue with what I've just said, pointing out that their great breakthroughs occurred during the 1980s and 1990s.
That's true, but since string theory hasn't yet established itself as reality through experimental verification, one cannot say that it has yet been incorporated into our conception of the world.]

Our current conception of the physical world is shaped by a wide variety of experiments and discoveries that occurred during the 1950s, 1960s and 1970s. But previous ways of thinking and talking about particle physics partially stuck around even as late as the 1980s and 1990s, while I was being trained as a young scientist. This isn't surprising; it takes a while for people who grew up with an older vision to come around to a new prevailing point of view, and some never do. And it also takes a while for a newer vision to come into sharp focus, and for little niggling problems with it to be resolved.

Today, if one wants to talk about the world in the context of our modern viewpoint, one can speak first and foremost of the “fields and their particles.” It is the fields that are the basic ingredients of the world, in today's widely dominant paradigm. We view fields as more fundamental than particles because you can't have an elementary particle without a field, but you can have a field without any particles. [I still owe you a proper article about fields and particles; it's high on the list of needed contributions to this website.] However, it happens that every known field has a known particle, except possibly the Higgs field (whose particle is not yet certain to exist, though [as of the time of writing, spring 2012] there are significant experimental hints).

What do “fields and particles” have to do with “matter and energy”? Not much. Some fields and particles are what you would call “matter”, but which ones are matter, and which ones aren't, depends on which definition of “matter” you are using. Meanwhile, all fields and particles can have energy; but none of them are energy.

Matter Particles and Force Particles — Well…

On this website, I've divided the known particles up into “matter particles” and “force particles”. I wasn't entirely happy doing this, because it's a bit arbitrary. This division works for now; the force particles and their anti-particles are associated with the four forces of nature that we know so far, and the matter particles and their anti-particles are all of the others. And there are many situations in which this division is convenient. But at the Large Hadron Collider [LHC] we could easily discover particles that don't fit into this categorization; even the Higgs particle poses a bit of a problem, because it arguably is in neither class. There's an alternate (but very different) division that makes sense: what I called matter particles all happen to be fermions, and what I called force particles all happen to be bosons. But this could change too with new discoveries. What this really comes down to is that all the particles of nature are simply particles, some of which are each other's anti-particles, and there isn't a unique way to divide them up into classes. The reason I used “matter” and “force” is that this is a little less abstract-sounding than “fermions” and “bosons” — but I may come to regret my choice, because we might discover particles at the LHC, or elsewhere, that break this distinction down.

Matter and Energy in the Universe

Another place we encounter words of this type is in the history and properties of the cosmos as a whole. We read about matter, radiation, dark matter, and dark energy.
The use of the words by cosmologists is quite different from what you might expect — and it actually involves two or three different meanings, and depends strongly on context.

Matter vs. Anti-Matter: when you hear people talk this way, they're talking about the first definition within the green paragraphs above. They are typically referring to the imbalance of matter over anti-matter in our universe — the fact that the particles that make up ordinary material (electrons, protons and neutrons in particular) are much more abundant than any of their anti-particles.

Matter vs. Radiation: if you hear this distinction, you're dealing with the third definition of `matter'. The universe has a temperature; it was very hot early on and has been gradually cooling, now at 2.7 Celsius-degrees above absolute zero. If you have a gas (or plasma) of particles at a given temperature T, and you measure the energies of these particles, you will find that the average motion-energy per particle is given by kT, where k is Boltzmann's famous constant. Now matter, in this context, is any particle whose mass-energy mc^2 is large compared to this average motion-energy kT; such particles will have velocity much slower than the speed of light. And radiation is any particle whose mass-energy is small compared to kT, and is consequently moving close to the speed of light. Notice what this means. In this context, what is matter, and what is not, is temperature-dependent and therefore time-dependent! Early in the universe, when the temperature was trillions of degrees and even hotter, the electron was what cosmologists consider radiation. (For the electron, mc^2 is about 511,000 electron-volts, which equals kT at a temperature of roughly six billion degrees.) Today, with the universe much cooler, the electron is in the category of matter. In the present universe at least two of the three types of neutrinos are matter, and maybe all three, by this definition; but all the neutrinos were radiation early in the universe. Photons have always been and will always be radiation, since they are massless.

What is Dark Matter? We can tell from studying the motions of stars and other techniques that most of the mass of a galaxy comes from something that doesn't shine, and lots of hard work has been done to prove that known particles behaving in ordinary ways cannot be responsible. To explain this effect, various speculations have been proposed, and many have been shown (through observation of how galaxies look and behave, typically) to be wrong. Of the survivors, one of the leading contenders is that dark matter is made from heavy particles of an unknown type. But we don't know much more than that as yet. Experiments may soon bring us new insights, though this is not guaranteed. [Note also there may not be any meaning to dark anti-matter; the particles of dark matter, like photons and Z particles, may well be their own anti-particles.]

And Dark Energy? It was recently discovered that the universe is expanding faster and faster, not slower and slower as was the case when it was younger. What is presumably responsible is called “dark energy”, but unfortunately, it's actually not energy. As my colleague Sean Carroll is fond of saying, it is tension, not energy — a combination of pressure and energy density. So why do people call it “energy”? Part of it is public relations. Dark energy sounds cool; dark tension sounds weird, as does any other word you can think of that is vaguely appropriate. At some level this is harmless.
Scientists know exactly what is being referred to, so this terminology causes no problem on the technical side; most of the public doesn't care exactly what is being referred to, so arguably there's no big problem on the non-technical side. But if you really want to know what's going on, it's important to know that dark-energy isn't a dark form of energy, but something more subtle. Moreover, like energy, dark-energy isn't an object or set of objects, but a property that fields, or combinations of fields, or space-time itself can have. We don't yet know what is responsible for the dark-energy whose presence we infer from the accelerating universe. And it may be quite a while before we do.

By the way, do you know what an astronomer means by “metals”? It's not what you think… You might conclude from this article that modern physicists and their relatives have not been very inventive, creative, or careful with language. Apparently it's not our collective strong suit. Big Bang? Black Hole? The world's poets will never forgive us for choosing such dull names for such fantastic things….

294 thoughts on “Matter and Energy: A False Dichotomy”

1. Thanks again for an excellent review. Even though most of it is known in bits and pieces by us lay readers, this clearly and coherently explains the intricacies of the commonly misused terms. You write that all fields have corresponding particles and that only the Higgs field and its particle are in doubt, though the LHC has tantalizing clues of its presence. Wouldn't the gravitational field be something for which no known particle exists, even though one is hypothesized?

2. “Energy is something which objects can have”. I can't say I'm happy with this. The energy of an object, in classical or SR models, at least depends on the frame of reference. I don't see the Hamiltonian as the generator of time-like translations anywhere here, which at least deserves a mention if we move to QM models. Energy in GR is different again. If we move to signal processing, time-series analysis, we can define an energy of a signal mathematically, but that may not be reducible to phase space concepts (this is arguably engineering, but physicists do use such concepts in some measurement contexts). I don't see energy to be as settled in physics as you suggest.

• I think I understand your first point. Energy is a property of an object (or system) but the amount depends on the observer. I emphasized this in my mass and energy article, but not here; based on your comment I've added a remark to that effect in the text. Your second point about the Hamiltonian: this technical point is not appropriate for the main readers of this article. The fact that energy conservation is related to the time-independence of physical law has been covered on this site elsewhere; this particular article wasn't the place for it. The notions of Lagrangian and Hamiltonian are above the assumed knowledge of my readership. Energy in general relativity is an advanced subject, again beyond the scope of this article. Finally: engineering notions of energy in the context of signal processing are beyond the scope of this website. It never arises in any of my research or that of any of my immediate colleagues. I am always referring to that conserved quantity (even in general relativity) that follows from the (local, in GR) time-independence of physical laws.

• Just read your article. I think you are right about one thing: We still use language left over from another time.
I think you are guilty of this yourself when you use the word “particle.” Sorry, there is no such thing as a particle. Everything is energy. Energy exists as a wave. A wave has momentum. Either that momentum is dedicated to travelling through space at some fraction of the speed of light, or it is turned in on itself as a standing wave to stay in one place as “mass.” Even as mass, it is still a wave. That wave is still energy; it's just energy that stays in one place rather than travelling through space. When I put my hand on the tabletop, I'm not actually touching it. The electron waves of all the atoms on the table's surface repel the electron waves on the surface of my hand. There is no physical mass or particles touching, merely waves of negatively charged energy repelling other negatively charged waves of energy. There is no such thing as a “particle.” That description is outdated.

• Interesting way of explaining the outdated term “particle”… but isn't “mass” made of “particles”? How different is a particle from mass? Isn't a standing wave everywhere?

• No. The wave function is just a mathematical concept used to describe the momentum of a particle. It predicts the probability of where one particle may exist in a single frame of time. It is not the same thing to say that the particle IS the momentum OR the mass. It doesn't make any sense. This is exactly what the writer is referring to. The use of words such as mass and energy, which are fairly consistent in the macroscopic world of ours, fails to describe anything in the subatomic regions. None of this, the writer says, is outdated, but a lot of people who don't really understand the underlying mechanism try to hijack the bandwagon with vague mystical sayings such as 'Everything is Energy' and so forth. Energy describes how the particle functions; it is a property of a particle. It is not the same thing as the particle itself.

• Strassler made it pretty clear that fields, and not particles, are the fundamental ingredients of the world. He called particles “ripples” in these fields. You saying “everything is energy” leads me to believe you actually wanted to say “all things are fields”.

• Hello. I am not a physicist at all. I came upon this article in a search because I could not accept the concept that energy and matter are separate entities, and it's what I keep reading over and over again. That matter has mass but energy doesn't. Then, how can E=mc^2??? I realize it has been several years since this article was published online, but I'm still not satisfied with the explanation. How can matter and energy not “be” the same thing? One can't exist without the other. Where is the example of matter that has no “energy” associated with it? To me it's like saying that lung tissue is not “human” even though it contains all of the same DNA as every other cell in the body. The lung tissue is just expressing or taking on the function that it needs to do. Otherwise, how could stem cells be used for various purposes? I feel the same way about matter and energy. How can you accept E=mc^2 but not accept that matter and energy are just expressions of the same thing? Or like when you say that you “have” height and weight, but you are “not” height and weight. Then what exactly are you? You don't carry height and weight around as a separate entity in your hand that you can walk away from. It is you, just as the energy that animates our human body “is” each one of us.
I am in NO way a scientist, but these concepts have been driving me insane for years. I guess what I'm getting at, if you still access this site: Where is a book, an essay, a study I can turn to that provides some kind of proof? I would be interested to study these concepts further. Thank you.

• Even through Quantum Physics and the ToE we find that an 'unexplained' conscious 'force' is necessary for existence to occur; or something to that effect. The answers will only be found in the quantum realm and common sense. Every atom is built with frequency, energy and vibration, which is 'energy', regardless of how the textbooks warp the mind.

• Sarah, it has been a long time since you asked the questions here. I can answer one of them. You ask “How can matter and energy not 'be' the same thing? One can't exist without the other. Where is the example of matter that has no 'energy' associated with it?” This last question should be reversed. There is no matter that has no energy, but there is energy that is not associated with matter. Photons, for instance, have energy, but they are not matter by anyone's definition. The same is true for gravitational waves; they have energy, but I've never heard anyone suggest they should be called matter. Do not be confused by E=mc^2. First, “m” is mass, not matter, and not all matter has mass, either. Second, “E” does not refer to all possible forms of energy. E=mc^2 means only this: the rest mass m of a particle arises from the energy E stored inside it.

• Hi Matt, I am also bothered by the same question as Sarah. I understand your distinction of matter from energy and, as I'm sure you're familiar with, the commonly offered definition of energy is “the ability to do work”, so its treatment as a property is entirely sensible. The part that spins me for a loop — and I suspect this is why people commonly insist that everything IS energy — is that I don't know of anything that can exist without energy as a basic property. Even quantum fields have zero-point energy at their lowest-energy state. Then there's the idea that energy is that thing which is the particle-producing ripple in the field. So if: a) everything HAS energy as an irremovable and universal property, and b) things are defined (categorized and distinguished) by their properties, does it not follow that the essence of all things is energy? Even those conceptions which are themselves the absence of things (i.e. shadows, absolute zero) are defined by their quality of energy. Maybe I am failing to consider something. Is there anything that can exist without energy? Or is the problem really about the practical issues with using the concept of energy so broadly? Or am I misunderstanding a subtle idea very seriously? Really appreciate your feedback, and the article!

• Every human has age. We cannot imagine a human without it. Does that mean every human IS age? Every human has volume. All ordinary objects have volume, in fact. Does that mean the essence of objects is volume? We must not confuse properties that objects have — sometimes even necessary properties — with what those objects *are*. The fact that energy is a property of all objects and even non-objects reflects the fact that time is a fundamental property of the universe. It tells us nothing about the nature of those objects. Does this help? Nothing is made from energy, but everything has energy.

3.
Can we say that the building stuff of the cosmos is merely two types of vibrations — organized ripples and pseudo-ripples, which we call real or virtual particles — in something we call fields?

• No — you really don't want to approach it this way. The basic stuff is fields. Now you want to ask what fields can do and how that contributes to the stuff we see around us. You can't reduce that to the two types.

• Is field stuff, though? Field is more of an information matrix. Strictly speaking, you have a field of mass or a field of properties, but that doesn't mean the field itself is either mass or energy. I don't think we should give definition to 'the basic stuff' just as yet. I think we need to learn how to distinguish fields from mass-energy, and mass-energy from its carrier particles, for this discussion to have any merit. And why are we so intent on reducing everything to one thing anyway? What good does it even do? Even the scientific world sometimes…

• In my experience, a field is a mathematical construct, nothing more. Problems arise when one attempts to interpret a field as some sort of physical entity. This is not to say physical phenomena do not exist. Rather, fields are a mathematical device (the best we presently have) to describe physical reality at the micro level. It's simply a theoretical framework based on mathematics.

4. There is a contradiction here: 1- E is something mass has; 2- Mass is E/c^2; 3- E is something E/c^2 has!!??

• You're making some classic mistakes: First, #2 is false. Mass is only E/c^2 for a particle at rest and sitting on its own. [If you define m to be E/c^2, then you're using the archaic notion of “relativistic mass”, which particle physicists avoid for several reasons (to be explained soon). And I'm not using “m” in this way anywhere on this website.] For a particle on its own, but moving, E = mc^2 + motion-energy. For a more general system of particles and fields, you also have to account for other contributions to the total energy that cannot be assigned to any one particle. Second, #1 is false. Energy is something that stuff can have. Mass is also something that stuff can have. Stuff is not mass; it's something that can have mass. But not necessarily. Some stuff has no mass — photons, for instance. And it is an accident that electrons are massive; remember that the Higgs field being non-zero on average is responsible for this. Electrons would still be stuff even if the Higgs field were zero on average and electrons were massless. [Experts: I know there's a tiny subtlety with this statement; let it pass. To make the above statement precise I should also turn off some other interactions when I turn off the Higgs field's value. But the substance of the remark is true.] Part of the point of this article (and my earlier particle-antiparticle annihilation and mass-and-energy articles) is that photons and electrons are both particles. They are both stuff. It happens that photons are massless and electrons are massive, so they behave quite differently. But the equations that govern them are very similar, and one should not think of electrons as stuff and photons as something else. Mass is something they may or may not have; energy is something they have too.

• E=mc^2… 1. The energy content isn't determined by either. As you point out, energy is intrinsic to the stuff, aka it already possesses order. To reject the fact that photons have 0 rest mass, you assign them velocity to give them relative mass.
The problem is, if the universe is a closed system there is an objective standard, and all frames of reference consolidate into one objective frame of reference. Relative speed then becomes an objective speed, and the phase space oscillates in a uniform fashion (Higgs field). If it were not so, Higgs field tensor values would vary, and we can only imagine the chaos that would ensue as the phenomena of matter would dissolve along with our physical existence.

5. You say that “matter is always some kind of *stuff*…” and “photons are *stuff*”; this would appear to mean that photons are matter (unless photons are *stuff* that is not matter, of course). Yet photons fit none of your green-paragraph definitions of “matter”. I'm mindful that the whole thrust of the article is that matter is an ill-defined concept (and, as usual, I learned some things from it — thanks!); I am perhaps reinforcing that point by noting that your green-paragraph definitions are not yet enough.

• Photons are “stuff”, but they are not matter — for almost every definition of “matter”. Matter is a subset of stuff, though which subset depends on context. I do know of one or two contexts where photons would be called “matter” too — but these settings are ones that you won't come across often, and usually different terminology is used anyway. What I mean by “stuff”, in general, needs a little more working out. A silly but useful working definition is that something is stuff if it can be used to damage other stuff. I can't damage your cells with mass or energy — I can't make an “energy beam” or something like that. I have to make a beam of photons or a beam of electrons or muons or protons or neutrinos. That “stuff” carries energy, sure, but the energy has to be carried by a physical object — a particle — stuff — for it to be able to do anything. (The particles of dark matter — whatever they turn out to be — are stuff too, though I'd need one heck of a beam of dark matter particles to damage anything!) Fields are stuff too: they can be used to pull other stuff apart. Maybe you can point out a flaw in that definition? A challenge to the reader…

• Space-time can be curved in ways that ripple 'big' stuff apart. Does that mean spacetime itself is a “stuff”? If that's the case, can stuff 'damage' spacetime?

• Sure. That's Einstein's point. You can make two black holes, made entirely from space-time curvature, and arrange for them to orbit each other. The two orbiting black holes form a system — a sort of “atom” — which can be stable for a fairly long time. A sufficiently powerful beam of photons, or electrons, could break that system of two black holes apart. It's not very different from disrupting an ordinary atom using an extremely powerful gravitational wave, which is also possible in principle. In the first example you would be using something which is obviously stuff to damage an object made entirely from curvature of space and time. In the second you would be using space-time to damage an object made from other stuff. Not that doing either of these is practical — but in principle you could do either one.

6. Haha, that is true! I'm not sure Lemaitre's “primeval atom” was much better than “Big Bang.” All the really dramatic names I can think of smack of religion. Besides, who wouldn't want to write a poem about the charmed quark? It's so… charming. (Yes, I'll be here all night, folks.) I've also heard Dark Energy described as a sort of cosmic “pressure”. I'm not sure how valid that comparison turns out to be.
• Yet, his 'atome primitif' suggestion sounded better than his other suggestion, to name it the 'cosmic egg', which was probably inspired by one of many eastern mythologies. That would really have been the worst possible name.

7. I am an IT pro, and we have trouble with overloaded terminology just within our field. As with particle physics, our objects keep splitting into pieces, too. The only thing to do is have fun with it! What are your thoughts about the popular vision of an object made of matter meeting an object made of antimatter? Now that I've read your article, I realize that "an object made of antimatter" is not a well-defined concept, and it might fall apart if we look at what the statement really means, compared with how matter is really composed. Given that "antimatter" is at the center of the popular imagination, what are your comments on hypothetical, macroscopic anti-objects?

• If you could collect enough anti-matter [definition #1] and shield it from all the matter [definition #1] that's flying around (stray electrons and the like), then there would be no problem in principle in constructing anti-salt and anti-steel and anti-cells and anti-cars. The laws of nature are sufficiently symmetric (not exactly, but very close) that anything you can do with matter you could do with anti-matter. And indeed if you brought any significant amount of matter in contact with any significant amount of anti-matter you'd make an explosion. But no one has any practical reason to try to construct large amounts of anti-matter. The closest we've gotten is making powerful beams of anti-protons and anti-electrons (positrons) for use in particle physics experiments. But the amount of energy that you could obtain by slamming those beams into a wall is small. In fact that's exactly what happens to those beams when we're done with them; we slam them into a wall, underground somewhere. The wall does heat up, but mostly because the anti-electrons or anti-protons are traveling really fast and have lots of motion-energy — not because of the energy released when they find an electron or proton and annihilate to something else (photons or pions, for instance). And as far as we can tell, the part of the universe that we can see does not naturally have large amounts of anti-matter anywhere. [If there were regions of the universe with large amounts of anti-matter, at the border between regions of matter and regions of anti-matter you'd expect to see large quantities of photons with very particular energies emitted. We don't see signs of such borders.]

• Thanks, Professor. Two things about our hypothetical antimatter #1 that I hope might surprise me. One is that I don't understand what happens when two macro objects "touch," but I've been told it has mostly to do with the electromagnetic force. Are there subtle effects, like the exclusion principle, that change the way anti-electron shells would behave near electron shells? Two is that I've read some interesting history about the early atomic energy laboratories, and the way that fission materials went "prompt critical" almost always interrupted the nuclear reaction, milliseconds after it began. It was as if the nature of the reaction was to defuse itself, contrary to popular imagination. So when the first few leptons meet their anti-partners, and throw off some lightweight particles, what happens next? Their atoms become ions and their molecules would break apart, for one, if they have time. But how do we expect the nuclei to come in contact?
Would we get more of a chemical than a nuclear reaction? And if there was an explosive force, wouldn't it force the macroscopic objects apart? That's assuming they were solid objects, as in the popular imagination; if gas met anti-gas, or liquid met anti-liquid, the macroscopic dynamics would be quite different. That's why I wonder if, perhaps, objects and anti-objects might behave differently together than the popular imagination dictates.

• Ah, I see what you're asking. There is no question that if you took two cubes, one of matter and one of antimatter, and brought them safely together, the actual contact would be instantly different from the contact between matter and matter. At the surfaces of contact, the electrons on the outskirts of the atoms and the positrons (anti-electrons) on the outskirts of the anti-atoms would start finding each other on very short microscopic time scales, and immediately begin turning into pairs of photons of 511,000 electron-volts of energy each. These "gamma rays" would then create an electromagnetic shower of particles: lower-energy electrons, positrons and photons. Since it only takes a few electron-volts of energy to rip the electrons off an atom (or positrons off an anti-atom), all the atoms and the anti-atoms near the surface would be quickly disrupted, vaporizing the material in that region. The force from the released energetic particles smashing into the remaining atoms of the cubes would most definitely push the cubes apart, just as in the fission experiments you mentioned. So there'd be an explosion, but only of the material nearest the surface of contact, and unless the two cubes were slammed together with enormous energy, as in a fission bomb, you'd only get annihilation near the contact surface.

• I am sure that someday soon people will make many millions of anti-atoms. One just has to remember that a glass of water has something like a million million million million atoms of hydrogen. I might be off by a factor of a thousand or so (I didn't check the number carefully), but you get the point.
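That estimate is easy to sanity-check (a back-of-the-envelope sketch in Python; the 250-gram glass is an assumed size):

    # Rough count of hydrogen atoms in a glass of water.
    grams = 250.0                   # assumed mass of water in the glass
    molar_mass_water = 18.0         # grams per mole of H2O
    avogadro = 6.022e23             # molecules per mole
    molecules = (grams / molar_mass_water) * avogadro
    hydrogen_atoms = 2 * molecules  # two hydrogen atoms per water molecule
    print(hydrogen_atoms)           # ~1.7e25

So the quoted "million million million million" (10^24) is right to within a factor of twenty or so, comfortably inside the stated factor-of-a-thousand uncertainty.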
8. Now you have reduced everything to a word: stuff, which can have mass, energy, can do work… But what is stuff? You said what it can do, but if we want to go deeper, is this the end of our search? It is all some kind of circular definition… some kind of DNA-protein closed cycle, where we are lost.

9. I think it is quite misleading to say such a thing as "you can't have an elementary particle without a field, but you can have a field without any particles", because the concept of a particle depends on the observer: what is vacuum in one frame of reference has lots of particles in another (accelerated) frame. If the Higgs is the caveat of the phrase then, in my mind, not finding the corresponding particle indicates that such a field does not exist. But I'm by no means an expert in elementary particles and could be (very) wrong about this statement. As a complementary note on anti-matter at the large scale, there are also experiments producing a few dozen anti-hydrogen atoms. Of course, nothing macroscopic. Sorry for being somewhat pedantic.

• I did not explain what I meant here, because it is technical, but since you ask… A field that strongly interacts with itself, or with other fields, may have no particle states at all. This is a well-known fact about conformal field theory. There are many concrete examples in the context of solid state physics, and many hypothetical examples that arise in high-energy physics.

Said another way: a non-interacting field always has well-behaved ripples (which in quantum mechanics are made from quanta), a weakly-interacting field has largely well-behaved ripples (though these may have a finite lifetime), but a strongly-interacting field may have nothing resembling a ripple at all. So no, what I said is not misleading — it is the particle picture of fields, which assumes that fields are weakly interacting, that is misleading. And you used that weakly-interacting-field intuition in your comment.

• Hum, that's quite interesting, and a sign that I should be studying a lot more. My comment was really made with non-interacting fields in mind, so I see what I got wrong. Just to make clear: what I meant was that for non-interacting fields the concept of a particle is observer-dependent, because of things like the Unruh effect. But I couldn't agree more with you that fields are the essential concept and not particles (the point I was trying to make). Thank you for the explanation.

10. According to all of your articles, in the end the most fundamental stuff is the ripples in fields. I mean, if fields are extended stuff, ripples are "real concrete stuff relative to our senses" which can have mass, energy, etc. But if fields are stuff, then we reach an absurdity: what is stuff? Stuff is stuff!?

• Fields are not traditional 'stuff' as we know it. They're almost like things that can have stuff in them (the ripples). I think that a nice definition of 'stuff' would be 'particles' (ripples), since they're the things that 'have' all the other things, such as speed, mass, energy and so on, which we usually associate with 'stuff'. But in the end 'stuff' is just a label we attach to things. Matter, particles, objects, squiggles: these are all just names that we use. Nature does not care about our neat little ordering systems, and so sometimes things can get a bit confusing.

11. Great stuff, Matt. Thanks. A parenthetical question: This is a whole new way of thinking for me, having spent my career in applied physics before 1980. In my struggle to adapt to your post-classical, if not post-modern, way of thinking, I am trying to understand where and how the Higgs mechanism appears in the mathematical constructs that underlie the Standard Model (what are those mathematical constructs?) and how we know what decay products to expect from Higgs particle decay. If I wanted to find an answer to those two questions on Google, what should I google?

• Do you understand superconductivity? The photon obtains a mass inside a superconductor when Cooper pairs (represented by a charge-2e scalar field) condense. The W and Z particles obtain a mass within the universe when the Higgs field condenses. The mathematics is almost the same — relativistic instead of non-relativistic, non-Abelian instead of Abelian, but it is the same idea. The most important difference is that we know that for the superconductor, the charge-2e scalar is a kind of bound state of two electrons, but for the universe, we don't know what the Higgs field is yet — whether it is a composite of something else or not, whether it is just one field or several — and that's what the LHC is aiming to find out. As for how the Higgs decays: that's a little more complicated. Did you read what I wrote on Standard Model Higgs decays?
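As a schematic version of the superconductor analogy above (a minimal sketch in the Abelian case only; the electroweak case is non-Abelian and more involved): if a scalar field \phi of charge q condenses with average value \left\langle\phi\right\rangle = v/\sqrt{2}, the gauge-kinetic term

\left|\left(\partial_\mu - iqA_\mu\right)\phi\right|^2 \supset \frac{1}{2}q^2v^2A_\mu A^\mu

turns into a mass term for the gauge field, giving m_A = qv. In a superconductor, \phi is the charge-2e Cooper-pair field and A_\mu is the photon; in the Standard Model, the same mechanism in non-Abelian form gives the W and Z their masses.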
12. Is mass constant? Or should I be asking: is the Higgs field constant? In the beginning of time (and space) there was finite temperature within the absolute maximum of Fermi spheres, the singularity that created the Big Bang. The expansion of this sphere created time and space, and hence a field (presumably the gravitational field). With further expansion and lowering of temperature (energy densities), resonances were created, and hence more fields, like EMF and Higgs. Now, if energy is a conserved quantity, then the sum of all fields must also be constant. So with the continued expansion of time and space (spacetime), the magnitudes of the various ripples as created by the various fields should be reducing. (I say should because the densities continue to lower, both absolutely (larger expanse of the entire universe) and locally around the smaller gravity wells.)

• 1) We do not understand the Big Bang at the very earliest times with the precision that you suggest.

2) Energy is not in any simple sense a conserved quantity in a rapidly expanding universe. Even if it were, fields are not energies, and the sum of all fields isn't even well defined, so it certainly need not be constant.

3) The mass of particles like electrons is not believed to have been constant over time, because the Higgs field was not constant over time. At very high temperatures the Higgs field would have been zero on average. As the universe cooled, an "electro-weak phase transition", which must have happened a microscopic time after the Big Bang began, must have taken place — its details are not yet known, because we don't know enough about the Higgs field yet. But the Higgs field is believed to have changed very rapidly and then settled down to its present non-zero value during that transition.

4) As far as we know, the Higgs field's value, and the electron mass, have not changed a bit since then. In principle they might have varied over space and time, but there is neither experimental evidence nor good theoretical reason to think they actually did, at least within the part of the universe that we can see and over the time since the electroweak phase transition. (For instance, the success of Big Bang Nucleosynthesis in predicting the original helium-to-hydrogen ratio of the universe could easily have been messed up had the electron or proton or neutron masses been significantly different from what they are today.) Scientists continue to try experimentally and observationally to test whether there is any sign of variation.

13. Hi, I was wondering if energy is conserved in general relativity. What I have read on the internet has left me confused. Also wondering where the energy of the cmb has gone.

• Ummm… I don't think I can give a very good answer here that I'm ready to stand behind. Let me say two things. 1) LOCALLY (that is, in any small region of space, for a short time) energy and momentum are conserved the way we are used to. 2) GLOBALLY (that is, across regions or across eons where gravitational fields are very important, such as the universe as a whole) you have to be much more careful about how you define the total energy of a system. It's quite subtle, and can't always be done. And I'm not expert enough to answer off the cuff and get all the details right as to when you can and when you can't, and how you do it in the cases when you can.
[Been too long since I reviewed this subject, which is mostly outside my research…] If what you mean, in asking about the energy of the cosmic microwave background radiation (cmb), is how the photons lost energy as they cooled off during the universe's expansion, one way to answer that is to say that the expansion of space itself took the energy from the photons, and from all the other particles too. But I'm not sure that's super-intuitive. I know an intuitive answer that isn't really correct (namely, to consider how photons in a box lose energy if the box expands). Maybe a general relativity expert here knows a better intuitive and also correct response.

H = U + pV

where H is the enthalpy of the system, U is the internal energy of the system, p is the pressure at the boundary of the system and its environment, and V is the volume of the system. As lup mentioned, this equation is somewhat confusing. It can be interpreted as saying that energy is spent to apply pressure to the boundary and expand the volume. But that interpretation implies there is an external environment outside our universe. Are we living in a bubble within a bigger bubble, in an asymmetric phase within a symmetric environment? Are there other asymmetric bubbles nearby? The thing that amazes me, Professor, is that regardless of which theory you believe, none explains the real nature of energy. Why was the temperature so high at the Big Bang? Is the symmetric phase an "infinitely" stretched spring that, when broken, releases all its energy in a very short time (space), and hence the large energy density and temperature?

• So is energy conserved when cmb photons have their wavelength stretched by the expanding universe? Is this energy helping to expand the universe further, or is it more like a stored potential energy?

• The thing about energy in general relativity is that for it to be defined globally the spacetime must be stationary, which is a way of saying that "the space is the same at every time". The problem is that expanding universes are not the same at every time, so energy is not conserved. With that in mind, the cmb energy has not gone anywhere; it has just disappeared. We can't really ask where it has gone, because that implies it is conserved. Since it's not, the energy has just gone away. I don't think this has any intuitive answer. The closest to intuitive I have heard is the "photon in a box" thing, which I kind of like even if it is not precise. The only thing one must remember is that in general relativity there is no "potential energy" related to gravitational fields, so the energy of the photons is not stored in gravity; it has really gone away. To sum up: the energy of the cmb is not conserved when the wavelength of the photons is increased by the universe's expansion, and the energy is not "stored" in any potential form; it has really gone away.

• I'm sure you have heard of the seemingly obsolete theory of "tired light" – could the "hot" photon's energy not drive the expansion of the universe: in other words, red-shifting because the wave packets are themselves expanding (as water waves dissipate their energy over distance), rather than being stretched to the red because an unknown dark energy is pulling them apart?

• Einstein's equations for the expansion rate of the universe work very well; they are used in deriving the prediction for how much helium there is relative to hydrogen, and in predicting the cosmic microwave background spectrum, both of which agree very well with data.
[A nice (if a little out of date) discussion of the latter appears here: ] This is all done without appealing to anything like "tired light". Moreover, the dominant energy density in the universe hasn't been from light (and other very light-weight particles such as neutrinos) since the universe was 100,000 years old or so. So it's hard to suggest that light could be essential to the expansion during most of the last 13.7 billion years. I'm not the world's expert, but I suspect that with the current precision available in cosmology, there isn't a lot of room left for exotic theories of why and how the universe is expanding. Of course we don't know for sure what got the Big Bang started, though there are plenty of theories of that (under the name of "reheating following inflation").

• Yes, but as far as I understand, Einstein introduced the cosmological constant to make his equations fit observable reality (a non-collapsing universe), rather than this being a mathematical necessity per se, and as you point out he seems to have done so particularly well. I also did not want to imply that dark energy is the dominant energy of the universe – it only has to be stronger than the universal gravitational force, and supposedly it wins the upper hand the further apart the mass particles are. But on the same note, if the universe continues to expand, eventually the photons of the background radiation will be stretched toward the temperature of absolute zero – do they then cease to exist? Do they blend into and become (non-particulate) elements of the electromagnetic field? Or could that result in the end of the expansion phase, with gravity, however weak but now unopposed, leading to the beginning of the hypothesised big crunch?

• Thanks heaps… that's certainly answered my question. The only remaining puzzle for me is that I thought energy conservation was a consequence of the laws of physics not changing with time?

• So the subtlety here is: if time is just something that sits there and the laws of physics operate within it, then yes, conservation of total energy follows from Noether's theorem. But once time starts participating in the physics — as it does in Einstein's theory, where space and time are actually part of the physical phenomena — then to ask whether the laws of nature change with time becomes subtle. For example, you can't even necessarily define a global notion of time in a sufficiently curved space. What survives of our usual notion is that within sufficiently small and weakly-curved regions of space and time, the laws of nature do behave in a time- and space-independent way, to a sufficiently good approximation that energy and momentum are conserved there. This is the notion of LOCAL conservation of energy and momentum. There is still a Noether theorem and still a conservation law, but it applies locally, not globally across all of space and all of time (except in special circumstances, such as a time-independent space-time). (Technically: there are energy- and momentum-currents that are locally conserved.)

• Thanks Matt, that's cleared up a lot of things. Cesar mentioned that the cmb photon energy is really just going away (ceasing to exist) as the universe expands. Can we have the opposite – can we have energy appear that wasn't there before?

• Well lup, yes, we can have the opposite, and it's pretty much standard. The idea behind energy "going away" is that since the universe is expanding it "stretches" the photon wavelength, and consequently its energy decreases. In order to have the opposite effect all we need is a contracting universe, like those cyclic universe models. To be very clear, in those cases there is so much mass in the whole universe that at some point it stops expanding and starts collapsing. During the contracting phase the photon wavelength is "compressed" by the contraction of space, and the energy increases. It's really like watching the expanding universe going backwards in time. But remember that this is just a model; the current astrophysical data support the accelerating expanding universe, which more or less eliminates recollapsing universes of such a naive sort. But from the theoretical point of view there are no problems with photons "heating up".
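A minimal numerical sketch of the stretching just described, assuming only the standard scaling that a photon's energy (and hence the radiation temperature) falls as 1/a, where a is the scale factor:

    # A photon's energy, and so the radiation temperature, scales as 1/a,
    # i.e. the temperature at redshift z is (1 + z) times today's.
    T_today_K = 2.725           # measured CMB temperature today, in kelvin
    z_last_scattering = 1100.0  # approximate redshift of the last-scattering surface
    T_then_K = T_today_K * (1.0 + z_last_scattering)
    print(T_then_K)             # ~3000 K: each CMB photon has lost a factor ~1100 in energy

Run the same scaling backwards (a shrinking scale factor) and the photons "heat up", exactly as in the contracting-universe case mentioned above.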
14. Hi Matt! An experiment consisting of a holometer is currently in the design stages at the Fermilab centre. Headed by Dr Craig Hogan, the aim is to determine if spacetime has holographic properties by attempting to measure a quantifiable Planck unit. I know that a holographic field is a constructive interference pattern with information encoded in the boundary of the pattern. I also know that I am voyaging into the realms of speculation by suggesting this, but could the fermions be nodal points of a standing wave? Certainly, the fact that they are non-locally connected seems consistent with this. If they are, then could this also mean that mass is a measurement of the negative interferometric-visibility density of a Higgs scalar field instead of a particle? Could it also imply that the laws of physics are a hierarchical layer of anti-nodal displacement cycles, generated by an underlying constructive interference pattern state, with the gauge bosons being the anti-nodes themselves? I apologise if these questions seem rather wacky, but ever since I heard of the Fermilab experiment I've been racking my brains trying to figure out how the holographic principle would work in practice, and how it could relate to the search for a non-Standard-Model Higgs effect!

• Much as I like and respect Craig Hogan, I'm pretty skeptical about this experiment, I must say. I suspect that the effects he's looking for are vastly too small to be detected. As far as I can see, there's absolutely no need for there to be any connection between the Holographic Principle and the Higgs mechanism. One operates on the weak nuclear force at the energy scale of 250 GeV or so; the other relates to space-time viewed at the energy scale of 10,000,000,000,000,000 GeV or so. I'm afraid that the answers to all of your questions are "no" — or better, "be a lot more precise". Remember, particles aren't little marbles; they're ripples in fields, and you have to explain the fields first. We have a very successful quantum field theory for fermion fields and gauge boson fields, working in some cases to one part in a trillion; so you have to tell me how you could reformulate all of what we know about quantum fields to make the fermions come out as nodes in some kind of standing wave, interacting properly with gauge bosons that come out as anti-nodes in some kind of standing wave. That sounds like an extremely tall mathematical order, and I have no idea whatsoever how you would start to make it work. You can't do theoretical physics with words, because you end up just speaking ambiguities. You have to do math — that's both the essential part and the hard part, not just because math is technical but because most math you try to write down will be self-inconsistent or inconsistent with existing experiments.
Einstein is widely quoted in our culture. But if you read his papers, you'll find that all those nice-sounding words are backed up with solid calculations — and that's why it isn't ambiguous what he means when he starts talking about the subtleties of special and general relativity. And he always checks that what he's proposed is consistent with existing experiments.

15. In all your posts you mention fields so many times, yet you never explain what they are. The most you have said is: fields are stuff that can have E, m, charges… But is this all we can say to define a field? What are fields? I know that fundamentals cannot be defined, as there is nothing more basic to refer them to. Are we to stop at a word, "field"? Are fields an ontological reality or a mathematical representation of our observations? Forgive my insistence, but I cannot stop in the middle of the road.

• In the current way of thinking about the world, fields are about as far as you can go. In specific attempts to go beyond this way of thinking, fields can be manifestations of other things. For example, in theories with extra dimensions, some (but not all) fields can be manifestations of the shapes and sizes of dimensions that are too small for us to see. In other attempts, some of the known fields can themselves be made out of other fields which are more fundamental. But all of these attempts are speculative; we don't know which ones are correct. So I would say that fields are currently viewed as the fundamental ingredients; that is where things currently stop. Even space and time are to be understood in terms of gravitational fields. However, knowledge accumulates over time, and what we think is fundamental may change. There can also be multiple equivalent interpretations of the same information — two ways of looking at a problem that have exactly the same mathematics and the same physical predictions. Philosophers are frustrated by this ambiguity, but theoretical physicists have learned that we have to remain light on our feet.

• Is it valid to say fields are any change (quantitative and/or qualitative) of one or more quantum numbers over the space and/or time domain? In other words, that they are defined by the topography of spacetime? I speculate this way because, as fundamentals, they must all be derived back from the initial field, whether it is gravitation or something more fundamental, like vortices created by the rotations of three-dimensional space. This is why in my earlier post I speculated that the sum of all fields must be constant, because spacetime has an orderly progression. So my logic is that there must be a constant constraint that drives order; otherwise we would have global chaos.

• It's very important to distinguish what is known (by combining experiments with a clear theoretical framework) from what is speculation and may not end up surviving experimental tests. You say: "as fundamentals they must all be derived back from the initial field". We don't know that. You say: "spacetime has an orderly progression. So my logic is that there must be a constant constraint that drives order; otherwise we would have global chaos." Again, we don't know that. You can't really talk about fields as a change in quantum numbers, no. Quantum numbers are very specific things: they are labels of certain quantum states; in particular, they are eigenvalues of overall quantum operators that are meaningful in specific quantum states. Quantum fields are a more general concept.
For example, the electric charge of an electron is a quantum number; but there is no field for that. And conversely, most states involving the electromagnetic field do not have a definite value for the field, and so there's no sense in which there is a suitable quantum number associated to those states. The electromagnetic field from freshman-year physics is the best place to start really understanding classical fields. Quantum fields are like classical fields in that they can support waves; they are unlike them in that the waves cannot come with arbitrary amplitude, but instead must have an amplitude equal to a minimal value (one quantum) times any integer. I'm afraid they are unlike them in a lot of other subtle ways too.

• ".. but instead must have an amplitude equal to a minimal value (one quantum) times any integer …" Interesting. Is that because we oversimplified the math by introducing renormalization, because of our inability to visualize the physics (having reached the limit of our conscious awareness to map nature's secrets)? I know that Dirac and Feynman were concerned about renormalization because it would contaminate the math in the most fundamental ways and get us trapped in our own math. Does renormalization prevent us from evolving naturally toward more advanced physics? Dirac's criticism was the most persistent. As late as 1975, he was saying: "Most physicists are very satisfied with the situation. They say: 'Quantum electrodynamics is a good theory and we do not have to worry about it any more.' I must say that I am very dissatisfied with the situation, because this so-called 'good theory' does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it!" Another important critic was Feynman. Despite his crucial role in the development of quantum electrodynamics, he wrote the following in 1985: "The shell game that we play … is technically called 'renormalization'. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It's surprising that the theory still hasn't been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate." I am inclined to agree with Dirac and Feynman in that we need better handling of the infinities and constraints to make real progress. I am afraid our ability to experiment is very quickly approaching a stagnation point because of the limits of our machines. I also want to take this opportunity to stress how important it is to convince NASA and their handlers that more resources should be spent on sensors, telescopes, and space experiments than on the very expensive human spaceflight projects.

• If you think renormalization is about infinities (or that it has to do with an inability to visualize the physics), you have not understood it. Unfortunately most textbooks still talk about it in terms of removing infinities. This is deeply unfortunate and misleading, because in fact, even in theories with no infinities, there is renormalization. Even in quantum mechanics, in the anharmonic oscillator, there is a perfectly finite renormalization.
Once you have understood that, then you can understand that renormalization (both perturbative and non-perturbative) has nothing to do with the infinities themselves, but with something more physical and deeper; and you can also see which types of infinities are acceptable in quantum field theory and which types are not. Dirac clearly never grasped this point. As for Feynman, I am sorry I never got to ask him exactly what he meant by his comment. But let's just say that our comprehension of quantum field theory has come a long way since 1985. Unfortunately this is an extremely subtle and technical subject (which is why almost nobody does it in a sensible way) and I doubt I will be able to explain it on this website. At some point I may write a short technical monograph about it. But suffice it to say that I think you're not even close to being on the right track.

• There are interesting discussions going on below this very nice article 🙂 In particular, I feel really affected by the second paragraph in Prof. Strassler's comment above, ending with "… fields can be manifestations of the shapes and sizes of dimensions that are too small for us to see." 😛 😉 🙂

• Hi Matt, since you mentioned that "Even space and time are to be understood in terms of gravitational fields", I want to remark that I have found this concept very tricky to explain to non-experts. We're so used to thinking about fields as things that "live in" static space and time that understanding space and time themselves in terms of fields is hard to wrap our minds around. If someday you have the time and inclination to tackle an article about this issue, I think many of your readers would appreciate it, and I would be interested to see what approach you take.

16. I read in an article about quantum gravity in the Stanford Encyclopedia of Philosophy that fields are specific properties of space-time itself. Is this correct?

• No. They can be. But often they are not, even in string theory. My advice: don't get your physics from philosophy encyclopedias (and I wouldn't get your philosophy from physics encyclopedias either).

17. But my dear friend, physics is an ontological science; it is interconnected with philosophy. Anything you say about the fundamental level of existence IS philosophy.

• (THANKS). This is an interesting point, and I do think I disagree. Physics is interconnected with philosophy, yes. But physics is a quantitative, predictive enterprise. Theoretical physicists will often accept levels of understanding or ambiguity that are unacceptable to philosophers. (And to mathematicians!) Collectively, we are typically much more practically minded than our colleagues in either of these subjects. This allows us to make rapid, but often very ragged, progress. Often when we learn how to calculate something, it is some years, even decades, before we understand what we've actually learned well enough for either mathematicians on the one hand, or philosophers on the other, to engage with it. For instance: what people said about particles and fields in the 1950s, before quantum field theory was understood at the much deeper level that is available to us today, was in many cases deeply misleading philosophically. The story I tell you today is based on insights that emerged in the 60s, 70s, 80s and 90s. Indeed, a good chunk of what I learned in grad school was misleading philosophically.
Meanwhile mathematicians still haven't figured out what we're doing in quantum field theory… and I wish they had, because there are many puzzles about it that we can't solve.

• "Physics is a quantitative, predictive enterprise" up to the point where we decide what DoFs (degrees of freedom) and what formalism to use when we construct models for experiments. If we choose the wrong number of DoFs or the wrong formalism, we find the accuracy is OK, but not great; and the inaccuracy gives us precious little clue as to what different DoFs or formalism to use. We can't do much better than guess again. Hence the enormous disparity of models that are published in journals. We also can't measure the Lagrangian in a way comparable to measuring the field strength, say, supposing we have managed to guess the right DoFs and formalism. The process of guessing again is done slightly better by some physicists than by others, but that x-factor is not as quantitative and predictive as we'd like it to be. We can call this guessing Foundations of Physics (or we could call it Philosophy done by experienced Physicists, but that's just names, silly to argue over), in contrast to Philosophy of Physics (Physics done by Philosophers); but there's a lot of crossover between these two academic communities, with good ideas on both sides being taken seriously by people on both sides. Of course it's much easier to generate bad ideas when the subject matter is beyond computationally complex, because we don't know what questions are possible. In the end, however, we can waste 50 years on quantitative predictions if we are using the wrong DoFs and formalism, so it's worth some people working concurrently on what we think we're doing, even if many of them waste their time.

18. I think no time is wasted once we are on the grand march to understand existence. No one will put his hand on the ultimate truth, but science and philosophy are two faces of the same coin… the sacred search for what IS and our place in it. It is the most precious effort humans can perform; even a simple layperson's question can lead to some great answer. It is our duty and our destiny as humans to think, to reflect, and to wonder… Nothing is greater than our feeling of awe in front of beauty, design, perfection, in a realm which is not perfect itself… This is the meaning of being human: to enjoy life, to love, to care, to share, to embrace the whole of creation.

19. Once upon a time in space, I tried to write a thesis. I went back in time and forward in space… today became yesterday and zero was squared… BANG!!!
The singularity of zero was infinite (-0.0000…) and eternal (+0.0000…), and thus the first law was laid, from which the universe arose: the expanding singularity of infinite totality, a total of 1 and a value of 0. As a totality of one singularity, one could not be measured; therefore totality was 0- or 0+, depending on direction as it relates to the opposing direction, giving rise to everything that followed. What I speak of is a singularity of time, but it could easily be mistaken for the Higgs and the Higgs field, or possibly, on a larger scale, dark energy and dark matter. It is very hard to make a strong case that includes all that this theory covers without producing a thesis, so just a little snippet for now, as I reduce it to simplicity: it is a binary universe, where zero has two values, 0 and minus 0; from minus 0, 0 has the value of an expanding 1, or 0.9999…, and from 0, minus 0 has a negative infinite 1, or -0.9999… Time is the equal and opposite reaction, preceding the action of forming space. Just as all mass is basically congealed energy, all of space is congealed time, both the positive and the negative: quantum at the particular level, yet in general it is relative. Thank you for reading, and I value any input.

20. A good direction to take the discussion: Why do fermions feel more like "stuff" than bosons? This brings in a little quantum mechanics, which you might not be ready to do yet. But the "rule" that two fermions cannot be in the same quantum state, while bosons can, seems central to a working definition of "matter." Intuitively, we expect matter to "take up space," which fermions do, but bosons not so much. Of course fermions don't take up space by themselves, since any collection of fermions is always accompanied by bosons transmitting forces between the fermions, and so on. Thanks for all the great articles!

• Well, if you keep running with this too far you start to stumble… For example: in a world with only gluons and no quarks, the gluons would bind together to make hadrons that are called "glueballs". Those take up some space. Granted, you couldn't make anything macroscopic out of them. If the electron were a boson, you'd still have hydrogen. That takes up space. And in that world hydrogen would be a fermion; you could make things out of it. In our world, hydrogen is a boson, while deuterium (heavy hydrogen, whose nucleus contains a neutron as well as a proton) is a fermion. Is one of them matter and the other not?

In certain hypothetical universes, you could make a proton-like fermion out of combining a fermion and a boson. And you can make bosons out of combining fermions. Even in our world, protons contain quarks (fermions, which you'd like to call matter) and gluons (which you'd like to say aren't part of matter so much as representing the force that holds the fermions together). But you can imagine a world in which some quarks were fermions and others were bosons, and the number of types of gluons was different — and then you could get fermionic proton-like objects which were made from quark fermions and quark bosons as well as gluons. What would you call the quark bosons? Matter or not?

The theory of supersymmetry causes a problem, because it combines fermions and bosons in pairs. It doesn't make sense to say these pairs are part matter and part force; when you write the equations down, the boson-fermion pair appears in a single mathematical object. And in string theory all of these particles can be made from the same type of string. Let's not forget that what we call "dark matter" may be made from bosons.
So there are all sorts of ways in which this line of thinking can break down. At different layers of structure, what you'd want to call matter might change in a profound way. I don't view the current distinction as likely to survive too much further into the future.

• Which is valid: is matter standing spherical waves oscillating at the Compton wavelength, or is matter a Fermi sphere with a radius such as to give a Fermi energy equivalent to the mass-energy of that particular particle? I can understand the Pauli exclusion principle via the Fermi sphere definition, but not with the standing wave theory. Are we missing some math?

• Well, calling fermions the essential ingredient of "matter that takes up space" still seems consistent to me, as long as we add a caveat about how closely you look. An atom or molecule can have the properties of a boson when viewed from the appropriate distance, but when you get "too close," the properties of the fermions that it is made of become important. How about, for now, saying "matter that takes up space" must contain quarks, leptons, or both. Whether or not it has the properties of a boson when viewed from a sufficient distance does not keep it from being "matter that takes up space."

• In a theory with no quarks, the gluons would bind up into hadrons called "glueballs". These hadrons would take up some space too, individually. Granted, I couldn't make a lattice out of glueballs. But I suspect there are bosonic systems where I could make a lattice, if I could arrange for some short-distance repulsion from a short-range force. And it is possible to build fermions as solitons in a theory with only fundamental bosons. So this still raises questions about your dichotomy… I think you're still relying on very specific properties of our universe that would not necessarily be true in other ones.

• How do theoretical physicists verify the validity of using the gamma function and the rather simple energy–momentum dispersion relationship, E = ap^s, for deriving the thermal de Broglie wavelength? Does this derivation fall within the argument of whether renormalization is appropriate for quantization of the "quantum nature" of a gas? This approximation seems so critical in defining the true nature of the vacuum: how has it been verified, and, conversely, how do you rule out dimensions above the usual 3 that we perceive to live in?

21. Now if the mass of the proton is all of its quarks' E divided by c^2, how can we express the mass of the electron or of any primary "particle"? If an electron is at "rest" – whatever that means – what is the meaning of its mass? Even, what is its E then? Are we running in a circle?

• The mass of the proton is not merely all of its quarks' energies E divided by c^2 (and to the extent a more precise statement is true, it is true only of a stationary proton). It's more complicated; that article is coming. An electron can have definite momentum (including zero) as long as it is in a state where all position information is lost. So it can be at rest, yes. And I can figure out its mass-energy E_rest in many ways experimentally. One way is to bring an anti-electron close by, watch the two annihilate into two photons, and measure the energies of the photons (which are pure motion-energy, since the photon has no mass). Since energy is conserved, and the initial energy was (to a very good approximation) the mass-energy of the electron plus the mass-energy of the positron, which is 2 E_rest, the energy of each of the two photons is equal to E_rest for the electron. Divide by c^2 and you have the mass of the electron. See for example
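That measurement is simple enough to turn into arithmetic (a hedged sketch in Python; 511 keV is the standard measured annihilation-photon energy):

    # Each photon from electron-positron annihilation at rest carries E_rest ~ 511 keV.
    E_rest_eV = 511.0e3            # photon energy, in electron-volts
    J_per_eV = 1.602176634e-19     # joules per electron-volt
    c = 2.99792458e8               # speed of light, m/s
    m_electron_kg = (E_rest_eV * J_per_eV) / c**2
    print(m_electron_kg)           # ~9.11e-31 kg, the accepted electron mass

Dividing the measured photon energy by c^2 really does land on the tabulated electron mass.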
22. Hello Matt, it is incredible what an interesting subject physics really is, how much there is to say about it, and what thorough knowledge you have of every detail of physics! It reminds me of Feynman. For me, having studied theoretical physics but not actually working as a physicist, the first two volumes of Weinberg's "The Quantum Theory of Fields" were a gift from heaven. Thanks to these books I understand physics at a much more fundamental level than when I was a student, when they talked about gauge invariance, fields, particles, representations, CPT and renormalizability, and I had no idea how they were all linked together. A remark: To call matter particles fermions is a good thing because of the stability they give to matter due to the Pauli exclusion principle. Usually we see the bosons as force particles, and usually this is not too bad a picture. But what about light-light scattering? Here, in the box diagram, the fermions are the force particles.

• Notice Mark Wallace's comment here, and my reply. You'll see that I am a little cautious about calling fermions "matter particles" because it runs you into trouble. In fact, your remark suggests another problem. Indeed, pairs of virtual fermions (more precisely, appropriate quantum disturbances in a fermion field) can cause forces. (Though we should recall that virtual particles aren't really particles: ) In fact, the force which holds a nucleus together is, from some points of view, due to pions — which are bosons, but are made from fermions (quarks and anti-quarks). So now we have a force particle made from matter particles. Which means our naming scheme is a mess. There is also a (subtle) way to make fermions from bosons (the word Skyrmion appears here). You see that this distinction just causes problems. At some point I think you have to take the physical phenomena for what they are and not spend too much time worrying about finding the perfect naming scheme for them.

23. Now I really wonder: what is the source of the intrinsic movement/momentum in all sub-atomic entities? What "pushed" the quarks or electrons to always be in motion? You said – nothing is at rest – well, what physical mechanism is the CAUSE of all that movement? I am beyond the equilibrium of some equations; I am asking about the CAUSE, THE URGE, THE DESIRE TO BE IN PERPETUAL MOVEMENT! Conservation of energy? But if movement was generated, what caused its generation?

24. Let me be bold and see if I, an "outside observer", can "cause" a "disturbance" and create some "fire" in this "medium". I say, for lack of a better theory, that God caused the universe to ignite, and hence the Big Bang. The boundary of our math only goes as far as the time-energy uncertainty principle, which was given in 1945 by L. I. Mandelshtam and I. E. Tamm, as follows. For a quantum system in a non-stationary state ψ and an observable B represented by a self-adjoint operator, the following formula holds:

\sigma_E \, \frac{\sigma_B}{\left| d\langle B \rangle / dt \right|} \geq \frac{\hbar}{2}

where \sigma_E is the standard deviation of the energy operator in the state ψ, and \sigma_B stands for the standard deviation of B.
Although the second factor on the left-hand side has the dimension of time, it is different from the time parameter that enters the Schrödinger equation. It is a lifetime of the state ψ with respect to the observable B. In other words, this is the time after which the expectation value \langle B \rangle changes appreciably. This principle says the quantum state ψ cannot stay the same forever, since that would mean infinite energy; I hope I am right 🙂 So, once an external disturbance ignites the charged initial state, change will continue until all the useful work is spent.

25. One thing has always puzzled me about dark energy. It is supposed to contribute 74% of the mass-energy density of the universe, but at least naively it doesn't even seem like it should be commensurate with energy. Obviously that's wrong. I'm dimly aware that there's such a thing as the stress-energy tensor. But I would have thought "density" would be a component of the tensor, and it seems like the 74-22-4 decomposition must be based on some kind of norm? I guess my real question is: what does that 74-22-4 decomposition mean?

26. One of the great qualities I like so much in what Matt writes is his total freedom from any prejudice and pre-conceptions; he gives us the state of the matter with a true, honest description. So let me state what — being a non-physicist — I understand as the core of all that has been said: 1- We only know that there is something/stuff, which is the most (maybe) fundamental kind of physical existence, which we call fields, and of whose reality (the thing as it is) we know nothing. 2- That stuff has properties we can observe, which we call m, E, p, interacting according to pre-set codes we call theories; these theories are our POINT OF VIEW AS OF TODAY, never to be confused with ultimate reality. Now the main essential difference is this: Matt never claimed that what he says is the final ultimate truth, in contrast to 99% of articles and books, which claim that what they present is exactly that… Thanks Matt, this is the true, honest way of doing science.

• I think you are more or less stating my point of view, yes. I don't know what ultimate reality is, and I have no idea how I would come into contact with it. I just know that we have found ways of classifying the objects in the world such that we can predict in great detail how they behave. I can't tell you that classification is unique or complete. Along these lines I think it is also important to keep in mind that we, as minds, never come in contact with anything physical at all. See the table across the room? What do you know about this table? The only thing you know is the image created somehow in your brain, formed on the basis of electrical impulses down your optic nerve from your eye, which in turn are based on photons impinging on your eye, some of which came down from the sun or from the light bulb in the room, and bounced off the table in just the right direction to enter your retina. There are many steps from your image of the table to the table itself. Even when you touch the table, what you feel is in your brain, created from nerve impulses sent from your fingers, from nerves firing in response to the deformation of your skin by the inter-atomic forces between your skin and the table. Your brain is not in contact with the table. What you feel is in your brain, not in your fingers. Our senses are no different from the measuring equipment used by scientists, allowing us the ability to detect aspects of the world around us.
What we know of the world, through our natural sense organs and through the artificial sense organs of scientific experiments, is always indirect.

27. Objection: NO images or feelings are IN the brain. For the first time in all your presentations, you decreed/decided on something where no shred of evidence exists as to the ultimate reality of consciousness/feelings/concepts, etc. David Chalmers wrote an article titled "Consciousness and its Place in Nature"; that was a prejudice. I wrote to him asking: how can you decree that its place IS in nature while your article is searching for its place, with no conclusion reached? That is what I call pre-conception/prejudice, where a scientist decides/decrees a point of view as fact. I really hope that some day science can free itself from decrees based ONLY on relative, time-dependent observations, far from final absolute knowledge.

• You are right, I did not phrase this well. Your statement is, I think, a little too strong — we do know that there are electrical phenomena occurring in the brain that are related in some way to the things we see and think, and we know that damage to areas of the brain (strokes, direct injury, disease) results in correlated damage to conscious experience. But how they are related, we do not know. So I would say that it's not that there's "no shred" of evidence — just that there's no clear understanding of the meaning of the evidence. In any case, all I really wanted to say is that conscious experience does not in any sense involve a direct encounter with the physical objects of which we are conscious. What conscious experience itself arises from, I certainly don't know.

• Neurons —> Bosons
Synapse chemicals —> Fermions
Eddy currents across the neuron membranes —> Field(s) (EMF)
Transfer of chemicals across synapses —> Fields (Colors)
Synapse firings —> Atoms (type)
Sequence of synaptic firings —> Ensemble of atoms (hadrons)
Thoughts —> Relativistic structures

Hence, one can speculate that our consciousness is part of the universal consciousness, and that our body, including the wiring in our brain, is just one more fiber, or group of fibers, of the overall cosmic quilt.

• Um — you can speculate all you want, but there's no math on the left-hand side of your correspondence, while there's a huge amount of detailed, predictive mathematics on the right side. The reason we take the right-hand side seriously is that it goes along with mathematical equations which predict, correctly, the results of millions of experiments. If you can't make the link from the right-hand side's math to some corresponding math for the left-hand side, then we have no reason to believe any correspondence of the form you suggest exists. For example, fermions satisfy a Pauli Exclusion Principle. Are you suggesting synapse chemicals satisfy a similar principle? Do neurons form condensates the way bosons do? Atoms can be bosons or fermions; are you suggesting that synapse firings can be neurons or synapse chemicals? I insist on precise statements. Because that's what's needed for science to get done.

• I am glad you brought up Pauli's Exclusion Principle. I can understand the Pauli Exclusion Principle via the Fermi sphere definition, but not with the standing wave theory. Are we missing some math?

• I think you are confusing some things. Atoms involve a nucleus surrounded by electrons which are standing spherical waves (or more complicated standing waves).
The Pauli Exclusion Principle simply says that no two electrons can be in the same standing wave if their spins have the same orientation. Metals involve matter in which the electrons in a given volume form a Fermi sphere. NOTE: this is not a sphere in physical space. It is a sphere in an abstract space ("momentum-space"). [The standing waves that electrons in atoms occupy are spheres in physical space.] The vacuum does not have an associated Fermi sphere. The mass-energies of particles in empty space are not associated with a Fermi energy.

• OK, thank you for clarifying. It's been a while since I've done some of this math, so I am quickly trying to catch up before diving into B-E and F-D statistics. So, to help me visualize: the Pauli Exclusion Principle says that if the standing waves are in phase they cannot interfere, due to possible 'beating', and hence would tend to infinite resonance (infinite energy, which is not allowed by conservation?). Conversely, a standing wave with a 180-degree phase shift will cancel the electron's wave and bring it down to the ground state (Dirac's antiparticle? What does he mean when he describes the antiparticle as a hole? Is not opposite phase the same thing?). Are the different fermions basically standing waves of different amplitudes and frequencies? Why are the half-lives different? So, is the Fermi sphere the "mechanism" for transferring momentum from one standing wave to another? How does that work? Superposition? Final question: do the standing wave's ripples propagate to infinity, or do they reduce down to the ground state at some radius? Is there any association between this radius and quantum entanglement?

• I'm afraid that's not what the Pauli Exclusion Principle says. You have to first work out, for a particular physical system, what its one-electron states are; then the exclusion principle says that nature cannot put two electrons in the same state. This is not something that I know how to visualize, because it involves a quantum mechanical effect for which there is no visualizable analogue. Not all facts about quantum theory can be represented by a picture in the mind; this is part of what makes it hard.

• PS: I apologize for my hieroglyphics, but my brain works better with images. I can sit and study a math page and get it very quickly, but the next day I will have forgotten the details, and only the images that were created remain, for a long, long time 🙂 Can one interpret this as saying the second "electron" wave cannot superimpose over the first "electron" wave because the "electric charge" imbalance will "push" the second wave to a different phase, a higher state? So is it the combined nucleus-electron interaction that prevents the second "electron" wave from coming too close to the nucleus? What about two loose electrons: can they superimpose, or is it that the probability of two electron waves coming close to each other is nil? Does the exclusion principle have anything to do with the Z and W having mass, or is that solely because of the variants of the spin states of these bosons? If I can speculate again: I have seen the derivation of the equations for the Higgs field, and I somewhat understand the definition of symmetry "breaking" to give the mass term. But physically speaking, is it correct to say that it requires the interaction of two bosons to create a high enough resonance in the composite wave to give it "mass"? But, again, why do different fermions have different half-lives? In other words, why is the electron so stable while the other fermions decay so quickly?
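For what it's worth, the exclusion rule in the reply above can at least be written down compactly, even if it cannot be visualized (a standard two-particle sketch, not a full treatment): a two-fermion state must be antisymmetric under exchanging the particles,

\Psi(1,2) = \frac{1}{\sqrt{2}}\left[\phi_a(1)\phi_b(2) - \phi_b(1)\phi_a(2)\right]

and if the two one-electron states coincide (a = b) this wavefunction vanishes identically, so there is simply no state with two electrons in the same one-electron state. The principle forbids a state from existing; it is not an interference or "beating" effect.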
• One physical object we do experience more directly is our own brain — or at least parts of it. I think that because certain brain activity appears to have SUBJECTIVE sensations associated with it, there is a certain aspect of the reality around us that is not accessible to our scientific instruments. What do you think?
• lup, you say… I wonder. There are some very interesting ways of configuring "scientific instruments" to do a lot of what our own brains can achieve. Take a look at this very cool stuff:
28. Back on the subject of photons losing energy in cosmic expansion. If we're to view that as the result of the absence of any time-like Killing vector, and hence no energy conservation, how are we to view gravitational redshift in a Schwarzschild geometry, where there is time-translation invariance?
29. Let us do a mental experiment: a static closed universe where only one electron at rest resides. What is the physical meaning then of its m? E/c^2? Well, what is E? mc^2? See what the problem is? In all your posts E and m were never defined independently of each other. So, what is E ALONE? And likewise for m. Here we have one electron really at rest; what is the physical reality of its E, or of its m? Are both interconnected in a closed circle where they are in principle beyond our understanding? Do E and m really have no independent physical essence that we can ever grasp? Then what is the m that belongs to that single static electron? Is it a scale by which we measure the "amount"/"quantity" of ripples?
30. Most people are making a major category mistake with respect to neural networks: neuron networks are designed to achieve specific goals within the chemoelectrical medium of brain and body, while artificial neural networks are systems of equations designed so that, with a system of input-output and a pre-set goal, they converge to accommodate similar goals. What is missed by most people is the fundamental category chasm between chemoelectrical output and sensations or feelings. You can never, in principle, get feeling as output while your input was electro-chemical impulses; only monists say that nonsense, for a very simple reason: this nonsense is the ONLY one within which materialism is possible. I was obliged to write this to clarify some mistaken comments written above.
• If I may concur with this in a simple way: one must distinguish between systems that appear to be conscious and systems that actually are conscious. I believe the rest of you experience consciousness because you are similar to me, and I know I experience it. But I don't have any way to actually check experimentally that you are experiencing what, from the outside, you appear to be. That fundamental problem — the lack of an experimental test of the experience itself — will make it difficult to determine whether a machine which behaves as though conscious actually is conscious. And a question for which there is no experimental test, even in principle, lies outside the reach of science… so until someone invents a convincing test that establishes whether a particular creature, natural or artificial, experiences its apparent consciousness, I cannot easily be convinced that the issue is ever going to be understood scientifically.
• The thing that we are most familiar with is also the most difficult and puzzling of all known phenomena. But nature creates it effortlessly every day.
31.
Neurons and neuron networks can be reduced to the realm of forces and particles, while sensations and feelings can never be reduced to anything within the material universe; as such, all talk about machine consciousness is completely void. Beware of the A.I. deception, with a completely material category claimed to represent a completely non-material category. But if science retreats, here come logic and rationality — a law, a necessity: any non-material category can never in principle be reduced to a material category. Sensations and feelings can never be obtained from fields, particles and forces. This is a settled fact once all concerned are free of pre-conceptions, prejudice and materialistic naturalistic philosophy. Any individual is free in his stand, but once he addresses the public, no personal world view is allowed to pollute the innocents.
32. TO LUP: Again you are trapped in the fallacy of deciding/decreeing a stand where no real scientific proof exists and all logical/rational understanding converges. You decreed that NATURE creates consciousness; this is a totally void statement. What is nature? Laws? Fields? Both have no power to implement anything in reality. Nature is the material universe, while consciousness is beyond m and E. What is your proof for your decree? See what I mean? This is exactly the stand 99% of scientists adopt — to claim the unproved as the ultimate fact. We need to be humble in front of the majestic creation of GOD, seeing that human consciousness is the ultimate awe-inspiring reality, as it is the object of feeling and the feeling itself. You need to read lots of sources just to start to feel the awesome, tremendous meaning of the reality of what made you understand and decide. Good luck, my friend.
• Did you really have to bring God into this! Nature is what I would regard as the knowable reality — whatever affects, or has observable effects in, our reality. Laws describe what we observe in reality, and fields are what we call certain entities with certain apparent characteristics. The things we describe do affect our reality, but we can only name the properties they possess, which is equivalent to saying this is how they affect our reality.
33. To —: I was just trying to make the point that we have no idea how to build a robot that is conscious, but a fertilised human egg will divide many times over the course of 9 months, according to the laws of chemistry/nature etc., and produce a baby that is conscious. And it has done so countless times since the dawn of the human race. I agree that conscious experience appears to be something completely different from what we call the materialistic world (for a start, it is a private and subjective, not objective, "thing"), but wherever it appears (at least in this universe/reality) there is also this biological organ we call a brain present. Surely you see nature as GOD's design — a nature that is able to support the emergence of conscious experience. Perhaps that is a better way to say it.
• Sorry, but consciousness is a manifestation derived from the billions of permutations of electrical/biological signals. When we are awake we can control our thinking via negative-biofeedback processing of billions of "digitized" thoughts. And when we sleep, because we have no conscious control, our dreams become more chaotic and random in nature; but those we remember, the ones channeled into conscious thoughts, make sense to us because we created them during our waking phase.
If there is a universal consciousness (and I believe there could be, due to the striking similarity between the universe and our own brain structures — see the wonderful video below), it would not belong to God, since the universal consciousness would be like our own: manifestations derived from the billions of permutations of particle interactions.
34. If Matt has trouble defining the rules of the game, perhaps we are just goldfish in a bowl. No. I refuse to give up on science.
35. I was reading through this and several comments, and all seem to agree that the Z and W bosons of the weak force act as massive particles because of the Higgs field. But earlier I was reading Fear of Physics by Lawrence M. Krauss, and came to the conclusion that these bosons act as massive particles because of virtual particles. As virtual particles pop into existence, it takes energy to do so, so they pop back out to conserve energy and momentum. But suppose two virtual particles are attracted to each other, and such pairs, instead of just one pair, fill the entire system being observed. If these particles were thereby adjusted so as to have less energy than in a system with no such particles — since two objects bound together have less energy than two separate objects — the system could fill with these particles, which could in turn affect the weak-force bosons, causing them to act massive, for the same reason that photons act as massive particles in superconductors. Either I am entirely wrong (very likely), or is it possible that these virtual particles are themselves the Higgs field, or that these virtual particles are waves in the Higgs field, rather than in the specific weak-force boson field? P.S. This is a great site for the novice high-school student of 16! Thank you for such a great resource, professor!
That conclusion is not correct. One thing you should know: `virtual particles' are not particles (Krauss is not alone in not explaining this well); you should read at some point. Also, have you read the Higgs FAQ yet? If not, you should. Now — there are two separate questions: 1) What does the Higgs field do? 2) What is the Higgs field made from (if anything)?
Answer to 1): The Higgs field gives masses to the W and Z particles; and a Higgs-like field (often called the Landau-Ginsburg field) gives mass to the photon inside a superconductor.
Answer to 2): It could be a single fundamental scalar field, or several such fields, or it could be made as a bound state (simple or complicated) of other particles. The Landau-Ginsburg field turns out to be a field made from electron-electron bound states (Cooper pairs) bound together by phonons. That need not have been the case; it could have been a fundamental field of spin two, or a much more complex object that Krauss would have had even more trouble describing, and it would still have given mass to photons in a superconductor and given us all the phenomena of superconductivity. The Higgs field CANNOT be of a precisely similar form as the Landau-Ginsburg field; Cooper-pair-like objects would break Einstein's relativity equations. There is something analogous (called `technicolor'), but then the binding between the virtual particles that make up the Higgs field must in such a case be much, much stronger, relatively speaking, than in a typical superconductor. (Incidentally, if there really is a Higgs particle with a mass around 125 GeV/c-squared, technicolor is significantly disfavored.) So you see that there is no contradiction between the two statements; they are simply operating at different levels. Question (1) is about what the Higgs field does, which we know, but knowing the answer does not answer question (2); question (2) is about the very nature of the Higgs field, which we do NOT know, and that's why we have a Large Hadron Collider, to help us answer this question. Similarly, for superconductors, the answer to question (1) was known long before the answer to question (2).
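• For readers who want a little more of the mathematics behind "a field gives mass to a particle", here is a schematic, stripped-down version of the standard mechanism — numerical factors and conventions vary between textbooks, so treat it as a sketch, not the full Standard Model algebra. If a field H interacts with a particle's field A with strength g, the equations contain a term of the form g^2 H^2 A^2. When H has a nonzero average value v everywhere in space, that term becomes

g^2 \langle H \rangle^2 A^2 = g^2 v^2 A^2

which has exactly the form of a mass term for A, with the mass proportional to g\,v. When instead \langle H \rangle = 0 (as for the Higgs field in the very hot early universe), the term vanishes and A behaves as a massless particle. The same schematic applies to the Landau-Ginsburg field and the photon inside a superconductor.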
36. I am a bit confused: I find your distinctions very insightful and helpful, but my understanding is not satisfied:
1. What is energy, what is matter? I think you should mention positive definitions, so one can refute them; otherwise they are too vague.
2. Nothing wrong with vagueness, though; words as such are problematic, but when words have an intrinsic failure, I tend to attribute it to the failure of the "way we observe the world as humans", i.e. when our senses cannot grab the world. Example: time in SR as a 4th dimension (or 11 dimensions in string theory, or particle-wave duality) — the 4th dimension is nothing but a poetic metaphor of the universe that helps me to understand the meaning of it, knowing that I am condemned to observe my surroundings in 3 dimensions only and cannot really grab this dimension. Although in physics these concepts make perfect sense, they are limited to mathematical language and cannot be translated into a human one. YOU are doing a precious job in helping me and others to have a better understanding, but if words fail, it is because they just do not match our human perception. BTW, no problem for me, as after Copernicus I lost hope…
3. A characteristic of an object is also an object, UNLESS this characteristic is an a-physical attribute. Example: a wooden table vs. a nice table. As I can assume you referred only to physical attributes, I cannot see how they are not part of the object itself.
4. "…I have height and weight; that does not mean I am height and weight" — this is at best a problematic example. What the "I" is is a long philosophical question, and unless you are a pure materialist, the "I" to which you refer is not a physical object and therefore cannot be defined by these adjectives (if you are a materialist, you should define what that "I" is).
37. You say that a photon's energy is equal to its motion; every calculation I have seen says its energy is dependent on its frequency, not its motion. You also say that a photon is not pure energy, but that it is a particle made up out of stuff. Since a resting photon has never been seen, are you sure that a photon is not a wave/particle potential of pure energy? It seems to me that the reason a photon can travel at c is that it is massless; hence it experiences no "drag", per se, as it travels. If you are saying that a photon is not energy but has energy, then it cannot be massless. It seems that in a photon's wave/particle duality, as a particle it has mass and interacts, while as a wave it is pure energy and has only momentum. A photon as pure energy explains how it can be massless and yet still carry information as a wave.
38. Okay, I checked out the links you provided — very good information; any knowledge on this subject I welcome into my deposit of knowledge. I understand your phrase "motion energy" in the context it was used now.
As far as mass experiencing drag per se: I was using the word in the context of a photon moving in space at c; having no mass, it experiences no loss of motion. Maybe the word "drag" is not the best to use; maybe "friction", or another word, is more appropriate. As far as particle/wave duality: my understanding of a photon is that it can be either a wave or a particle depending upon observation and/or interaction, as illustrated in the double-slit experiment. As far as "mass": I see this the same way as the emergence of consciousness. For example, one neuron cannot create consciousness, just as one particle cannot create significant mass; it is the number of neurons and the complexity of the neural networks that create consciousness. I see object mass in the same way: once a grouping of particles comes together into a certain "level" of complexity, then "mass" follows in the same way that consciousness follows. That is how I see the "Higgs" field: this field is the result of this complexity; the more complex the structure, the more mass it has, and the more it is affected by gravity. Hence the heavier elements are more dense and thereby have more mass — gold or lead being good examples, though there are of course many exceptions. And as far as a photon traveling at c as a wave of energy, I use this observation as my basis, as one example. This link shows a cluster of stars 1 billion light years away, which means that each photon from this star had to travel for 1 billion years in order to reach our solar system. (More amazing to me is the fact that some of those stars probably do not even exist anymore, yet we are still able to see them as they existed 1 billion years ago.) In order for the photons to travel over space and time for 1 billion years — it seems to me that the probability that a particle could travel these distances and time frames at c for 1 billion years and not decay is remote. I see a photon as being a "timeless" carrier of information, and I do not see how a particle could travel at c for 1 billion years and not decay. It seems more probable that, in order to do that, it would have to be a wave of energy to accomplish this amazing feat. Of course, since no human being has ever "observed" a photon traveling at c, or observed one at "rest" — i.e., snapped a photo of one in either state — I think it is too early to come to any conclusions as to exactly how a photon, for example, can travel through space and time for 1 billion years, all the time maintaining c, and not decay. I appreciate your quick response to my inquiry, and realize that so many of these answers are not going to be available until the technology to settle some of these big issues comes into play. But I am always striving to improve my outlook and knowledge on these matters, and value any and all inputs I can gather. Thank you.
39. As a side note: the total energy contained in an object is identified with its mass, and energy cannot be created or destroyed. When matter (ordinary material particles) is changed into energy (such as energy of motion, or into radiation), the mass of the system does not change through the transformation process. This does allow for a photon to change from a wave of energy into a particle, and so on.
Maybe this is how it travels at c and never decays: as a wave of energy it is basically eternal, and therefore could travel at c for infinity — or for 1 billion years, for example — and then, as interaction or observation dictates, it changes into a particle and interacts as dictated by the contact. Your thoughts on this?
40. Another side note (I should have put this all in one message, I apologize): the initial energy for a photon comes from its source (for example, a star). Energy may be stored in systems without being present as matter, or as kinetic or electromagnetic energy. Stored energy is created whenever a particle has been moved through a field it interacts with (requiring a force to do so), but the energy to accomplish this is stored as a new position of the particles in the field — a configuration that must be "held" or fixed by a different type of force (otherwise, the new configuration would resolve itself by the field pushing or pulling the particle back toward its previous position). This type of energy, "stored" by force fields and particles that have been forced into a new physical configuration in the field by having work done on them by another system, is referred to as potential energy. Any form of energy may be transformed into another form. For example, all types of potential energy are converted into kinetic energy when the objects are given freedom to move to a different position. This mathematical entanglement of energy and time also results in the uncertainty principle — it is impossible to define the exact amount of energy during any definite time interval. The uncertainty principle should not be confused with energy conservation; rather, it provides mathematical limits to which energy can in principle be defined and measured. It seems to me that as a wave of (electromagnetic) energy a photon travels at c, then changes to a particle upon observation or interaction. Everything I read leads to this conclusion.
41. Sorry, I just saw this quote in one of the links you gave me. Your quotes: "light waves (and the waves of any relativistic field satisfying the relativistic Class 0 equation) move at the speed c"; "the energy stored in the wave is E = (n+1/2) h ν, where h is Planck's constant, which always appears when quantum mechanics is important. In other words, the energy associated with each quantum of oscillation depends only on the frequency of oscillation of the wave, and equals E = h ν (for each additional quantum of oscillation). This relation was first suggested, for light waves specifically, by Einstein, in 1905, in his proposed explanation of the photo-electric effect." Please explain how this does not agree with my statements.
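• On the question of how "a photon's energy is its motion energy" squares with E = hν: the two statements are consistent, and the standard relations of relativity and quantum mechanics show why. For a massless particle, all of its energy is motion energy, E = pc; quantum mechanics assigns the wave a momentum p = h/\lambda; and for any wave moving at speed c, the wavelength and frequency are tied together by c = \lambda\nu. Putting these together,

E = pc = \frac{h}{\lambda}\,c = h\,\frac{c}{\lambda} = h\nu

so saying the energy "depends on the frequency" and saying it is "motion energy" are two descriptions of the same quantity, not competing ones.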
42. Now, for the first time, a new type of experiment has shown light behaving like both a particle and a wave simultaneously, providing a new dimension to the quandary that could help reveal the true nature of light. Depending on which type of experiment is used, light, or any other type of particle, will behave like a particle or like a wave. So far, both aspects of light's nature haven't been observed at the same time. But still, scientists have wondered: does light switch from being a particle to being a wave depending on the circumstance? Or is light always both a particle and a wave simultaneously?
43. You said in this article: "Early in the universe, when the temperature was trillions of degrees and even hotter, the electron was what cosmologists consider radiation. Today, with the universe much cooler, the electron is in the category of matter." Why does "high temperature" make the Higgs (and gluon?) field be "zero" on average early in the universe? Why/how does "temperature" affect these fields? Why does "high temperature" prevent these forces (gluon, Higgs, etc.) acting on particles? And why do these different forces start acting on particles at "different temperatures"?
44. Thank you for your excellent article, and website. I would like to ask: how are "mass energy" and "motion energy" related? Can motion energy ever be converted into mass energy, and/or vice versa? You seem to be saying in your replies to other posts that the E in Einstein's equation E=mc^2 doesn't include motion energy. If not, then why does science use the same word "energy" for both concepts, when to do so leads to confusion?
45. Everything you said about matter, stuff, energy, I agree with. A learner has to create separation between the two concepts in order to learn them. It's just like learning a language: a child will break up a long word into syllables and memorize its pronunciation and meaning that way. And since it is almost impossible to define either one of the terms (matter or energy), why worry about it? Or a layperson might reach the wrong conclusion that physicists in general are afraid of the word "spirit" (pure energy). I meant spirits — like God, for example. Physicists may cringe, but the truth is this "stuff" falls into a category all of its own, and, being ethereal, it cannot be detected, tested or experimented on in the LHC. There is also a big likelihood that this "stuff" disapproves of humans splitting the atom and destroying something he created for the benefit of all humans — the blue planet, third rock from the Sun.
46. Most people do not have a clue as to the things they speak of, nor can they prove them. But they speak nonetheless; it is all theory, opinion, best guesses, until it is proven to be true. And anyone that claims to understand quantum mechanics and the sub-atomic plane is the same person that claims to be wise; anyone can claim to be anything, but it is one thing to "understand" the "truth", and another thing to guess about it. It is fun to read other people's opinions, but it would be better to read the truths — those, however, are yet hidden. Even though much is understood, he that speaks as if understanding is a given speaks in circles, because they do not understand the truth. It is OK, professor; no one understands it completely, or has the truth. But your opinions do have merit.
• Are you trying to reassure me? 🙂 No reassurance needed; my knowledge is used to make cell phones, rocket ships and lasers. What are your alternatives used for?
• Cell phones, rocket ships, and lasers are great technologies for the advancement of understanding. But they are each only different manipulations of the same energy that is present in all. A photon of light travels at the speed of light throughout the universe, carrying the information of its source for eternity, unless interaction with another force disrupts or absorbs its energy, such as in vision; if not for this information, your eye could not interpret the origins of said photon to turn that information into a "picture" for your neurons to process.
In order to travel at the speed of light, a photon needs to be a particle, a wave, or both at the same time, and is basically massless; yet it still has the ability to transfer information, as DARPA has shown by transferring information upon photons. As soon as a photon "interacts" upon the human eye, for example, it has to assume the form of a particle in order for the information to be "read" and transported to neurons for interpretation — this biological process does not occur in non-living matter. Also, in order to travel at the speed of light, a photon needs to maintain the massless form of energy in order to reach the speed-of-light threshold, as no particle of mass may reach this threshold; as I am sure you understand completely, during experiments performed using accelerated particles, none of them has achieved a speed 100% equal to the speed of a photon — the reason for that is established. It has been observed in many recent experiments that light (a photon) can be any or all forms at once, or that a photon can change forms as needed in its travels. Decay is the key, for photons have been collected by telescopes after traveling billions of light years across space and time, and there are no particles that can achieve this velocity unless they change into pure energy to achieve this speed. Experiments at CERN have even held this constant to be true. So for you to say that light is not pure energy is disputed by many recent experiments, and by many old experiments…
• Wow! Yeah, why listen at all? Especially when you have everything figured out in your own head and can just vomit streams of metaphysics all over the comment section. No wonder the good professor stopped responding to you.
47. I know you use "mass" to mean rest mass, but aren't there some cases where we really need to talk about the mass corresponding to the total energy of a system? Like the fact that a system as a whole is literally heavier (has greater mass, as one can measure by the inertia or gravitation of the system as a whole) when ΔE of *any* type is added to it (without allowing any energy to escape, of course): let in some light, add heat, set a top spinning that was at rest before, compress a spring, pull positively and negatively charged objects further away from one another within the system, etc. I'm assuming that we're examining this system in a single inertial frame in which the system as a whole is at rest, with no external forces or fields, so we don't have to think about the kinetic or potential energy of the system as a body itself within its environment. The extra mass we'll measure will be given of course by ΔE/c^2, but here the E and m both refer to the total energy of the system.
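• A quick number shows the size of this effect — this is just \Delta E/c^2 evaluated for an illustrative energy input, with the 100 joules chosen arbitrarily. Adding 100 J of heat to a closed box at rest increases its mass by

\Delta m = \frac{\Delta E}{c^2} = \frac{100\ \mathrm{J}}{\left(3\times 10^{8}\ \mathrm{m/s}\right)^2} \approx 1.1\times 10^{-15}\ \mathrm{kg}

about a trillionth of a gram — far too small for any ordinary scale to detect, which is why in everyday life the mass of a system seems to be just the sum of the masses of its parts, even though strictly it is not.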
48. Thank you for a solid blog on an important subject. I'm 65 and have a question based on the above-mentioned "pre-1973" science era I grew up in. In today's physics, is the fundamental essence of the Universe still considered to be Space, Time, and Matter, or, as I gather from the above, something more akin to one of the following:
A. Space, Time, Energy
B. Space-Time, Matter-Energy
C. Energy (when stretched out = Space, when condensed = Matter, and otherwise simply Energy)
D. None of the above
49. Excellent article, thank you Matt! I think the abuse of terminology is common to layman descriptions of all technical fields, and is not a prerogative of physics. Part of it comes from public relations (try explaining your discovery to a reporter and — what is way more difficult — making sure it is published without inaccuracies!), and part from linguistic inertia. I think as the boldest example of the latter we should recall the fact that in numerous languages (less so in English, though) the word "ether" is used to indicate the broadcast of a radio or TV program. "In ether" is used in these languages as a synonym for "on air". This archaic heritage of the long-obsolete luminiferous-ether theory still remains in our terminology, and even modern network technology that has nothing to do with ether bears the name of "Ethernet".
50. Hi Matt, below I quote two things that you have said about dark energy and ask for a bit more explanation. "Dark energy is a property that fields, or combinations of fields, can have." If there is one thing you've got into my head, it is the idea of "fields and ripples": a field can have a ripple — an object (stuff) like an electron or a photon. "Dark energy isn't an object or a set of objects." Do you say this because the effect in the field is more subtle, like the disturbance in space-time that gives us the gravitational field (stuff, but not defined enough to be called an object)? And if gravity can be visualised as a heavy ball on a trampoline, could dark energy be visualised as something bending its field(s) to make a hill, as opposed to gravity's valley?
51. Matt, you say "it happens that every known field has a known particle, except possibly the Higgs field (whose particle is not yet certain to exist, …". I had always thought that evidence for the existence of the Higgs field and that of the Higgs particle were more or less on an equal footing — in fact, that acceptance of the Higgs field's existence was predicated on detection of the Higgs particle. But your statement above suggests to me that even if a Higgs particle were not to be found (or somehow proved not to exist), then the prevailing point of view would be that the Higgs field exists nonetheless, only having no minimum eigenstate. Is this slight (and very respectful) criticism anywhere close to accurate? I feel blessed to have stumbled upon your site, and will be an ardent reader of your posts/articles. You have an extraordinary talent for expressing complex ideas in a clear, yet scientifically responsible, manner. This is very, very rare. – Doc
52. A photon is an eternal carrier of information that can transfer and receive energy through contact and interaction upon various fields of mass. This ability can best be represented as you gaze upon a full moon at night: you do see the moonlight shining, and you do see the moon, but in reality you are seeing photons that came from the Sun, interacted upon the mass field of the moon, and had the information of this mass field imprinted upon them; then, as they enter your optic nerves, the information your neurons receive is that of the physical image of the moon, shining from photons that originated inside the Sun. The eternal energy and motion of these photons is lost as each transfers its energy into the organic neural/electrical network, becomes a particle, transfers its energy (information), and ceases to exist in its previous form.
Photons have been captured by telescopes after traveling billions of light years across space and time, yet they retain their speed, only slightly influenced by the gravitational pull of fields of mass, and they retain the information of their source of origin, and can receive constant updates to this information as they interact upon various fields of mass — sometimes exchanging their energy, and sometimes transferring their energy to another medium. This vast universal communication system is only recently being discovered, as quantum mechanics and physics relate it to the very method used by the brain to communicate along vast and complicated neural networks. So, as above, then so below. Energy is eternal; it cannot be created nor destroyed, only transferred between fields of mass…
53. Also, it is my opinion that dark energy is a result of entropy caused by the transfer of energy not being 100% efficient during the energy-transfer process. And as the universe has expanded and aged, the entropy has increased, and that has caused an increase in dark energy that is in a chaotic state. This "dark energy" then behaves as regular energy, but instead creates dark matter as a result of its chaotic state. Energy in is greater than energy lost in the most efficient energy systems, and this loss/entropy is the reason for the related increase and amount of dark energy in the universe, and the amount shall ever increase in relation to entropy due to inefficiency. Just my opinion, though…
54. Well Sir, after reading all this "stuff" — of course not that stuff which you have been mentioning — I've got a feeling that some of my doubts can be cleared by this well-learned friend of mine, namely Mr. Matt. The following are my doubts, Sir; please address them:
1. Is light a form of energy, as everyone is thinking as of now?
2. If that happens to be right, is the energy incident on Earth's surface from the Sun in the form of sunlight?
3. If it is only partially right, what are the other forms in which energy is being conveyed to Earth from the Sun?
Based on the answers to these queries there may be further observations you have to answer, Sir. Please reply.
55. Sorry, but I haven't read the full website… can you succinctly put everything in a simple short phrase? Would it be fair to say that everything is energy, and it manifests in only two ways: matter and force fields?
56. Hello Professor Strassler. In your post you say that energy is something that objects have — that energy is not an object itself. But I have heard string theorists like Brian Greene say on science TV shows that if string theory is true then everything is made of strings. And what are these strings? He says they are strings of energy; he says these strings are made of energy. But if strings can be made of energy, wouldn't that mean energy is stuff? And wouldn't that mean matter is energy at the fundamental level? To hear Brian Greene say this, watch these short videos:
57. If energy is neither created nor destroyed, then it is neither present nor absent in a particular space and time. Energy and matter are one and the same. There is nothing called dark matter and dark energy. All divisions are like dreams. For the mind, time is understanding through thought/memory. The mind cannot differentiate between dream and reality because it functions by continuous reciprocal causation. One cannot prove in unreal (dream-like) time and space.
58.
Not sure if this will make sense to someone with knowledge, but when the big bang happened you had pure energy and massive temperatures; as the universe expanded, it cooled and turned into matter. Could dark energy be the remaining energy and temperature in the universe wanting to turn into matter? Also, when the universe started it was extremely hot; it has cooled so much — why did it stop cooling? Why did it stop just above absolute zero in deep space? These are probably stupid questions; I just thought I'd try to better understand something that interests me.
59. Thank you, that was very informative and has given me a lot to think about. I just don't understand how light speed, or faster-than-light speed, was achieved in the first few seconds of the start, in whatever state of energy or near-mass everything was :_)
60. Is it possible for energy to act within and upon itself? Is it at least possible that consciousness is a form of energy? Or are there reasons it cannot be?
61. def(matter) by Google: "physical substance in general, (in physics) that which occupies space and possesses rest mass, especially as distinct from energy." It all depends on your perspective. I meant matter in general, not A specific bit. I also meant that when matter takes a form, that form IS a result of energy. So, for example, the form of a balloon, and the material components of the balloon, are shaped by the forces acting on it and them. "…especially as distinct from energy." Matter IS distinct from energy, but is not separate from energy, since no matter can exist without energy. Just to emphasize the ambiguity of language, your phrase "matter is A form that energy CAN take" implies that energy can take other forms. Of course, you can say that energy takes different forms, such as EM and gravitational, taking "form" in a different sense. When you experience matter, EM energy, and gravity, are you experiencing energy? What about mass? When you perceive mass, are you perceiving energy? Aren't matter and mass simply phantasms composed of energy? In fact, it may be that energy is all there is. : )
62. Not only "may be that energy is all there is" — energy IS all there is, in its different forms (force fields and "matter").
63. And now I can say: consciousness is aware-ized energy, and all energy is aware-ized. Of course, only subjective experience can tell. I think Matt is missing out on that part. Science has bound up the minds of even its own most original thinkers, like Matt, for they dare not stray from certain scientific principles. When his private experience of himself does not correlate with what he is told by science, then he may become familiarized with the roots of ego consciousness. This will be done under the direction of an enlightened and expanding egotistical awareness. Then he could use his talents to organize the hereto neglected knowledge. Not a judgement, just an observation.
64. Incredible story and string — and I always thought it was only we economists who were all screwed up with "on the one hand, this" but "on the other hand, that", "but then again maybe it's…" and so on and on. How refreshing.
65. Is consciousness matter or energy, or some combination? Well, actually, you don't know. So I will tell you. Consciousness is aware-ized energy, and all energy is aware-ized, whether you believe me or not. : ) That statement is scientific heresy. Subjectivity cannot be demonstrated within the context of current science. How do I know? I know by experiencing my own consciousness in many ways.
Learn by doing.
67. Do fields extend endlessly across or through the known universe? Can any distinction be made between fields and their particles on the one hand, and the phenomenal world we actually experience by way of our sense modalities on the other? When you suggest that "objects" have (energy), are you arguing for "entities" that have an independent, autonomous existence and an intrinsic identity? I haven't encountered such as yet, at least not in the phenomenal sense, wherein a thing exists ONLY in relation to other things. – Cal
68. Recalling my high-school physics, where the teacher would tell us that the electron quantum-jumps between orbits: where does it go?
69. Two words regarding the matter/energy equivalence debate: "conserved quantities"… energy and matter are interchangeable. There is no dichotomy. Matter is an observed point particle at a specific place and time within a collapsed wave function — just an observed probability. We know that where there is energy we can observe matter popping in and out of existence (for want of a better word) ad nauseam. In my mind, the day will come when we realise that scale symmetry will clarify unequivocally a fundamental view of "what stuff is". We will drop the idea of supersymmetry and accept that matter having physical dimensions — e.g. the current zoo of point particles we observe — is just a misleading by-product of fundamental symmetry breaking. Last month, at the International Conference on High Energy Physics in Valencia, Spain, researchers analysing the largest data set yet from the LHC said, and I quote, "we have found no evidence of supersymmetric particles". Yes, you can always say that at higher energies we can expect to see a super-massive primordial particle beyond the combined 14 TeV the LHC can pump out; now that we have 5-sigma results on the Higgs, what's next? I will retract this when I see a 5-sigma result on an electron/selectron pair stopping the Higgs boson mass inflating exponentially being observed… To quote The Matrix, "there is no spoon" — but again, in my mind there is (at the scales we observe, from the observable universe down to the Planck length) a bloody good 4D observable representation of that spoon, gaining mass from our old friend Mr H. Boson… or, if you're an M-theorist, an 11/6D one, bent Uri Geller-like through an unobservable Calabi-Yau manifold ;o) Or, to put it more accurately: the energy that constitutes it appears, on our narrow scale of observation, to be a spoon, because the interaction it has with the Higgs field ascribes mass to it on our scale… please understand my tongue is placed firmly in my cheek here… ;o) Thanks for the article — I enjoyed it and the comments below it very much…
70. My site has a lot of exclusive content I've either written myself or paid to have written, and I'd like to keep that content from being stolen. Any suggestions? I'd genuinely appreciate it.
71. I knew someone who had exactly this problem; in fact, I myself discovered the problem while searching the net for a particular literary piece he had written. I put several keywords into a search engine, and lo and behold, I was directed to another site. Worse still, the owner of this other site was passing off his intellectual property as her own. He had a lawyer draw up a cease-and-desist order (really just a letter), and the plagiarized material was gone within the next 48 hours. I'm not sure an actual lawyer is necessary, since you could probably draw up a convincing letter yourself and self-promote to "Esquire" 😉
73. Polarity is required for thinking. So, for example, if you don't know what up is, you don't know what down is; if there is no up, there is no down. Energy has been a difficult concept to define because no one knows its polar opposite. Feynman said that science has no idea what energy is, and that makes sense, because there is no polar opposite to energy. I suggest science and philosophy will have the same problem with objectivity and subjectivity. Like energy, science, by its own requirements, cannot define objectivity, because there is no polar opposite. Subjectivity, and hence consciousness, can only be defined by science in objective terms. According to current science, as I understand it, subjectivity is only a kind of objectivity we do not understand yet. Hence, since there is no real polar opposite to objectivity, objectivity cannot be defined — you know, just like "consciousness", just like "energy".
74. Sorry everyone, but you are overcomplicating! Consider: particles are constricted energy; energy seen is variation in a dark-matter energy field. But what is energy?
75. Hello, I agree with the cringing feeling you mentioned: I wish there was a cleanly defined ontology for physics too, as there are many for other scientific fields. I also agree that the dichotomy matter/energy is inappropriate (a correct one would rather be mass/energy, as they can be transformed into each other, right?), as matter is "stuff" and mass or energy are properties of that stuff. But then another dichotomy would be stuff/void, or matter/spacetime, right?
• OK, no replies… Maybe ontologies are out of scope here! But that's strange, since both "here" (the web) and ontologies are by Tim Berners-Lee, a physicist… Anyway, to strictly speak your language and talk about things you know/care about: the below is what I wondered about when I studied (as a hobby) Susskind's "Special Relativity and Electrodynamics". So, just take SR's invariant (a metric!): dτ^2 = dt^2 − dx^2 − dy^2 − dz^2, then divide it by dτ^2, as Susskind does in one of those lectures. You get an equation on the components of the four-velocity, but what does it say? It says that the square of the time component of the 4-velocity, minus the sum of the squares of the spatial components of the 4-velocity, is always equal to one. Now, what does this mean/imply? Let's take "me", or any "material thing": I am always "at rest" in my own frame of reference, right? So the spatial components of my 4-velocity shall all be 0, right? That then means the time component shall be 1, right? I, in my own frame of reference, have "some kind of motion" in time; indeed I see it "passing by on my clock", it's "moving" with respect to me and vice versa: the time component of my 4-velocity is 1! To me, that is basically equivalent to saying "I 'keep existing' as time goes by". But there are not just things like "me", not just (massive) "particles", out there, right? There also are things for which "time does not pass" and "all space is compressed in the direction of motion", so that they cannot have a frame of reference of their own… pretty weird, right? If they could have a reference frame of their own, they would say about themselves that the time component of their 4-velocity is 0 and their space components squared sum up to −1, so if you take the motion along one single space component, their speed is i = sqrt(−1)… too weird, right? BTW: to me, this is pretty much the same thing that happens in electromagnetism with the components of the 4-current — take the time component: we call it charge density; take the space components: we call them currents. Why do we do so? To me, it's us introducing this dichotomy; nature hasn't (this one).
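• For anyone following along, the algebra in the comment above checks out (in units where c = 1). Dividing the invariant dτ^2 = dt^2 − dx^2 − dy^2 − dz^2 through by dτ^2 gives the normalization of the 4-velocity:

\left(\frac{dt}{d\tau}\right)^2 - \left(\frac{dx}{d\tau}\right)^2 - \left(\frac{dy}{d\tau}\right)^2 - \left(\frac{dz}{d\tau}\right)^2 = 1

For an object at rest, the spatial terms vanish and dt/dτ = 1, exactly as described. The "too weird" case at the end is the standard reason one says light has no rest frame: for anything moving at speed c, dτ = 0, so τ cannot be used as a parameter along the path at all — physicists use a different ("affine") parameter instead — rather than the speed becoming imaginary.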
76. Hello Dr. Strassler, I love your site and the scientific facts and knowledge you share. I would like you to please explain a phenomenon I have been interested in for many years: what causes matter to become matter? In psychological language (Jungian and others), it is the anima (the feminine aspect) that seduces the masculine aspect to become matter. Okay, I can understand this. But what is the equivalent scientific explanation, for lay people like me? Very complicated indeed. I would very much appreciate your reply.
77. I believe we are missing an equation to satisfy a problem. Matter to energy must have something quantifiable to keep a constant. Since energy cannot be created nor destroyed, we are left with doing the math to convert both in a dance of harmony. Dark matter is the gorilla in the room. Unless we can move past the speed of light, we can never know.
78. Dr. Strassler: As a novice and a much older high-school dropout, any of the sciences leaves me in shambles. However, I do ask many questions, which usually get me in trouble. So here goes. Might there be an antithesis of energy at the very core of this universe, growing exponentially as our material universe dies an ever-speedier death of attrition? While modern science tells me our universe has no center, I have my doubts. I am hoping this may turn into a good and constructive discussion. Thanks, LeftyR
79. Hi, to relieve your mind of the contradiction and ambiguity 🙂 I can tell you this: first of all, this dichotomy clearly originates from before the early twentieth century, when the dual nature of light — and, with that, of all matter — was not yet apparent. I mean, from before the de Broglie equation. This dichotomy of matter and energy actually originates more from chemists than from physicists. In chemistry we use it primarily to define the traditional study of chemistry, as opposed to the study of physics. And we frequently say: chemistry is the study of matter — properties, changes, etc. — and physics is the study of everything else (energy). When we say matter, we mean atomic matter. This is not accurate, and modern chemistry and physics overlap quite a lot, but what we mean when we say this is:
properties of a substance -> properties of matter -> chemistry
a reaction between two substances -> changes of matter -> chemistry
energy changes during a reaction -> iffy, could be physics or chemistry, but chemists study it differently
intermolecular and intramolecular forces -> same as above
other forces -> physics
the electromagnetic spectrum -> physics
subatomic particles -> physics
80. Very helpful. I landed here with a question. Assuming the properties of only a gravitational interaction, why not just call it something like unbound gravity? I mean, we won't ever see anything else there.
81. Thank you. But I had a question. According to your article: what is energy in a field? Its amplitude, or its wave velocity (I guess this would be c)?
82. Let me begin by saying that every atom in the entire universe has one intrinsic characteristic, and that is magnetism.
No matter how thin we slice or dice it, the atom — even down to what we believe is its lowest common denominator, quarks — has one thing in common: spin, angular momentum or magnetism. Is it possible that energy itself is derived from something below this level, or from the way atoms get bounced around?
83. As a novice, in trying to make sense of matter vs. energy, I have read several papers on the subject and concluded that the word "energy" is nothing more than a way of describing an action and reaction on mass/matter. An example is two iron balls, of 5 lb and 10 lb respectively, hung at equal heights in a vacuum. Dropped simultaneously, both will hit the ground at the same time, with one exception: the heavier ball will dig a deeper hole. And why? Because of the "ENERGY" generated by the weight of the mass.
84. Hi, I'm in my late 50's and have taken a fascination with science, and am thoroughly enjoying learning about it. I found your article certainly helpful, although much was over my head. I have saved it and will re-read it when I understand more. Thank you.
85. Matt, I stumbled onto this post searching for publications on this topic. I find much of your presentation with respect to matter and energy agreeable. We have recently published a short article on this topic, and perhaps should have used the term "radiation", as you did, rather than "radiant energy" as we wrote. Don't hold your breath expecting solid evidence for "dark energy" or "dark matter".
86. Hello ALL, I bumped into this article and matter-energy blog just on this Thanksgiving Day. Happy Thanksgiving to all. I am searching for particles/energy that explain physical and spiritual dimensions. Is the Higgs field/boson the fundamental physical particle representing the Cosmos, and are we still after the Holy Grail, a Spiritual/Atman particle? I am convinced that the Higgs field explains the material construction.
87. My question is for Mr Matt Strassler. You say there are fields without mass, but no mass without a field. What is the basic thing from which a field obtains mass, and also from which a field obtains energy? Ultimately we cannot ignore E=mc^2, which means mass and energy are convertible. Also, I could not understand the proper relation between mc^2 and kT. How are they related if, say, the velocity of the particle (or field?) is v? Is there a mathematical formula for this? I also have a question on the concept of a "point of singularity". All the fundamental particles which form into "matter" have some basic dimension, however small it may be, which is why they have mass (leaving aside the particles with zero mass, as matter cannot be formed from particles of zero mass alone). Therefore, even if we reduce particles to such a level, a physical space would be available for the particle. Therefore, I conclude that there may not be any point of singularity at all — it would have some minimum space consideration, which is what we may assume as a "singularity", although it will not be the same as a mathematical singularity. So it may happen that our very assumption that the big bang happened from a singularity is not a valid assumption. However, supposing such a point of singularity even to exist, how does one visualize the existence of energy in a concept of singularity?
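• On the mc^2 versus kT question just above: the two are compared rather than converted. kT is the typical random thermal energy available per particle at temperature T, while mc^2 is the energy scale set by a particle's mass; comparing them tells you whether a particle moves ultra-relativistically ("radiation") or slowly ("matter") — the distinction quoted earlier from the article about electrons in the early universe. A rough back-of-envelope number for the electron (m_e c^2 = 0.511 MeV):

T \sim \frac{m_e c^2}{k} = \frac{8.2\times 10^{-14}\ \mathrm{J}}{1.4\times 10^{-23}\ \mathrm{J/K}} \approx 6\times 10^{9}\ \mathrm{K}

Far above roughly this temperature, kT ≫ m_e c^2 and electrons behave like radiation; far below it, they behave like matter.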
88. As a developer, I'd say that:
– mass is an "excess" of position information (stuff moving in a stationary way)
– energy is an "excess" of speed/momentum information (stuff moving "freely")
89. You might find this interesting to think about: "the universe being made BY energy" instead of "being made OF energy". This allows for the creation of non-physical properties such as space-time. Try to start by thinking of energy as a wave, and a particle as the work being done, where the work being done is turning nothing into something.
90. Try thinking of energy in an evolutionary way, maybe something like this: energy starts with zero complexity, then increases in complexity until it finds symmetry with entropy, thus returning energy to zero complexity. Then you can map complexity on an evolutionary tree. It soon becomes clear that gravity is the least complex form of all energy, and maybe the source of all energy, from a zero-point singularity formed by the single characteristic of nothing. "You cannot separate objects without space-time." A zero-point singularity is inevitable in a zero dimension.
91. Start with the least complexity: space-time. How might energy create it? It should answer, in very simple terms, why the speed of light is constant whether traveling toward or away from the source. Then understand the function of spin and you have it. So simple.
92. One last question. Is it time or distance that dilates at velocity? All quantum weirdness should vanish if you get all the above right. The end result could be not a big bang but the continuous bang of a "white hole", and not a big crunch but the continuous crunch of entering a zero dimension, a "black hole".
93. How does one explain to children the difference between solids, liquids, kinetic energy, potential energy… but, oh, by the way, everything we know about matter vs. energy is bogus? How should one approach educating children about the distinction between matter and energy, or the types of energy and matter, if this is an incorrect dichotomy? For example, how would I describe the basic state of a car vs. the sun, or electricity, or sound, or motion, etc., now that doing so will only lead ultimately to misleading information — and given that our basic knowledge regarding energy and matter is a false paradigm? Thanks
94. Energy is non-matter; as matter possesses energy and energy does not possess matter, they are two different sides of a single coin. And do not get confused about photon particles: photon particles are stuff of no mass and infinite energy… their mass converts into energy… Thanks… by Abhinav… a high-school student… India… U.P… Alld… Ghoorpur
96. Can one describe energy as a measure of the matter which acts on a system to put it back into its balanced state? And if so, would energy be undetectable without matter?
97. "All particles are ripples in fields and have energy…" As it stands, this statement is blather. A field of what? A ripple caused by what? One can substitute "strings" for "fields" in this statement and many a Ph.D. physicist might agree. Yet both versions are without any proof; see Lee Smolin, "The Trouble with Physics".
98. Energy, as in photons, always moves at light speed. Matter cannot move at light speed. How can they be two different sides of the same coin? Sure, matter has energy inherent in it, as in E=mc^2, but I think if you release all that energy, the matter is still there. Dead matter, so to speak: atomic dust, if you will, with no motion and no charge, so probably impossible to detect. Photons, if anything, are a wave. This always moving at light speed suggests they are not a sphere, since no part is going to move faster or slower than light speed, whatever happens to it. It is not going to rotate.
The photon has two speed components. It moves at light speed in the direction of travel, and moves from side to side, so giving us wavelength and frequency, which seems to work out at one third of light speed (a one-metre wavelength gives a frequency of 10^8 Hz, so one hundred million metres per second). The more energy a photon has, the smaller its wavelength, so the side-to-side movement in turn becomes faster. The constant here gives us that effect: what it adds to the speed, the frequency, it must take from the width, the wavelength. I know what I mean, but I am not good at explaining this.
• Isn't frequency simply the duration of the wave? And don't photons have different frequencies? As in visible light versus microwaves?
• Frequency is the number of waves that pass through a fixed point over a given unit of time (so not a time measurement per se). And yes, photons have different frequencies.
Matter has the following fundamental properties:
101. I came across this discussion because I was looking up the word "thing". To me, a thing is something that I can either see, or hear, or touch/feel, or taste, or smell. So, aren't matter and energy both things?
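• A closing arithmetic note on the "two speed components" comment above, using only the standard relation for any wave moving at speed c: wavelength and frequency are tied together by c = \lambda\nu, so

\nu = \frac{c}{\lambda} = \frac{3\times 10^{8}\ \mathrm{m/s}}{1\ \mathrm{m}} = 3\times 10^{8}\ \mathrm{Hz}

A one-metre wavelength therefore corresponds to 3×10^8 Hz, not 10^8 Hz, and the product \lambda\nu always comes out to c itself; there is no separate side-to-side speed that trades off against the forward speed.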